Sample records for allowable time step

  1. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  2. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  3. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  4. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.

  5. Enforcing the Courant–Friedrichs–Lewy condition in explicitly conservative local time stepping schemes

    DOE PAGES

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-01-30

    In this study, an optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a condition on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
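
    As an illustration of the neighbor-patch constraint described in the two records above, the following Python sketch (an assumption-laden illustration, not the authors' code) quantizes each patch's CFL-limited time step to a power of two of the smallest step and relaxes neighboring patches to adjacent levels, one common way of enforcing the CFL condition across patch boundaries; the cell sizes, wave speed, and 2:1 level convention are hypothetical.

      import math

      def cfl_dt(dx, speed, cfl=0.5):
          """Locally constrained time step from the CFL condition."""
          return cfl * dx / speed

      def assign_lts_levels(dts, neighbors):
          """Quantize patch steps to dt_min * 2**level and relax until no
          neighboring patches differ by more than one level (2:1 ratio)."""
          dt_min = min(dts)
          levels = [int(math.log2(dt / dt_min)) for dt in dts]
          changed = True
          while changed:
              changed = False
              for i, nbrs in neighbors.items():
                  for j in nbrs:
                      if levels[i] > levels[j] + 1:   # too large a jump
                          levels[i] = levels[j] + 1
                          changed = True
          return [dt_min * 2 ** lv for lv in levels]

      # toy example: three patches in a row with very different cell sizes
      dts = [cfl_dt(dx, speed=1.0) for dx in (0.01, 0.04, 0.16)]
      neighbors = {0: [1], 1: [0, 2], 2: [1]}
      print(assign_lts_levels(dts, neighbors))  # [0.005, 0.01, 0.02]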

  6. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  7. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  8. Implicit method for the computation of unsteady flows on unstructured grids

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, D. J.

    1995-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. Following a finite difference approximation for the time derivative, the resulting nonlinear system of equations is solved at each time step by using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Inviscid and viscous unsteady flows are computed to validate the procedure. The issue of the mass matrix which arises with vertex-centered finite volume schemes is addressed. The present formulation allows the mass matrix to be inverted indirectly. A mesh point movement and reconnection procedure is described that allows the grids to evolve with the motion of bodies. As an example of flow over bodies in relative motion, flow over a multi-element airfoil system undergoing deployment is computed.
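
    The record above does not state the exact temporal discretization; purely as an illustration, one common implicit second-order backward-difference (BDF2) approximation of du/dt = R(u) reads, in LaTeX notation,

      \frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\Delta t} = R(u^{n+1}),

    and a nonlinear system of this kind for u^{n+1} is what would be handed to the agglomeration multigrid solver at each time step, with arbitrarily large time steps permitted by the implicit treatment.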

  9. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  10. The Story of the Calvin Cycle: Bringing Carbon Fixation to Life

    ERIC Educational Resources Information Center

    Firooznia, Fardad

    2007-01-01

    A fun, simple, musical role-playing exercise that allows students to actively step through and visualize the biochemical steps of the Calvin cycle. This musical can easily be completed in about 15 minutes with more than enough time during a 50-minute class period to review the steps and clarify the process further.

  11. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.
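
    A quick arithmetic sketch in Python, with hypothetical numbers, shows why avoiding the acoustic CFL limitation pays off in the low Mach number combustors described above: the convective limit exceeds the acoustic one by roughly a factor of 1/M.

      def convective_dt(dx, u, cfl=1.0):
          # time step limited only by the convective velocity (semi-implicit)
          return cfl * dx / abs(u)

      def acoustic_dt(dx, u, c, cfl=1.0):
          # time step limited by the fastest acoustic wave (fully explicit)
          return cfl * dx / (abs(u) + c)

      dx, u, c = 1e-3, 10.0, 340.0  # 1 mm cell, M ~ 0.03 (illustrative values)
      print(convective_dt(dx, u) / acoustic_dt(dx, u, c))  # ~35x larger step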

  12. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared the informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
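
    For reference, the non-rectangular hyperbola light response referred to above is commonly written in the standard form (the abstract itself does not reproduce the model, so this is the textbook expression, in LaTeX notation)

      GPP = \frac{1}{2\theta}\left[\alpha I + P_{\max} - \sqrt{(\alpha I + P_{\max})^2 - 4\,\theta\,\alpha I P_{\max}}\right],

    where I is incident radiation, \alpha the initial slope of the light response, P_{\max} the asymptotic maximum, and \theta \in (0,1] the degree of curvature that the record mentions.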

  13. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two steps solution approach takes advantage of handling smaller and simpler models and having better starting points to improve solution efficiency. The set of nonlinear constraints (named as complicating constraints) which makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in one single step. In all examples the two steps solution approach allowed a significant reduction of the computation time. This potential gain of efficiency of the two steps solution approach can be extremely important for work in progress and it can be particularly useful for cases where the computation time would be a critical factor for having an optimized solution in due time.
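
    A minimal sketch of the two-step idea with a toy objective and scipy (the authors' conjunctive-use model is far larger; everything below is hypothetical): solve the model without the complicating constraint first, then reuse that solution as the starting point for the complete model.

      from scipy.optimize import minimize

      # toy objective standing in for a large nonlinear water-resources model
      f = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2

      # "complicating" nonlinear constraint, eliminated from step one
      complicating = {"type": "ineq", "fun": lambda x: 4 - x[0] ** 2 - x[1] ** 2}

      x0 = [0.0, 0.0]
      step1 = minimize(f, x0, method="SLSQP")              # simpler model
      step2 = minimize(f, step1.x, method="SLSQP",
                       constraints=[complicating])         # complete model, warm start
      print(step1.x, step2.x)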

  14. Interactive real time flow simulations

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1990-01-01

    An interactive real time flow simulation technique is developed for an unsteady channel flow. A finite-volume algorithm in conjunction with a Runge-Kutta time stepping scheme was developed for two-dimensional Euler equations. A global time step was used to accelerate convergence of steady-state calculations. A raster image generation routine was developed for high speed image transmission which allows the user to have direct interaction with the solution development. In addition to theory and results, the hardware and software requirements are discussed.

  15. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
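
    The fast/slow splitting described above follows the generic multiple-time-step (RESPA-style) pattern; the Python sketch below shows that generic pattern only and is not the fragment-decomposition or range-separation scheme of the paper.

      import numpy as np

      def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
          """One reversible RESPA step: slow forces kick at the outer step,
          fast forces are integrated by velocity Verlet at dt_outer/n_inner."""
          dt = dt_outer / n_inner
          v = v + 0.5 * dt_outer * f_slow(x) / m       # half kick, slow forces
          for _ in range(n_inner):                     # inner loop, fast forces
              v = v + 0.5 * dt * f_fast(x) / m
              x = x + dt * v
              v = v + 0.5 * dt * f_fast(x) / m
          v = v + 0.5 * dt_outer * f_slow(x) / m       # half kick, slow forces
          return x, v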

  16. On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.

    PubMed

    Louarroudi, E; Sanchez, B

    2017-02-01

    When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time-invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while measuring accurately non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.
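
    The paper's own expressions are not reproduced in the abstract; purely as a hedged illustration of the compromise it describes, the generic bookkeeping for the experimental time of a stepped-sine sweep is sketched below in Python (the periods-per-frequency and settling values are hypothetical).

      def stepped_sine_time(freqs_hz, periods_per_freq=5, settle_periods=2):
          # at each frequency: wait a few periods for transients to settle,
          # then record; total time is dominated by the lowest frequencies
          return sum((settle_periods + periods_per_freq) / f for f in freqs_hz)

      freqs = [10, 100, 1_000, 10_000, 100_000]  # hypothetical bioimpedance sweep
      print(stepped_sine_time(freqs))            # seconds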

  17. A new one-step procedure for pulmonary valve implantation of the melody valve: Simultaneous prestenting and valve implantation.

    PubMed

    Boudjemline, Younes

    2018-01-01

    To describe a new modification, the one-step procedure, that allows interventionists to prestent and implant a Melody valve simultaneously. Percutaneous pulmonary valve implantation (PPVI) is the standard of care for managing patients with dysfunctional right ventricular outflow tract, and the approach is standardized. Patients undergoing PPVI using the one-step procedure were identified in our database. Procedural data and radiation exposure were compared to those in a matched group of patients who underwent PPVI using the conventional two-step procedure. Between January 2016 and January 2017, PPVI was performed in 27 patients (median age/range, 19.1/10-55 years) using the one-step procedure involving manual crimping of one to three bare metal stents over the Melody valve. The stent and Melody valve were delivered successfully using the Ensemble delivery system. No complications occurred. All patients had excellent hemodynamic results (median/range post-PPVI right ventricular to pulmonary artery gradient, 9/0-20 mmHg). Valve function was excellent. Median procedural and fluoroscopic times were 56 and 10.2 min, respectively, which significantly differed from those of the two-step procedure group. Similarly, the dose area product (DAP) and radiation time were statistically lower in the one-step group than in the two-step group (P < 0.001 for all variables). After a median follow-up of 8 months (range, 3-14.7), no patient underwent reintervention, and no device dysfunction was observed. The one-step procedure is a safe modification that allows interventionists to prestent and implant the Melody valve simultaneously. It significantly reduces procedural and fluoroscopic times, and radiation exposure. © 2017 Wiley Periodicals, Inc.

  18. A local time stepping algorithm for GPU-accelerated 2D shallow water models

    NASA Astrophysics Data System (ADS)

    Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo

    2018-01-01

    In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.

  19. The "Motor" in Implicit Motor Sequence Learning: A Foot-stepping Serial Reaction Time Task.

    PubMed

    Du, Yue; Clark, Jane E

    2018-05-03

    This protocol describes a modified serial reaction time (SRT) task used to study implicit motor sequence learning. Unlike the classic SRT task that involves finger-pressing movements while sitting, the modified SRT task requires participants to step with both feet while maintaining a standing posture. This stepping task necessitates whole body actions that impose postural challenges. The foot-stepping task complements the classic SRT task in several ways. The foot-stepping SRT task is a better proxy for the daily activities that require ongoing postural control, and thus may help us better understand sequence learning in real-life situations. In addition, response time serves as an indicator of sequence learning in the classic SRT task, but it is unclear whether response time, reaction time (RT) representing mental process, or movement time (MT) reflecting the movement itself, is a key player in motor sequence learning. The foot-stepping SRT task allows researchers to disentangle response time into RT and MT, which may clarify how motor planning and movement execution are involved in sequence learning. Lastly, postural control and cognition are interactively related, but little is known about how postural control interacts with learning motor sequences. With a motion capture system, the movement of the whole body (e.g., the center of mass (COM)) can be recorded. Such measures allow us to reveal the dynamic processes underlying discrete responses measured by RT and MT, and may aid in elucidating the relationship between postural control and the explicit and implicit processes involved in sequence learning. Details of the experimental set-up, procedure, and data processing are described. The representative data are adopted from one of our previous studies. Results are related to response time, RT, and MT, as well as the relationship between the anticipatory postural response and the explicit processes involved in implicit motor sequence learning.
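
    The decomposition of response time described above is simple to state in code; the sketch below (hypothetical timestamps, not the authors' pipeline) parses one foot-stepping response into its reaction-time and movement-time parts.

      def parse_response(stimulus_t, liftoff_t, contact_t):
          """Split response time into RT (mental process, stimulus to foot
          liftoff) and MT (movement execution, liftoff to target contact)."""
          rt = liftoff_t - stimulus_t
          mt = contact_t - liftoff_t
          return rt, mt, rt + mt   # response time = RT + MT

      print(parse_response(0.000, 0.380, 0.720))  # -> (0.38, 0.34, 0.72)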

  20. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    PubMed

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step were different between 'fast' vs. 'slow' sub-groups (η² ≥ 0.22). Nevertheless overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective condition. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.

  21. EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
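
    The stability argument above can be reproduced in a few lines. A Hodgkin-Huxley gating variable obeys dx/dt = (x_inf - x)/tau, for which first-order ETD is the exponential Euler update; treating the linear part exactly keeps the update bounded even when the time step far exceeds tau. The numbers below are illustrative only.

      import numpy as np

      def gate_exponential_euler(x, x_inf, tau, dt):
          # first-order ETD: exact for the linear relaxation toward x_inf
          return x_inf + (x - x_inf) * np.exp(-dt / tau)

      def gate_forward_euler(x, x_inf, tau, dt):
          # standard explicit Euler: unstable once dt > 2*tau
          return x + dt * (x_inf - x) / tau

      x, x_inf, tau, dt = 0.0, 1.0, 0.1, 1.0            # dt = 10*tau (ms)
      print(gate_exponential_euler(x, x_inf, tau, dt))  # ~0.99995, stays in [0, 1]
      print(gate_forward_euler(x, x_inf, tau, dt))      # 10.0, blows up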

  22. Melatonin: a universal time messenger.

    PubMed

    Erren, Thomas C; Reiter, Russel J

    2015-01-01

    Temporal organization plays a key role in humans, and presumably all species on Earth. A core building block of the chronobiological architecture is the master clock, located in the suprachiasmatic nuclei [SCN], which organizes "when" things happen in sub-cellular biochemistry, cells, organs and organisms, including humans. Conceptually, time messaging should follow a 5-step cascade. While abundant evidence suggests how steps 1 through 4 work, step 5 of "how is central time information transmitted throughout the body?" awaits elucidation. Step 1: Light provides information on environmental (external) time; Step 2: Ocular interfaces between light and biological (internal) time are intrinsically photosensitive retinal ganglion cells [ipRGCs] and rods and cones; Step 3: Via the retinohypothalamic tract, external time information reaches the light-dependent master clock in the brain, viz the SCN; Step 4: The SCN translate environmental time information into biological time and distribute this information to numerous brain structures via a melanopsin-based network. Step 5: Melatonin, we propose, transmits, or is a messenger of, internal time information to all parts of the body to allow temporal organization which is orchestrated by the SCN. Key reasons why we expect melatonin to have such a role include: First, melatonin, as the chemical expression of darkness, is centrally involved in time- and timing-related processes such as encoding clock and calendar information in the brain; Second, melatonin travels throughout the body without limits and is thus a ubiquitous molecule. The chemical conservation of melatonin in all tested species could make this molecule a candidate for a universal time messenger, possibly constituting a legacy of an all-embracing evolutionary history.

  23. Facilities Planning for Small Colleges.

    ERIC Educational Resources Information Center

    O'Neill, Joseph P.; And Others

    This second publication in a three-part series called "Alternative Futures" is essentially a workbook that, followed step by step, allows a college to see how its use of space has changed over time. Especially designed for small colleges, the kit makes use of the information that is routinely collected, such as annual financial statements and…

  24. An Analysis of a Puff Dispersion Model for a Coastal Region.

    DTIC Science & Technology

    1982-06-01

    grid is determined by computing their movement for a finite time step using a measured wind field. The growth and buoyancy of the puffs are computed...advection step. The grid concentrations can be allowed to accumulate or simply be updated with the latest instantaneous value. A minimum grid concentration

  25. Dokan Hydropower Reservoir Operation under Stochastic Conditions as Regards the Inflows and the Energy Demands

    NASA Astrophysics Data System (ADS)

    Izat Rashed, Ghamgeen

    2018-03-01

    This paper presented a way of obtaining operating rules on time steps for the management of a large reservoir with a peak hydropower plant associated to it. The rules were allowed to take the form of non-linear regression equations linking a decision variable (here the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considered the Dokan hydroelectric development, KR-Iraq, for which operation data are available. It was shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model minimizing the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The operating rules obtained allow efficient and safe management of the operation, provided the forecast of the inflow and of the energy demand for the next time step is sufficiently accurate.

  26. Operation of Dokan Reservoir under Stochastic Conditions as Regards the Inflows and the Energy Demands

    NASA Astrophysics Data System (ADS)

    Rashed, G. I.

    2018-02-01

    This paper presented a way of obtaining operating rules on time steps for the management of a large reservoir with a peak hydropower plant associated to it. The rules were allowed to take the form of non-linear regression equations linking a decision variable (here the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considered the Dokan hydroelectric development, KR-Iraq, for which operation data are available. It was shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model minimizing the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The operating rules obtained allow efficient and safe management of the operation, provided the forecast of the inflow and of the energy demand for the next time step is sufficiently accurate.
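
    As a hedged sketch of the final regression step described in the two records above (the abstracts do not spell out the regressors, so the form below is a guess for illustration only), an operating rule could be fitted to DP-optimized scenario data by least squares in Python:

      import numpy as np

      def fit_rule(v_start, inflow, demand, v_end):
          """Fit V_end ~ a0 + a1*V_start + a2*inflow + a3*demand + a4*inflow**2
          (hypothetical regressors) to scenarios produced by the DP model."""
          X = np.column_stack([np.ones_like(v_start), v_start, inflow,
                               demand, inflow ** 2])
          coef, *_ = np.linalg.lstsq(X, v_end, rcond=None)
          return coef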

  27. Quantization of charged fields in the presence of critical potential steps

    NASA Astrophysics Data System (ADS)

    Gavrilov, S. P.; Gitman, D. M.

    2016-02-01

    QED with strong external backgrounds that can create particles from the vacuum is well developed for the so-called t-electric potential steps, which are time-dependent external electric fields that are switched on and off at some time instants. However, there exist many physically interesting situations where external backgrounds do not switch off at the time infinity. E.g., these are time-independent nonuniform electric fields that are concentrated in restricted space areas. The latter backgrounds represent a kind of spatial x-electric potential steps for charged particles. They can also create particles from the vacuum, the Klein paradox being closely related to this process. Approaches elaborated for treating quantum effects in the t-electric potential steps are not directly applicable to the x-electric potential steps and their generalization for x-electric potential steps was not sufficiently developed. We believe that the present work represents a consistent solution of the latter problem. We have considered a canonical quantization of the Dirac and scalar fields with x-electric potential step and have found in- and out-creation and annihilation operators that allow one to have particle interpretation of the physical system under consideration. To identify in- and out-operators we have performed a detailed mathematical and physical analysis of solutions of the relativistic wave equations with an x-electric potential step with subsequent QFT analysis of correctness of such an identification. We elaborated a nonperturbative (in the external field) technique that allows one to calculate all characteristics of zero-order processes, such as scattering, reflection, and electron-positron pair creation, without radiation corrections, and also to calculate Feynman diagrams that describe all characteristics of processes with interaction between the in-, out-particles and photons. These diagrams have formally the usual form, but contain special propagators. Expressions for these propagators in terms of in- and out-solutions are presented. We apply the elaborated approach to two popular exactly solvable cases of x-electric potential steps, namely, to the Sauter potential and to the Klein step.

  28. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.

  29. Adaptive time stepping for fluid-structure interaction solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayr, M.; Wall, W. A.; Gee, M. W.

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
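
    A generic a-posteriori step size controller of the kind described in the two records above can be sketched in a few lines of Python; the error estimator itself, the FSI-specific part, is omitted, and the bounds and safety factor are conventional choices rather than the paper's.

      def adapt_dt(dt, err, tol, order, fmin=0.5, fmax=2.0, safety=0.9):
          """Grow or shrink dt so the estimated local error meets the
          tolerance; `order` is the accuracy order of the time integrator."""
          factor = safety * (tol / max(err, 1e-300)) ** (1.0 / (order + 1))
          return dt * min(fmax, max(fmin, factor))

      # schematic use inside a time loop: reject the step and retry if err > tol
      dt = adapt_dt(dt=1e-3, err=4e-6, tol=1e-6, order=2)  # -> ~5.7e-4
      print(dt)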

  30. 40 CFR 35.2025 - Allowance and advance of allowance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... advance of allowance. (a) Allowance. Step 2+3 and Step 3 grant agreements will include an allowance for facilities planning and design of the project and Step 7 agreements will include an allowance for facility...

  31. The Satellite Test of the Equivalence Principle (STEP)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.

  32. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
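
    For orientation, the Prony series mentioned above represents a relaxation modulus as a decaying exponential sum; the Python helper below evaluates the standard linear form (coefficients are illustrative, and the nonlinear NDEM modification is not shown).

      import numpy as np

      def prony_modulus(t, e_inf, e_i, tau_i):
          """Linear Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
          t = np.asarray(t, dtype=float)[:, None]
          return e_inf + (np.asarray(e_i) * np.exp(-t / np.asarray(tau_i))).sum(axis=1)

      print(prony_modulus([0.0, 1.0, 10.0], 1.0, [0.5, 0.3], [0.1, 5.0]))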

  33. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of higher memory requirement and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change of the dependent variables in two consecutive time steps had fallen below 10⁻⁵.
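
    The stopping test quoted above is straightforward to sketch in Python; `step` stands in for one application of either method.

      import numpy as np

      def march_to_steady_state(step, u, tol=1e-5, max_steps=100_000):
          """Time marching with the convergence criterion described above:
          stop once the L2-norm of the change of the dependent variables
          between two consecutive time steps falls below tol."""
          for n in range(max_steps):
              u_new = step(u)
              if np.linalg.norm(u_new - u) < tol:
                  return u_new, n
              u = u_new
          raise RuntimeError("no convergence within max_steps")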

  34. Mind wandering at the fingertips: automatic parsing of subjective states based on response time variability

    PubMed Central

    Bastian, Mikaël; Sackur, Jérôme

    2013-01-01

    Research from the last decade has successfully used two kinds of thought reports in order to assess whether the mind is wandering: random thought-probes and spontaneous reports. However, none of these two methods allows any assessment of the subjective state of the participant between two reports. In this paper, we present a step by step elaboration and testing of a continuous index, based on response time variability within Sustained Attention to Response Tasks (N = 106, for a total of 10 conditions). We first show that increased response time variability predicts mind wandering. We then compute a continuous index of response time variability throughout full experiments and show that the temporal position of a probe relative to the nearest local peak of the continuous index is predictive of mind wandering. This suggests that our index carries information about the subjective state of the subject even when he or she is not probed, and opens the way for on-line tracking of mind wandering. Finally we proceed a step further and infer the internal attentional states on the basis of the variability of response times. To this end we use the Hidden Markov Model framework, which allows us to estimate the durations of on-task and off-task episodes. PMID:24046753
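
    The first step of such an index can be sketched as a sliding-window variability measure; the window length below is an arbitrary illustrative choice, and the paper's exact construction may differ.

      import numpy as np

      def rt_variability_index(rts, window=9):
          """Local coefficient of variation of response times in a centered
          sliding window; peaks are candidate off-task (mind-wandering)
          episodes in the spirit of the approach described above."""
          rts = np.asarray(rts, dtype=float)
          half = window // 2
          idx = np.full(rts.shape, np.nan)
          for i in range(half, len(rts) - half):
              seg = rts[i - half:i + half + 1]
              idx[i] = seg.std() / seg.mean()
          return idx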

  35. Spatial and Temporal Control Contribute to Step Length Asymmetry during Split-Belt Adaptation and Hemiparetic Gait

    PubMed Central

    Finley, James M.; Long, Andrew; Bastian, Amy J.; Torres-Oviedo, Gelsy

    2014-01-01

    Background Step length asymmetry (SLA) is a common hallmark of gait post-stroke. Though conventionally viewed as a spatial deficit, SLA can result from differences in where the feet are placed relative to the body (spatial strategy), the timing between foot-strikes (step time strategy), or the velocity of the body relative to the feet (step velocity strategy). Objective The goal of this study was to characterize the relative contributions of each of these strategies to SLA. Methods We developed an analytical model that parses SLA into independent step position, step time, and step velocity contributions. This model was validated by reproducing SLA values for twenty-five healthy participants when their natural symmetric gait was perturbed on a split-belt treadmill moving at either a 2:1 or 3:1 belt-speed ratio. We then applied the validated model to quantify step position, step time, and step velocity contributions to SLA in fifteen stroke survivors while walking at their self-selected speed. Results SLA was predicted precisely by summing the derived contributions, regardless of the belt-speed ratio. Although the contributions to SLA varied considerably across our sample of stroke survivors, the step position contribution tended to oppose the other two – possibly as an attempt to minimize the overall SLA. Conclusions Our results suggest that changes in where the feet are placed or changes in interlimb timing could be used as compensatory strategies to reduce overall SLA in stroke survivors. These results may allow clinicians and researchers to identify patient-specific gait abnormalities and personalize their therapeutic approaches accordingly. PMID:25589580
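
    For reference, the conventional step length asymmetry index is a difference-over-sum ratio; the sketch below shows only this overall SLA, not the paper's analytical parsing into step position, step time, and step velocity contributions.

      def step_length_asymmetry(sl_fast, sl_slow):
          # SLA > 0: the "fast" (e.g., non-paretic-side) step is longer
          return (sl_fast - sl_slow) / (sl_fast + sl_slow)

      print(step_length_asymmetry(0.62, 0.48))  # hypothetical step lengths -> ~0.127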

  36. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.

  37. The Chimera II Real-Time Operating System for advanced sensor-based control applications

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1992-01-01

    Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. The Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems that results in a reduction of development time and cost.

  38. Estimating allowable-cut by area-scheduling

    Treesearch

    William B. Leak

    2011-01-01

    Estimation of the regulated allowable-cut is an important step in placing a forest property under management and ensuring a continued supply of timber over time. Regular harvests also provide for the maintenance of needed wildlife habitat. There are two basic approaches: (1) volume, and (2) area/volume regulation, with many variations of each. Some require...

  39. Does a microprocessor-controlled prosthetic knee affect stair ascent strategies in persons with transfemoral amputation?

    PubMed

    Aldridge Whitehead, Jennifer M; Wolf, Erik J; Scoville, Charles R; Wilken, Jason M

    2014-10-01

    Stair ascent can be difficult for individuals with transfemoral amputation because of the loss of knee function. Most individuals with transfemoral amputation use either a step-to-step (nonreciprocal, advancing one stair at a time) or skip-step strategy (nonreciprocal, advancing two stairs at a time), rather than a step-over-step (reciprocal) strategy, because step-to-step and skip-step allow the leading intact limb to do the majority of work. A new microprocessor-controlled knee (Ottobock X2®) uses flexion/extension resistance to allow step-over-step stair ascent. We compared self-selected stair ascent strategies between conventional and X2® prosthetic knees, examined between-limb differences, and differentiated stair ascent mechanics between X2® users and individuals without amputation. We also determined which factors are associated with differences in knee position during initial contact and swing within X2® users. Fourteen individuals with transfemoral amputation participated in stair ascent sessions while using conventional and X2® knees. Ten individuals without amputation also completed a stair ascent session. Lower-extremity stair ascent joint angles, moment, and powers and ground reaction forces were calculated using inverse dynamics during self-selected strategy and cadence and controlled cadence using a step-over-step strategy. One individual with amputation self-selected a step-over-step strategy while using a conventional knee, while 10 individuals self-selected a step-over-step strategy while using X2® knees. Individuals with amputation used greater prosthetic knee flexion during initial contact (32.5°, p = 0.003) and swing (68.2°, p = 0.001) with higher intersubject variability while using X2® knees compared to conventional knees (initial contact: 1.6°, swing: 6.2°). The increased prosthetic knee flexion while using X2® knees normalized knee kinematics to individuals without amputation during swing (88.4°, p = 0.179) but not during initial contact (65.7°, p = 0.002). Prosthetic knee flexion during initial contact and swing were positively correlated with prosthetic limb hip power during pull-up (r = 0.641, p = 0.046) and push-up/early swing (r = 0.993, p < 0.001), respectively. Participants with transfemoral amputation were more likely to self-select a step-over-step strategy similar to individuals without amputation while using X2® knees than conventional prostheses. Additionally, the increased prosthetic knee flexion used with X2® knees placed large power demands on the hip during pull-up and push-up/early swing. A modified strategy that uses less knee flexion can be used to allow step-over-step ascent in individuals with less hip strength.

  40. Improving arrival time identification in transient elastography

    NASA Astrophysics Data System (ADS)

    Klein, Jens; McLaughlin, Joyce; Renzi, Daniel

    2012-04-01

    In this paper, we improve the first step in the arrival time algorithm used for shear wave speed recovery in transient elastography. In transient elastography, a shear wave is initiated at the boundary and the interior displacement of the propagating shear wave is imaged with an ultrasound ultra-fast imaging system. The first step in the arrival time algorithm finds the arrival times of the shear wave by cross correlating displacement time traces (the time history of the displacement at a single point) with a reference time trace located near the shear wave source. The second step finds the shear wave speed from the arrival times. In performing the first step, we observe that the wave pulse decorrelates as it travels through the medium, which leads to inaccurate estimates of the arrival times and ultimately to blurring and artifacts in the shear wave speed image. In particular, wave ‘spreading’ accounts for much of this decorrelation. Here we remove most of the decorrelation by allowing the reference wave pulse to spread during the cross correlation. This dramatically improves the images obtained from arrival time identification. We illustrate the improvement of this method on phantom and in vivo data obtained from the laboratory of Mathias Fink at ESPCI, Paris.
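
    The modification described above, letting the reference pulse spread during the cross correlation, can be sketched in Python by correlating the time trace against time-stretched copies of the reference and keeping the best-correlating lag; the stretch factors are illustrative assumptions.

      import numpy as np

      def arrival_time(trace, reference, dt, spread_factors=(1.0, 1.5, 2.0)):
          """Arrival time from the lag of the maximum cross correlation,
          allowing the reference pulse to spread (time-stretch)."""
          best_val, best_lag = -np.inf, 0
          t_ref = np.arange(len(reference)) * dt
          for s in spread_factors:
              # stretched copy: reference evaluated at t/s, zero outside
              stretched = np.interp(t_ref / s, t_ref, reference,
                                    left=0.0, right=0.0)
              xc = np.correlate(trace, stretched, mode="full")
              if xc.max() > best_val:
                  best_val = xc.max()
                  best_lag = int(xc.argmax()) - (len(stretched) - 1)
          return best_lag * dt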

  41. An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro

    2013-09-01

    In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy stability condition. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the parameters of STS for Ohmic dissipation of SPMHD is ν_STS ≈ 0.01 and N_STS ≈ 5.
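
    The super-time-stepping substeps follow the standard Chebyshev-based prescription (Alexiades et al. 1996); with the quoted ν ≈ 0.01 and N ≈ 5, one cycle of five inner steps covers roughly four times more time than five explicit steps would. A minimal sketch:

      import numpy as np

      def sts_substeps(dt_expl, nu=0.01, n=5):
          """Substeps tau_j of one super-time-stepping cycle; their sum is
          the effective super-step for the parabolic (Ohmic) term."""
          j = np.arange(1, n + 1)
          return dt_expl / ((nu - 1) * np.cos((2 * j - 1) * np.pi / (2 * n))
                            + 1 + nu)

      tau = sts_substeps(1.0)
      print(tau.sum())  # ~19.1 vs 5.0 for five plain explicit steps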

  42. Anatomy of a Satellite System: Wauwatosa Lunch Program

    ERIC Educational Resources Information Center

    Modern Schools, 1973

    1973-01-01

    The consolidation of the key electric processing equipment from six kitchens into one central facility serving 14 schools has proven a successful step. It cuts down on time, labor, and costs, and at the same time allows for greater control throughout the entire system. (Author)

  43. Single-crossover recombination in discrete time.

    PubMed

    von Wangenheim, Ute; Baake, Ellen; Baake, Michael

    2010-05-01

    Modelling the process of recombination leads to a large coupled nonlinear dynamical system. Here, we consider a particular case of recombination in discrete time, allowing only for single crossovers. While the analogous dynamics in continuous time admits a closed solution (Baake and Baake in Can J Math 55:3-41, 2003), this no longer works for discrete time. A more general model (i.e. without the restriction to single crossovers) has been studied before (Bennett in Ann Hum Genet 18:311-317, 1954; Dawson in Theor Popul Biol 58:1-20, 2000; Linear Algebra Appl 348:115-137, 2002) and was solved algorithmically by means of Haldane linearisation. Using the special formalism introduced by Baake and Baake (Can J Math 55:3-41, 2003), we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. We then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. Still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.

  4. Automating the evaluation of flood damages: methodology and potential gains

    NASA Astrophysics Data System (ADS)

    Eleutério, Julian; Martinez, Edgar Daniel

    2010-05-01

    The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investment than the others. The second step of the evaluation consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in the realization of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step of the evaluation consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be repeated several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are made during the second and third steps of the evaluation. The feasibility of these steps could be relevant to the choice of the extent of the evaluation. Low feasibility could lead to choosing not to evaluate uncertainty or to limiting the number of scenario comparisons. Several computer models have been developed over time in order to evaluate flood risk. GIS software is widely used to carry out flood risk analyses. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" running the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: the professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and updating and improving the evaluation over time is a difficult task. The automation of this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automation of the second and third steps of flood damage evaluations; and (2) analyse the potential gains in terms of the time and expertise needed for the analysis. A programming language is used within GIS software in order to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
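    A toy numpy sketch of the automated second and third steps, with random arrays standing in for the GIS rasters and an invented piecewise-linear depth-damage curve: once the hazard-vulnerability overlay and the damage calculation are wrapped in a function, re-running them for many scenarios, sensitivity tests or uncertainty draws is a cheap loop.

        import numpy as np

        def flood_damage(depth, asset_value, curve_depths, curve_fracs):
            # Step 2: overlay hazard (water depth) with vulnerability (asset value);
            # Step 3: apply a piecewise-linear depth-damage function per cell.
            frac = np.interp(depth, curve_depths, curve_fracs)
            return float(np.sum(frac * asset_value))

        # Re-running many management scenarios becomes a cheap loop once automated.
        values = np.full((100, 100), 1000.0)                            # currency units per cell
        curve_d, curve_f = [0.0, 0.5, 1.5, 3.0], [0.0, 0.2, 0.6, 1.0]   # depth (m) -> damage fraction
        scenarios = [np.random.rand(100, 100) * 3.0 for _ in range(4)]  # depth rasters, metres
        damages = [flood_damage(d, values, curve_d, curve_f) for d in scenarios]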

  5. Co-evolving Physical and Biological Organization in Step-pool Channels: Experiments from a Restoration Reach on Wildcat Creek, California

    NASA Astrophysics Data System (ADS)

    Chin, A.; O'Dowd, A. P.; Mendez, P. K.; Velasco, K. Z.; Leventhal, R. D.; Storesund, R.; Laurencio, L. R.

    2014-12-01

    Step-pools are important features in fluvial systems. Through energy dissipation, step-pools provide stability in high-energy environments that otherwise may erode and degrade. Although research has focused on geomorphological aspects of step-pool channels, the ecological significance of step-pool streams is increasingly recognized. Step-pool streams often contain higher density and diversity of benthic macroinvertebrates and are critical habitats for organisms such as salmonids and tailed frogs. Step-pools are therefore increasingly used to restore eroding channels and improve ecological conditions. This paper addresses a restoration reach of Wildcat Creek in Berkeley, California that featured an installation of step-pools in 2012. The design framework recognized step-pool formation as a self-organizing process that produces a rhythmic morphology. After placing step particles at locations where step-pools are expected to form according to hydraulic theory, the self-organizing approach allowed fluvial processes to refine the rocks into adjusted sequences over time. In addition, a 30-meter "experimental" reach was created to explore the co-evolution of geomorphological and ecological characteristics. After constructing a plane bed channel, boulders and cobbles piled at the upstream end allowed natural flows to mobilize and sort them into step-pool sequences. Ground surveys and LiDAR recorded the development of step-pool sequences over several seasons. Concurrent sampling of benthic macroinvertebrates documented the formation of biological communities in conjunction with habitat. Biological sampling in an upstream reference reach provided a comparison with the restored reach over time. Results to date show an emergent step-pool channel with steps that segment the plane bed into initial step and pool habitats. Biological communities are beginning to form, showing more distinction among habitat types during some seasons, although they do not yet approach reference values at this stage of development. Research over longer timeframes is needed to reveal how biological and physical characteristics may co-organize toward an equilibrium landscape. Such integrated understanding will assist development of innovative restoration designs.

  6. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
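    A sketch of the tree-based idea using scipy's KD-tree (a hierarchical spatial structure in the same spirit, though not necessarily the one used in the paper); the repel-from-the-nearest-neighbour rule and the gain are illustrative stand-ins for the decentralized control law.

        import numpy as np
        from scipy.spatial import cKDTree

        def control_step(pos, gain=0.1):
            # Build the tree and find each robot's closest neighbour: O(N log N)
            # in total, versus O(N^2) for the all-pairs scan. k=2 because the
            # nearest point to a robot is the robot itself.
            tree = cKDTree(pos)
            dist, idx = tree.query(pos, k=2)
            nearest = pos[idx[:, 1]]
            bearing = np.arctan2(nearest[:, 1] - pos[:, 1], nearest[:, 0] - pos[:, 0])
            # Illustrative decentralized rule: back away from the closest neighbour.
            return pos - gain * np.stack([np.cos(bearing), np.sin(bearing)], axis=1)

        pos = np.random.rand(100000, 2)   # 10^5 robots; the approach scales to millions
        pos = control_step(pos)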

  7. Microgravity

    NASA Image and Video Library

    2004-04-15

    STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.

  8. Nonlinear Multiscale Transformations: From Synchronization to Error Control

    DTIC Science & Technology

    2001-07-01

    transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed... compressed data are all very close, however the visual quality of the reconstructed image is significantly better for the EC compression algorithm ...used in recent times in the first step of transform coding algorithms for image compression. Ideally, a multiscale transformation allows for an

  9. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  10. Time-asymptotic solutions of the Navier-Stokes equation for free shear flows using an alternating-direction implicit method

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.

    1976-01-01

    An uncoupled time-asymptotic alternating-direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
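    To make the ADI structure concrete, the sketch below performs one Peaceman-Rachford ADI step for a 2D diffusion model problem: each half step is implicit in one coordinate direction only, so the work reduces to independent tridiagonal solves, which is what buys stability at time steps several times the explicit limit. This is a generic illustration of the method, not the paper's Navier-Stokes solver.

        import numpy as np
        from scipy.linalg import solve_banded

        def adi_step(u, alpha, dx, dt):
            # One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy) on a
            # square grid with fixed (Dirichlet) boundary values.
            r = alpha * dt / (2.0 * dx**2)
            n = u.shape[0]
            m = n - 2                               # interior unknowns per line
            ab = np.zeros((3, m))                   # tridiagonal (I - r*delta^2)
            ab[0, 1:] = -r                          # superdiagonal
            ab[1, :] = 1.0 + 2.0 * r                # diagonal
            ab[2, :-1] = -r                         # subdiagonal

            def half_step(v):
                # Implicit along axis 0, explicit along axis 1.
                w = v.copy()
                for j in range(1, n - 1):
                    rhs = v[1:-1, j] + r * (v[1:-1, j + 1] - 2.0 * v[1:-1, j] + v[1:-1, j - 1])
                    rhs[0] += r * v[0, j]           # fold boundary values into the RHS
                    rhs[-1] += r * v[-1, j]
                    w[1:-1, j] = solve_banded((1, 1), ab, rhs)
                return w

            u = half_step(u)                        # implicit in x, explicit in y
            return half_step(u.T).T                 # implicit in y, explicit in x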

  11. Convergence speeding up in the calculation of the viscous flow about an airfoil

    NASA Technical Reports Server (NTRS)

    Radespiel, R.; Rossow, C.

    1988-01-01

    A finite volume method to solve the three-dimensional Navier-Stokes equations was developed. It is based on a cell-vertex scheme with central differences and explicit Runge-Kutta time steps. Good convergence to a stationary solution was obtained by the use of local time steps, implicit smoothing of the residuals, a multigrid algorithm, and a carefully controlled artificial dissipative term. The method is illustrated by results for transonic profiles and airfoils. The method allows a routine solution of the Navier-Stokes equations.
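    The local time stepping ingredient is simple to state: because only the converged steady state is of interest, each cell can be advanced with the largest step stable for its own size and wave speed instead of the global minimum. A minimal sketch, with an illustrative CFL-type formula:

        import numpy as np

        def local_time_steps(h, u, c, cfl=2.5):
            # Per-cell pseudo-time step from a CFL condition: small cells get
            # small steps, large cells large ones. Valid only for steady-state
            # convergence acceleration, where the transient is not of interest.
            return cfl * h / (np.abs(u) + c)

        h = np.array([0.01, 0.1, 1.0])       # cell sizes
        u = np.array([200.0, 250.0, 300.0])  # local flow speed, m/s
        c = np.full(3, 340.0)                # local sound speed, m/s
        print(local_time_steps(h, u, c))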

  12. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  13. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE PAGES

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    2017-10-12

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  14. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems over realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.

  15. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. The accident should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature, which informs operators of the time available for them to execute procedure steps and warns them if they reach the time limit. Furthermore, such a feature would increase the awareness of operators about their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident in a PWR plant is used. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The prediction of the action time for each step is obtained from similar accident cases using support vector regression. The derived time will be processed and then displayed on a CBP user interface.

  16. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
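    A 1D sketch of the time-splitting pattern described above, with simple upwind advection and backward-Euler diffusion standing in for TaRSE's Godunov and mixed-finite-element operators: several explicit advective substeps are followed by one implicit dispersive solve spanning the same interval.

        import numpy as np
        from scipy.linalg import solve_banded

        def advect_diffuse(c, u, D, dx, dt_disp, n_adv):
            # Several explicit upwind advection substeps (u > 0 assumed)...
            dt_adv = dt_disp / n_adv
            assert u * dt_adv / dx <= 1.0, "advective CFL violated"
            for _ in range(n_adv):
                c[1:] -= u * dt_adv / dx * (c[1:] - c[:-1])
            # ...then one implicit (backward Euler) diffusion step over the
            # whole interval, which stays stable despite the larger step.
            r = D * dt_disp / dx**2
            m = len(c)
            ab = np.zeros((3, m))
            ab[0, 1:] = -r
            ab[1, :] = 1.0 + 2.0 * r
            ab[2, :-1] = -r
            ab[1, 0] = ab[1, -1] = 1.0 + r       # one-sided zero-flux ends
            return solve_banded((1, 1), ab, c)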

  17. One Step Quantum Key Distribution Based on EPR Entanglement.

    PubMed

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-06-30

    A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves its maneuverability. Moreover, a security analysis is given: a simple type of eavesdropper's attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol, which involves two steps, the proposed protocol does not need to store the qubit and involves only one step.

  18. The Faulkes Telescope Project at school

    NASA Astrophysics Data System (ADS)

    Neta, Miguel

    2014-05-01

    The Faulkes Telescope Project [1] was started in 2000 and is currently managed by the Las Cumbres Observatory Global Telescope Network (LCOGT) [2]. It gives students access to two remote telescopes (one in Hawaii and one in Australia), allowing them to capture images of the sky. Since January 2012 I have conducted monthly observations with students: first with students from Escola Secundária de Loulé (ESL) [3] and, starting from September 2013, with students from Agrupamento de Escolas Dra Laura Ayres [4], in Quarteira. Each session is prepared in advance in order to make the best of the time available. For that we use a virtual planetarium that allows us to see the sky at the place and time of the scheduled session. After the start of each session, a student takes real-time control of one of the telescopes from a computer connected to the internet. This project is a tool that gives students the feeling of doing science and lets them get to know the sky step by step. The observations made by my students can be found at www.miguelneta.pt/faulkestelescope. [1] http://www.faulkes-telescope.com [2] http://lcogt.net [3] https://www.es-loule.edu.pt [4] http://www.esla.edu.pt

  19. Error correction in short time steps during the application of quantum gates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.

    2016-04-15

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates, without modifying the codification used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps intercalated by correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even for the cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  20. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care: together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806

  1. Dissemination of radiological information using enhanced podcasts.

    PubMed

    Thapa, Mahesh M; Richardson, Michael L

    2010-03-01

    Podcasts and vodcasts (video podcasts) have become popular means of sharing educational information via the Internet. In this article, we introduce another method, an enhanced podcast, which allows images to be displayed with the audio. Bookmarks and URLs may also be embedded within the presentation. This article describes a step-by-step tutorial for recording and distributing an enhanced podcast using the Macintosh operating system. Enhanced podcasts can also be created on the Windows platform using other software. An example of an enhanced podcast and a demonstration video of all the steps described in this article are available online at web.mac.com/mthapa. An enhanced podcast is an effective method of delivering radiological information via the Internet. Viewing images while simultaneously listening to audio content allows the user to have a richer experience than with a simple podcast. Incorporation of bookmarks and URLs within the presentation will make learning more efficient and interactive. The use of still images rather than video clips equates to a much smaller file size for an enhanced podcast compared to a vodcast, allowing quicker upload and download times.

  2. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time steps are a practical solution in a cellular automata environment. PMID:15222901
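    A minimal sketch of the asynchronous idea, assuming a scalar state and forward-Euler updates per cell (the paper's cells carry full ODE models): a priority queue orders cells by their next update time, and each cell picks a step inversely proportional to how fast it is changing, so quiescent cells never force the global step down.

        import heapq

        def simulate(cells, t_end, rate, error_tol=1e-3):
            # Event queue keyed by each cell's next update time.
            queue = [(0.0, i) for i in range(len(cells))]
            heapq.heapify(queue)
            while queue:
                t, i = heapq.heappop(queue)
                if t >= t_end:
                    continue
                dydt = rate(cells[i])
                # Step size inversely proportional to how fast the cell changes.
                dt = min(error_tol / (abs(dydt) + 1e-12), t_end - t)
                cells[i] += dydt * dt                 # local forward-Euler update
                heapq.heappush(queue, (t + dt, i))
            return cells

        cells = simulate([1.0] * 1000, t_end=10.0, rate=lambda y: -0.5 * y)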

  3. One step screening of retroviral producer clones by real time quantitative PCR.

    PubMed

    Towers, G J; Stockholm, D; Labrousse-Najburg, V; Carlier, F; Danos, O; Pagès, J C

    1999-01-01

    Recombinant retroviruses are obtained from either stably or transiently transfected retrovirus producer cells. In the case of stably producing lines, a large number of clones must be screened in order to select the one with the highest titre. The multi-step selection of high titre producing clones is time consuming and expensive. We have taken advantage of retroviral endogenous reverse transcription to develop a quantitative PCR assay on crude supernatant from producing clones. We used TaqMan PCR technology, which, by using fluorescence measurement at each cycle of amplification, allows PCR product quantification. Fluorescence results from specific degradation of a probe oligonucleotide by the Taq polymerase 5'-3' exonuclease activity. Primers and probe sequences were chosen to anneal to the viral strong stop species, which is the first DNA molecule synthesised during reverse transcription. The protocol consists of a single real time PCR, using as template filtered viral supernatant without any other pre-treatment. We show that the primers and probe described allow quantitation of serially diluted plasmid down to as few as 15 plasmid molecules. We then test 200 GFP-expressing retroviral producer clones either by FACS analysis of infected cells or by using the quantitative PCR. We confirm that the TaqMan protocol allows the detection of virus in supernatant and the selection of high titre clones. Furthermore, we can determine infectious titre by quantitative PCR on genomic DNA from infected cells, using an additional set of primers and a probe for albumin to normalise for the genomic copy number. We demonstrate that real time quantitative PCR can be used as a powerful and reliable single step, high throughput screen for high titre retroviral producer clones.

  4. Time to Translate: Deciphering the Codon in the Classroom

    ERIC Educational Resources Information Center

    Firooznia, Fardad

    2015-01-01

    I describe and evaluate a fun and simple role-playing exercise that allows students to actively work through the process of translation. This exercise can easily be completed during a 50-minute class period, with time to review the steps and contemplate complications such as the effects of various types of mutations.

  5. Nanostructuring of sapphire using time-modulated nanosecond laser pulses

    NASA Astrophysics Data System (ADS)

    Lorenz, P.; Zagoranskiy, I.; Ehrhardt, M.; Bayer, L.; Zimmer, K.

    2017-02-01

    The nanostructuring of dielectric surfaces using laser radiation is still a challenge. The IPSM-LIFE (laser-induced front side etching using an in-situ pre-structured metal layer) method allows easy, large-area and fast laser nanostructuring of dielectrics. In IPSM-LIFE, a metal-covered dielectric is irradiated, and the structuring is assisted by a self-organized deformation process of the molten metal layer. IPSM-LIFE can be divided into two steps. STEP 1: The irradiation of thin metal layers on dielectric surfaces results in a melting and nanostructuring process of the metal layer and partially of the dielectric surface. STEP 2: A subsequent high-laser-fluence treatment of the metal nanostructures results in a structuring of the dielectric surface. In this study a sapphire substrate Al2O3(1-102) was covered with a 10 nm thin molybdenum layer and irradiated by an infrared laser with an adjustable time-dependent pulse form with a time resolution of 1 ns (wavelength λ = 1064 nm, pulse duration Δtp = 1 - 600 ns, Gaussian beam profile). The laser treatment allows the fabrication of different surface structures in the sapphire surface due to a pattern transfer process. The resultant structures were investigated by scanning electron microscopy (SEM). The process was simulated and the simulation results were compared with experimental results.

  6. Accelerated simulations of aromatic polymers: application to polyether ether ketone (PEEK)

    NASA Astrophysics Data System (ADS)

    Broadbent, Richard J.; Spencer, James S.; Mostofi, Arash A.; Sutton, Adrian P.

    2014-10-01

    For aromatic polymers, the out-of-plane oscillations of aromatic groups limit the maximum accessible time step in a molecular dynamics simulation. We present a systematic approach to removing such high-frequency oscillations from planar groups along aromatic polymer backbones, while preserving the dynamical properties of the system. We consider, as an example, the industrially important polymer, polyether ether ketone (PEEK), and show that this coarse graining technique maintains excellent agreement with the fully flexible all-atom and all-atom rigid bond models whilst allowing the time step to increase fivefold to 5 fs.

  7. A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction

    DTIC Science & Technology

    2010-08-09

    more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of...conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit if one... converges one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several

  8. Optical monitor for real time thickness change measurements via lateral-translation induced phase-stepping interferometry

    DOEpatents

    Rushford, Michael C.

    2002-01-01

    An optical monitoring instrument monitors etch depth and etch rate for controlling a wet-etching process. The instrument provides means for viewing through the back side of a thick optic onto a nearly index-matched interface. Optical baffling and the application of a photoresist mask minimize spurious reflections to allow for monitoring with extremely weak signals. A Wollaston prism enables linear translation for phase stepping.

  9. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
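    The core of voltage stepping is easy to show for a single quadratic integrate-and-fire neuron: discretize the voltage axis instead of the time axis and accumulate the time each voltage increment takes. A simplified single-neuron sketch (no synapses, no adaptation, no event queue), checked against the closed-form spike time:

        import numpy as np

        def qif_spike_time(v0, v_th, I, dV=0.01):
            # Quadratic integrate-and-fire dynamics dv/dt = v**2 + I, with I > 0:
            # climb the voltage axis in fixed increments dV, adding up the time
            # dt = dV / f(v) that each increment takes.
            v, t = v0, 0.0
            while v < v_th:
                t += dV / (v * v + I)
                v += dV
            return t          # threshold-crossing (spike) time

        # Exact solution for comparison: integral dv/(v^2+I) = atan(v/sqrt(I))/sqrt(I).
        t_num = qif_spike_time(v0=0.0, v_th=10.0, I=1.0)
        t_exact = (np.arctan(10.0) - np.arctan(0.0)) / 1.0
        print(t_num, t_exact)   # first-order agreement in dV (~0.5% here)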

  10. Calculating Time-Integral Quantities in Depletion Calculations

    DOE PAGES

    Isotalo, Aarno

    2016-06-02

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation, without a need for a reference solution.
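    The tally-nuclide construction can be sketched directly: append to the burnup matrix one pseudo-nuclide whose production rate is the desired weighted sum of densities, so that its end-of-step value is exactly the time integral of that quantity, obtained by the same depletion solve. The sketch below uses a dense matrix exponential where the paper uses the Chebyshev rational approximation method; the matrix and weights are illustrative.

        import numpy as np
        from scipy.linalg import expm

        def deplete_with_tally(A, n0, w, dt):
            # Augmented system: d/dt [n; s] = [[A, 0], [w, 0]] [n; s],
            # so s(t) accumulates the exact integral of w.n over the step.
            k = len(n0)
            M = np.zeros((k + 1, k + 1))
            M[:k, :k] = A                 # ordinary burnup matrix, n' = A n
            M[k, :k] = w                  # tally' = w . n
            y = expm(M * dt) @ np.append(n0, 0.0)
            return y[:k], y[k] / dt       # end-of-step densities, step-average of w.n

        # Example: step-average density of nuclide 0 in a two-nuclide decay chain.
        A = np.array([[-0.1, 0.0], [0.1, -0.02]])
        n_end, avg_n0 = deplete_with_tally(A, n0=np.array([1.0, 0.0]),
                                           w=np.array([1.0, 0.0]), dt=5.0)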

  11. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang, as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and the solutions stored as a look-up table. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices into the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.

  12. One Step Quantum Key Distribution Based on EPR Entanglement

    PubMed Central

    Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao

    2016-01-01

    A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves its maneuverability. Moreover, a security analysis is given: a simple type of eavesdropper’s attack would introduce an error rate of at least 46.875%. Compared with the “Ping-pong” protocol, which involves two steps, the proposed protocol does not need to store the qubit and involves only one step. PMID:27357865

  13. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
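    A plausible reading of the quadratic step-size criterion (the paper's exact constants and safeguards may differ): choose dt so that the predicted change |V'·dt + 0.5·V''·dt²| stays within a tolerance, i.e. take the positive root of a quadratic in dt; the clip below is an illustrative stand-in for the tsr restriction.

        import numpy as np

        def adaptive_dt(dVdt, d2Vdt2, dV_max=0.1, dt_min=1e-3, dt_max=1.0):
            # Solve 0.5*|V''|*dt^2 + |V'|*dt - dV_max = 0 for the positive root,
            # so the predicted membrane-potential change stays within dV_max (mV).
            a, b = 0.5 * abs(d2Vdt2), abs(dVdt)
            if a < 1e-12:                              # nearly linear: fall back
                dt = dV_max / max(b, 1e-12)
            else:
                dt = (-b + np.sqrt(b * b + 4.0 * a * dV_max)) / (2.0 * a)
            return float(np.clip(dt, dt_min, dt_max))  # crude time-step restriction

        print(adaptive_dt(dVdt=50.0, d2Vdt2=400.0))    # upstroke: very fine step
        print(adaptive_dt(dVdt=0.05, d2Vdt2=0.01))     # plateau: large step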

  14. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Endo, Satoshi; Wong, May

    Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (q_v) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θ_m = θ(1 + 1.61 q_v), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  15. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE PAGES

    Xiao, Heng; Endo, Satoshi; Wong, May; ...

    2015-10-29

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (q_v) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θ_m = θ(1 + 1.61 q_v), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  16. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  17. Mitigation of narrowband interferences by means of a reconfigurable stepped frequency GPR system

    NASA Astrophysics Data System (ADS)

    Persico, Raffaele; Dei, Devis; Parrini, Filippo; Matera, Loredana

    2016-08-01

    This paper proposes a new technique for the mitigation of narrowband interferences, making use of an innovative stepped-frequency Ground Penetrating Radar (GPR) system based on the modulation of the integration time of the harmonic components of the signal. This allows good rejection of the interference signal without filtering out part of the band of the useful signal (which would involve a loss of information) and without increasing the power of the transmitted signal (which might saturate the receiver and push the transmitted power above legal limits). The price paid for this is an extension of the time needed to perform the measurements. We show that this necessary drawback can be contained by making use of a prototypal reconfigurable stepped-frequency GPR system.

  18. Efficiency improvement in proton dose calculations with an equivalent restricted stopping power formalism

    NASA Astrophysics Data System (ADS)

    Maneval, Daniel; Bouchard, Hugo; Ozell, Benoît; Després, Philippe

    2018-01-01

    The equivalent restricted stopping power formalism is introduced for proton mean energy loss calculations under the continuous slowing down approximation. The objective is the acceleration of Monte Carlo dose calculations by allowing larger steps while preserving accuracy. The fractional energy loss per step length ε was obtained with a secant method and a Gauss-Kronrod quadrature estimation of the integral equation relating the mean energy loss to the step length. The midpoint rule of the Newton-Cotes formulae was then used to solve this equation, allowing the creation of a lookup table linking ε to the equivalent restricted stopping power L_eq, used here as a key physical quantity. The mean energy loss for any step length was simply defined as the product of the step length with L_eq. Proton inelastic collisions with electrons were added to GPUMCD, a GPU-based Monte Carlo dose calculation code. The proton continuous slowing-down was modelled with the L_eq formalism. GPUMCD was compared to Geant4 in a validation study where ionization processes alone were activated and a voxelized geometry was used. The energy straggling was first switched off to validate the L_eq formalism alone. Dose differences between Geant4 and GPUMCD were smaller than 0.31% for the L_eq formalism. The mean error and the standard deviation were below 0.035% and 0.038% respectively. 99.4 to 100% of GPUMCD dose points were consistent with a 0.3% dose tolerance. GPUMCD 80% falloff positions (R_80) matched Geant4's R_80 within 1 μm. With the energy straggling, dose differences were below 2.7% in the Bragg peak falloff and smaller than 0.83% elsewhere. The R_80 positions matched within 100 μm. The overall computation times to transport one million protons with GPUMCD were 31-173 ms. Under similar conditions, Geant4 computation times were 1.4-20 h. The L_eq formalism led to an intrinsic efficiency gain factor ranging between 30 and 630, increasing with the prescribed accuracy of simulations. The L_eq formalism allows larger steps, leading to an O(constant) algorithmic time complexity. It significantly accelerates Monte Carlo proton transport while preserving accuracy. It therefore constitutes a promising variance reduction technique for computing proton dose distributions in a clinical context.
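    The defining relation for the fractional energy loss can be sketched with stock scipy tools: quad is itself an adaptive Gauss-Kronrod routine, a bracketing root solve stands in for the paper's secant iteration, and a toy power-law stopping power replaces tabulated data.

        import numpy as np
        from scipy.integrate import quad        # adaptive Gauss-Kronrod (QUADPACK)
        from scipy.optimize import brentq

        def stopping_power(E):
            # Toy stopping power S(E) in MeV/cm; real tabulated data (e.g. for
            # water) would be used in practice.
            return 100.0 / E**0.8

        def fractional_loss(E, step):
            # Solve the CSDA relation  step = integral_{(1-eps)E}^{E} dE'/S(E')
            # for the fractional energy loss eps over a step of given length.
            def residual(eps):
                path, _ = quad(lambda Ep: 1.0 / stopping_power(Ep), (1.0 - eps) * E, E)
                return path - step
            return brentq(residual, 1e-9, 0.999)

        # Tabulating eps over (E, step) once turns every subsequent transport
        # step into a cheap table lookup, which is the source of the speedup.
        eps = fractional_loss(E=100.0, step=0.5)   # 100 MeV proton, 0.5 cm step
        print(eps, "mean energy loss:", eps * 100.0, "MeV")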

  19. 76 FR 4997 - Medicare Program; Inpatient Psychiatric Facilities Prospective Payment System-Update for Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    .... Repeating this step for other periods produces a series of market basket levels over time. Dividing an index..., P.O. Box 8010, Baltimore, MD 21244-1850. Please allow sufficient time for mailed comments to be...); interrupted stays; and a per treatment adjustment for patients who undergo ECT. A complete discussion of the...

  20. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    PubMed

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
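    For the simplest case of n identical rate-limiting steps with rate constant k (no slow initiation or dissociation phases), the "all or none" time course has a closed form in the regularized incomplete gamma function, which is what lets n = L/m float as a continuous fitting parameter:

        import numpy as np
        from scipy.special import gammainc   # regularized lower incomplete gamma P(n, x)

        def f_ss(t, L, m, k, A=1.0):
            # Fraction of fully unwound duplexes for an n-step sequential
            # mechanism with n = L/m identical steps of rate k: the Erlang/gamma
            # CDF. P(n, x) accepts non-integer n, so both n and m can float in
            # a least-squares fit.
            n = L / m
            return A * gammainc(n, k * np.asarray(t))

        t = np.linspace(0.0, 60.0, 200)   # s
        curves = {L: f_ss(t, L, m=4.0, k=1.0) for L in (20, 40, 60)}  # longer duplexes lag longer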

  1. Non-equilibrium calculations of atmospheric processes initiated by electron impact.

    NASA Astrophysics Data System (ADS)

    Campbell, L.; Brunger, M. J.

    2007-05-01

    Electron impact in the atmosphere produces ionisation, dissociation, electronic excitation and vibrational excitation of atoms and molecules. The products can then take part in chemical reactions, recombination with electrons, or radiative or collisional deactivation. While most such processes are fast, some longer-lived species do not reach equilibrium. The electron source (photoelectrons or auroral electrons) also varies over time and longer-lived species can move substantially in altitude by molecular, ambipolar or eddy diffusion. Hence non-equilibrium calculations are required in some circumstances. Such time-step calculations need to have sufficiently short steps so that the fastest processes are still calculated correctly, but this can lead to computation times that are too large. Hence techniques to allow for longer time steps by incorporating equilibrium calculations are described. Examples are given for results of atmospheric non-equilibrium calculations, including the populations of the vibrational levels of ground state N2, the electron density and its dependence on vibrationally excited N2, predictions of nitric oxide density, and detailed processes during short duration auroral events.

  2. MultiDrizzle: An Integrated Pyraf Script for Registering, Cleaning and Combining Images

    NASA Astrophysics Data System (ADS)

    Koekemoer, A. M.; Fruchter, A. S.; Hook, R. N.; Hack, W.

    We present the new PyRAF-based `MultiDrizzle' script, which is aimed at providing a one-step approach to combining dithered HST images. The purpose of this script is to allow easy interaction with the complex suite of tasks in the IRAF/STSDAS `dither' package, as well as the new `PyDrizzle' task, while at the same time retaining the flexibility of these tasks through a number of parameters. These parameters control the various individual steps, such as sky subtraction, image registration, `drizzling' onto separate output images, creation of a clean median image, transformation of the median with `blot' and creation of cosmic ray masks, as well as the final image combination step using `drizzle'. The default parameters of all the steps are set so that the task will work automatically for a wide variety of different types of images, while at the same time allowing adjustment of individual parameters for special cases. The script currently works for both ACS and WFPC2 data, and is now being tested on STIS and NICMOS images. We describe the operation of the script and the effect of various parameters, particularly in the context of combining images from dithered observations using ACS and WFPC2. Additional information is also available at the `MultiDrizzle' home page: http://www.stsci.edu/~koekemoe/multidrizzle/

  3. Adaptive time steps in trajectory surface hopping simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de

    2016-05-21

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  4. Adaptive time steps in trajectory surface hopping simulations

    NASA Astrophysics Data System (ADS)

    Spörkel, Lasse; Thiel, Walter

    2016-05-01

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  5. Finite-difference fluid dynamics computer mathematical models for the design and interpretation of experiments for space flight. [atmospheric general circulation experiment, convection in a float zone, and the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.; Fowlis, W. W.; Miller, T. L.

    1984-01-01

    Numerical methods are used to design a spherical baroclinic flow model experiment of the large-scale atmospheric flow for Spacelab. The dielectric simulation of radial gravity is dominant only in a low-gravity environment. Computer codes are developed to study the processes at work in crystal growing systems, which are also candidates for space flight. Crystalline materials rarely achieve their potential properties because of imperfections and component concentration variations. Thermosolutal convection in the liquid melt can be the cause of these imperfections. Such convection is suppressed in a low-gravity environment. Two- and three-dimensional finite difference codes are being used for this work. Nonuniform meshes and implicit iterative methods are used. The iterative method for steady solutions is based on time stepping but has the options of different time steps for velocity and temperature and of a time step varying smoothly with position according to specified powers of the mesh spacings, which allows for more rapid convergence. The code being developed for the crystal growth studies allows for growth of the crystal at the solid-liquid interface. The moving interface is followed using finite differences; shape variations are permitted. For convenience in applying finite differences in the solid and liquid, a time-dependent coordinate transformation is used to make this interface a coordinate surface.
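
    A minimal sketch of the position-dependent pseudo-time step described above, assuming a field of local steps proportional to powers of the local mesh spacings; the parameter names and values are illustrative, not taken from the report.

    ```python
    import numpy as np

    def local_pseudo_time_steps(dx, dy, dt0=1.0, p=1.0, q=1.0):
        """Field of local pseudo-time steps proportional to powers of the
        local mesh spacings; dt0, p and q are illustrative parameters."""
        DX, DY = np.meshgrid(dx, dy, indexing="ij")
        return dt0 * DX**p * DY**q

    # Nonuniform spacings: coarse regions get larger local steps, which
    # accelerates convergence of the time-stepping iteration to steady state.
    dx = np.linspace(0.01, 0.1, 50)
    dy = np.linspace(0.02, 0.2, 40)
    dt = local_pseudo_time_steps(dx, dy)
    ```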

  6. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials

    PubMed Central

    Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla

    2016-01-01

    Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
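
    As a baseline for the allowances discussed above, here is a minimal sketch of a cluster-trial sample size calculation: an individually randomised two-arm size inflated by the usual clustering design effect DE = 1 + (m - 1) x ICC. A full SW-CRT calculation must additionally model time effects and, for cohort designs, repeated measures (for example via the Hussey and Hughes variance); the inputs below are illustrative.

    ```python
    from math import ceil
    from statistics import NormalDist

    def cluster_trial_n(delta, sd, m, icc, alpha=0.05, power=0.8):
        """Two-arm sample size inflated by the clustering design effect
        DE = 1 + (m - 1) * ICC, where m is the cluster size. This is the
        allowance for clustering only; stepped-wedge designs also need
        time effects and repeated measures in the calculation."""
        z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
        n_individual = 2 * (z * sd / delta) ** 2          # per arm, unclustered
        return ceil(n_individual * (1 + (m - 1) * icc))   # per arm, clustered

    print(cluster_trial_n(delta=0.3, sd=1.0, m=20, icc=0.05))
    ```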

  7. 76 FR 78702 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-19

    ... p.m., Eastern Time. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other... the closing of the Sherwood, Michigan post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners...

  8. Real time en face Fourier-domain optical coherence tomography with direct hardware frequency demodulation

    PubMed Central

    Biedermann, Benjamin R.; Wieser, Wolfgang; Eigenwillig, Christoph M.; Palte, Gesa; Adler, Desmond C.; Srinivasan, Vivek J.; Fujimoto, James G.; Huber, Robert

    2009-01-01

    We demonstrate en face swept source optical coherence tomography (ss-OCT) without requiring a Fourier transformation step. The electronic optical coherence tomography (OCT) interference signal from a k-space linear Fourier domain mode-locked laser is mixed with an adjustable local oscillator, yielding the analytic reflectance signal from one image depth for each frequency sweep of the laser. Furthermore, a method for arbitrarily shaping the spectral intensity profile of the laser is presented, without requiring the step of numerical apodization. In combination, these two techniques enable sampling of the in-phase and quadrature signal with a slow analog-to-digital converter and allow for real-time display of en face projections even for highest axial scan rates. Image data generated with this technique is compared to en face images extracted from a three-dimensional OCT data set. This technique can allow for real-time visualization of arbitrarily oriented en face planes for the purpose of alignment, registration, or operator-guided survey scans while simultaneously maintaining the full capability of high-speed volumetric ss-OCT functionality. PMID:18978919
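
    The hardware demodulation scheme has a direct software analogue, sketched below: mixing the fringe signal with a complex local oscillator and low-pass filtering selects the reflectance of the single depth whose fringe frequency matches the oscillator. Function and parameter names are illustrative, not from the paper.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def en_face_reflectance(fringe, fs, f_lo):
        """Mix the OCT fringe signal with a local oscillator at f_lo and
        low-pass filter: the result is the analytic reflectance of the one
        depth whose fringe frequency equals f_lo."""
        t = np.arange(fringe.size) / fs
        mixed = fringe * np.exp(-2j * np.pi * f_lo * t)   # shift f_lo to DC
        b, a = butter(4, 0.01)                            # slow low-pass filter
        i = filtfilt(b, a, mixed.real)                    # in-phase component
        q = filtfilt(b, a, mixed.imag)                    # quadrature component
        return np.sqrt(i**2 + q**2)                       # reflectance amplitude
    ```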

  9. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  10. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
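
    The idea of time-step subcycling can be shown in a few lines: rapidly varying degrees of freedom (for example, closely interacting dislocation segments) take many small substeps per large step of the slow ones. The sketch uses forward Euler for brevity, whereas the paper uses a high-order explicit method; the rate functions are hypothetical.

    ```python
    def step_with_subcycling(y_fast, y_slow, f_fast, f_slow, dt, n_sub):
        """One large step for the slow variables, n_sub small substeps for
        the fast ones. f_fast and f_slow are user-supplied rate functions."""
        y_slow = y_slow + dt * f_slow(y_slow, y_fast)   # one large step
        h = dt / n_sub
        for _ in range(n_sub):                          # subcycle fast variables
            y_fast = y_fast + h * f_fast(y_fast, y_slow)
        return y_fast, y_slow
    ```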

  11. Application of Time-Frequency Representations To Non-Stationary Radar Cross Section

    DTIC Science & Technology

    2009-03-01

    The three-dimensional plot produced by a TFR allows one to determine which spectral components of a signal vary with time [25... a range bin (of width cT/2) from the stepped frequency waveform. 2. Cancel the clutter (stationary components) by zeroing out points associated with ...generating an infinite number of bilinear Time Frequency distributions based on a generalized equation and a changeable

  12. Gap Year: Time off, with a Plan

    ERIC Educational Resources Information Center

    Torpey, Elka Maria

    2009-01-01

    A gap year allows people to step off the usual educational or career path and reassess their future. According to people who have taken a gap year, the time away can be well worth it. This article can help a person decide whether to take a gap year and how to make the most of his time off. It describes what a gap year is, including its pros and…

  13. Evidence-based practice, step by step: critical appraisal of the evidence: part II: digging deeper--examining the "keeper" studies.

    PubMed

    Fineout-Overholt, Ellen; Melnyk, Bernadette Mazurek; Stillwell, Susan B; Williamson, Kathleen M

    2010-09-01

    This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

  14. The GEMPAK Barnes objective analysis scheme

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Desjardins, M.; Kocin, P. J.

    1981-01-01

    GEMPAK, an interactive computer software system developed for the purpose of assimilating, analyzing, and displaying various conventional and satellite meteorological data types, is discussed. The objective map analysis scheme possesses certain characteristics that allowed it to be adapted to meet the analysis needs of GEMPAK. Those characteristics and the specific adaptation of the scheme to GEMPAK are described. A step-by-step guide for using the GEMPAK Barnes scheme on an interactive computer (in real time) to analyze various types of meteorological datasets is also presented.
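
    For orientation, a textbook two-pass Barnes analysis is sketched below: a Gaussian-weighted first guess followed by a correction pass at a reduced length scale. GEMPAK's implementation includes scheme-specific choices not reproduced here, so this is an illustrative sketch only.

    ```python
    import numpy as np

    def barnes_two_pass(xo, yo, zo, xg, yg, kappa, gamma=0.3):
        """Two-pass Barnes objective analysis of observations (xo, yo, zo)
        onto grid points (xg, yg); kappa is the Gaussian length-scale
        parameter, and the second pass uses the reduced scale gamma*kappa."""
        def analyse(values, scale, xt, yt):
            w = np.exp(-((xo - xt) ** 2 + (yo - yt) ** 2) / scale)
            return np.sum(w * values) / np.sum(w)

        # Pass 1: first-guess field on the grid and at the observation sites.
        g1 = np.array([analyse(zo, kappa, x, y) for x, y in zip(xg, yg)])
        at_obs = np.array([analyse(zo, kappa, x, y) for x, y in zip(xo, yo)])
        # Pass 2: restore small-scale detail from the observation residuals.
        resid = zo - at_obs
        return g1 + np.array([analyse(resid, gamma * kappa, x, y)
                              for x, y in zip(xg, yg)])
    ```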

  15. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
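
    A minimal sketch of the distribution-function idea, assuming (purely for illustration) an exponential within-day intensity distribution, for which the expected infiltration excess has a closed form; the paper instead propagates a calibrated cdf through the full model.

    ```python
    import numpy as np

    def daily_runoff_cdf(P, D, K):
        """Infiltration-excess runoff from a daily rainfall total P (mm)
        using a within-day intensity distribution. Intensity over the wet
        duration D (h) is assumed Exp(mu) with mu = P/D (an illustrative
        choice); K is the infiltration capacity (mm/h). The expected
        excess of an Exp(mu) intensity over K is mu * exp(-K/mu)."""
        mu = P / D                      # mean wet-period intensity (mm/h)
        return P * np.exp(-K / mu)      # expected daily runoff (mm)

    # The same 40 mm daily total sheds far more runoff when it falls in
    # 4 h than in 12 h, an effect a daily-average intensity cannot capture.
    print(daily_runoff_cdf(40, 4, 5.0), daily_runoff_cdf(40, 12, 5.0))
    ```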

  16. Quick foot placement adjustments during gait are less accurate in individuals with focal cerebellar lesions.

    PubMed

    Hoogkamer, Wouter; Potocanac, Zrinka; Van Calenbergh, Frank; Duysens, Jacques

    2017-10-01

    Online gait corrections are frequently used to restore gait stability and prevent falling. They require shorter response times than voluntary movements, which suggests that subcortical pathways contribute to the execution of online gait corrections. To evaluate the potential role of the cerebellum in these pathways, we tested the hypotheses that online gait corrections would be less accurate in individuals with focal cerebellar damage than in neurologically intact controls and that this difference would be more pronounced for shorter available response times and for short step gait corrections. We projected virtual stepping stones on an instrumented treadmill while some of the approaching stepping stones were shifted forward or backward, requiring participants to adjust their foot placement. Varying the timing of those shifts allowed us to address the effect of available response time on foot placement error. In agreement with our hypothesis, individuals with focal cerebellar lesions were less accurate in adjusting their foot placement in reaction to suddenly shifted stepping stones than neurologically intact controls. However, the cerebellar lesion group's foot placement error did not increase more with decreasing available response distance or for short step versus long step adjustments compared to the control group. Furthermore, foot placement error for the non-shifting stepping stones was also larger in the cerebellar lesion group than in the control group. Consequently, the reduced ability to accurately adjust foot placement during walking in individuals with focal cerebellar lesions appears to be a general movement control deficit, which could contribute to increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  18. A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2002-09-01

    A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
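
    The efficiency gain follows directly from the two CFL limits; a small sketch comparing them:

    ```python
    def allowable_time_steps(dx, u, c):
        """Acoustic vs convective CFL limits: a fully explicit compressible
        scheme needs dt <= dx / (|u| + c); treating the pressure (Helmholtz)
        equation implicitly relaxes this to dt <= dx / |u|."""
        return dx / (abs(u) + c), dx / abs(u)

    # Low-Mach example: u = 10 m/s, c = 340 m/s, dx = 1 mm
    dt_acoustic, dt_convective = allowable_time_steps(1e-3, 10.0, 340.0)
    print(dt_convective / dt_acoustic)   # 35x larger allowable step at Mach ~0.03
    ```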

  19. Aqueous solvation from the water perspective.

    PubMed

    Ahmed, Saima; Pasti, Andrea; Fernández-Terán, Ricardo J; Ciardi, Gustavo; Shalit, Andrey; Hamm, Peter

    2018-06-21

    The response of water re-solvating a charge-transfer dye (deprotonated Coumarin 343) after photoexcitation has been measured by means of transient THz spectroscopy. Two steps of increasing THz absorption are observed: a first ∼10 ps step on the time scale of Debye relaxation of bulk water and a much slower step on a 3.9 ns time scale, the latter reflecting heating of the bulk solution upon electronic relaxation of the dye molecules from the S1 back into the S0 state. As an additional reference experiment, the hydroxyl vibration of water has been excited directly by a short IR pulse, establishing that the THz signal measures an elevated temperature within ∼1 ps. This result shows that the first step upon dye excitation (10 ps) is not limited by the response time of the THz signal; it rather reflects the reorientation of water molecules in the solvation layer. The apparent discrepancy between the relatively slow reorientation time and the general notion that water is among the fastest solvents, with a solvation time in the sub-picosecond regime, is discussed. Furthermore, non-equilibrium molecular dynamics simulations have been performed, revealing close-to-quantitative agreement with experiment, which allows one to disentangle the contribution of heating to the overall THz response from that of water orientation.

  20. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  1. Evaluation of a continuous-rotation, high-speed scanning protocol for micro-computed tomography.

    PubMed

    Kerl, Hans Ulrich; Isaza, Cristina T; Boll, Hanne; Schambach, Sebastian J; Nolte, Ingo S; Groden, Christoph; Brockmann, Marc A

    2011-01-01

    Micro-computed tomography is used frequently in preclinical in vivo research. Limiting factors are radiation dose and long scan times. The purpose of the study was to compare a standard step-and-shoot to a continuous-rotation, high-speed scanning protocol. Micro-computed tomography of a lead grid phantom and a rat femur was performed using a step-and-shoot and a continuous-rotation protocol. Detail discriminability and image quality were assessed by 3 radiologists. The signal-to-noise ratio and the modulation transfer function were calculated, and volumetric analyses of the femur were performed. The radiation dose of the scan protocols was measured using thermoluminescence dosimeters. The 40-second continuous-rotation protocol allowed a detail discriminability comparable to the step-and-shoot protocol at significantly lower radiation doses. No marked differences in volumetric or qualitative analyses were observed. Continuous-rotation micro-computed tomography significantly reduces scanning time and radiation dose without relevantly reducing image quality compared with a normal step-and-shoot protocol.

  2. Phase Shift Interferometer and Growth Setup to Study Step Pattern Formation During Growth From Solutions. Influence of the Oscillatory Solution Flow on Stability

    NASA Technical Reports Server (NTRS)

    Chernov, Alex A.; Booth, N. A.; Vekilov, P. G.; Murray, B. T.; McFadden, G. B.

    2000-01-01

    We have assembled an experimental setup based on Michelson interferometry with the growing crystal surface as one of the reflective surfaces. The crystallization part of the device allows optical monitoring of a face of a crystal growing at a temperature stable to within 0.05 C in a flow of solution of controlled direction and speed. The reference arm of the interferometer contains a liquid crystal element that allows controlled shifts of the phase of the interferograms. We employ an image-processing algorithm which combines five images with a pi/2 phase difference between each pair of images. The images are transferred to a computer by a camera capable of capturing 60 frames per second. The device allows in situ, real-time data collection on surface morphology and kinetics during layer growth of the face over a relatively large area (approximately 4 sq. mm). The estimated depth resolution of the phase shifting interferometry is approximately 50 Angstroms. The data will be analyzed in order to reveal and monitor step bunching during the growth process. The crystal chosen as a model for study in this work is KH2PO4 (KDP). This optically non-linear material is widely used in frequency doubling applications. There have been a number of studies of the kinetics of KDP crystallization that can serve as a benchmark for our investigations. However, so far, systematic quantitative characteristics of step interaction and bunching are missing. We intend to present our first quantitative results on the onset, initial stages and development of instabilities in moving step trains on vicinal crystal surfaces at varying supersaturation, flow rate, and flow direction. The behavior of a vicinal face growing from a solution that flows normal to the steps and periodically reverses direction was considered theoretically. It was found that this oscillating flow reduces both the stabilization and destabilization effects resulting from unidirectional solution flow directed up or down the step stream. This reduction of stabilization and destabilization comes from effective mixing, which entangles the phase shifts between the spatially periodic interface perturbation and the concentration wave induced by this perturbation. Numerical results and a simplified mixing criterion will be discussed.
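
    Given five interferograms separated by pi/2 phase steps, the surface phase can be recovered with the standard five-frame (Hariharan-type) estimator sketched below. The abstract specifies the five pi/2-shifted images but not the exact combination used, so this particular formula is an assumption.

    ```python
    import numpy as np

    def five_frame_phase(I1, I2, I3, I4, I5):
        """Hariharan-type five-frame phase estimator for interferograms
        with pi/2 phase steps between consecutive frames."""
        phase = np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
        return np.unwrap(np.unwrap(phase, axis=0), axis=1)   # 2D unwrap

    # Surface height then follows as h = phase * wavelength / (4 * pi)
    # for reflection off the growing face.
    ```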

  3. Wafer-scale Thermodynamically Stable GaN Nanorods via Two-Step Self-Limiting Epitaxy for Optoelectronic Applications

    NASA Astrophysics Data System (ADS)

    Kum, Hyun; Seong, Han-Kyu; Lim, Wantae; Chun, Daemyung; Kim, Young-Il; Park, Youngsoo; Yoo, Geonwook

    2017-01-01

    We present a method of epitaxially growing thermodynamically stable gallium nitride (GaN) nanorods via metal-organic chemical vapor deposition (MOCVD) by invoking a two-step self-limited growth (TSSLG) mechanism. This allows for growth of nanorods with excellent geometrical uniformity and no visible extended defects over a 100 mm sapphire (Al2O3) wafer. An ex-situ study of the growth morphology as a function of growth time for the two self-limiting steps elucidates the growth dynamics, showing that the formation of an Ehrlich-Schwoebel barrier and preferential growth in the c-plane direction govern the growth process. This process allows monolithic formation of dimensionally uniform nanowires on templates with varying filling matrix patterns for a variety of novel electronic and optoelectronic applications. A color-tunable, phosphor-free white-light LED with a coaxial architecture is fabricated as a demonstration of the applicability of these nanorods grown by TSSLG.

  4. Detection and Characterization of Viral Species/Subspecies Using Isothermal Recombinase Polymerase Amplification (RPA) Assays.

    PubMed

    Glais, Laurent; Jacquot, Emmanuel

    2015-01-01

    Numerous molecular-based detection protocols include an amplification step for the targeted nucleic acids. This step is important for reaching the sensitive detection of pathogens expected in diagnostic procedures. Amplification of nucleic acid sequences is generally performed, in the presence of appropriate primers, using thermocyclers. However, the time required to amplify molecular targets and the cost of thermocycler machines can impair the use of these methods in routine diagnostics. The recombinase polymerase amplification (RPA) technique allows rapid (short-term incubation of sample and primers in an enzymatic mixture) and simple (isothermal) amplification of molecular targets. An RPA protocol requires only basic molecular steps such as extraction procedures and agarose gel electrophoresis. Thus, RPA can be considered an interesting alternative to standard molecular-based diagnostic tools. In this paper, the complete procedure for setting up an RPA assay, applied to the detection of RNA (Potato virus Y, Potyvirus) and DNA (Wheat dwarf virus, Mastrevirus) viruses, is described. The proposed procedure allows the development of species- or subspecies-specific detection assays.

  5. 76 FR 68232 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-03

    ... the closing of the Ogden, Arkansas post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Service); November 22, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  6. 76 FR 76449 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... steps and provides a procedural schedule. Publication of this document will allow the Postal Service... Postal Service); December 27, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other dates of interest. ADDRESSES: Submit...

  7. 76 FR 72456 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... the closing of the Scottville, Illinois post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service... Postal Service); December 12, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  8. 76 FR 76452 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... steps and provides a procedural schedule. Publication of this document will allow the Postal Service... Postal Service); December 27, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other dates of interest. ADDRESSES: Submit...

  9. 76 FR 76447 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... the closing of the Miller, Nebraska post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Service); December 27, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  10. 76 FR 68235 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-03

    ... steps and provides a procedural schedule. Publication of this document will allow the Postal Service... Postal Service); November 22, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other dates of interest. ADDRESSES: Submit...

  11. 76 FR 78953 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-20

    ... p.m., Eastern Time. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other... the closing of the Mount Union, Iowa post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners...

  12. 76 FR 72457 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... steps and provides a procedural schedule. Publication of this document will allow the Postal Service... Postal Service); December 12, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other dates of interest. ADDRESSES: Submit...

  13. 76 FR 68231 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-03

    ... the closing of the Ferguson, Iowa post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Service); November 21, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  14. 76 FR 68234 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-03

    ... steps and provides a procedural schedule. Publication of this document will allow the Postal Service... Postal Service); November 22, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the Procedural Schedule in the SUPPLEMENTARY INFORMATION section for other dates of interest. ADDRESSES: Submit...

  15. Why Not Wait? Eight Institutions Share Their Experiences Moving United States Medical Licensing Examination Step 1 After Core Clinical Clerkships.

    PubMed

    Daniel, Michelle; Fleming, Amy; Grochowski, Colleen O'Conner; Harnik, Vicky; Klimstra, Sibel; Morrison, Gail; Pock, Arnyce; Schwartz, Michael L; Santen, Sally

    2017-11-01

    The majority of medical students complete the United States Medical Licensing Examination Step 1 after their foundational sciences; however, there are compelling reasons to examine this practice. This article provides the perspectives of eight MD-granting medical schools that have moved Step 1 after the core clerkships, describing their rationale, logistics of the change, outcomes, and lessons learned. The primary reasons these institutions cite for moving Step 1 after clerkships are to foster more enduring and integrated basic science learning connected to clinical care and to better prepare students for the increasingly clinical focus of Step 1. Each school provides key features of the preclerkship and clinical curricula and details concerning taking Steps 1 and 2, to allow other schools contemplating change to understand the landscape. Most schools report an increase in aggregate Step 1 scores after the change. Despite early positive outcomes, there may be unintended consequences to later scheduling of Step 1, including relatively late student reevaluations of their career choice if Step 1 scores are not competitive in the specialty area of their choice. The score increases should be interpreted with caution: These schools may not be representative with regard to mean Step 1 scores and failure rates. Other aspects of curricular transformation and rising national Step 1 scores confound the data. Although the optimal timing of Step 1 has yet to be determined, this article summarizes the perspectives of eight schools that changed Step 1 timing, filling a gap in the literature on this important topic.

  16. Real-time in-flight thrust calculation on a digital electronic engine control-equipped F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1984-01-01

    Computer algorithms that calculate in-flight engine and aircraft performance in real time are discussed. The first step was completed with the implementation of a real-time thrust calculation program on a digital electronic engine control (DEEC) equipped F100 engine in an F-15 aircraft. The modifications to the in-flight thrust algorithm that allow calculations to be performed in real time, and comparisons of the results to predictions, are presented.

  17. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed from 2011 to 2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). A single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
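
    Time-driven activity-based costing reduces, per session, to materials plus staff rate times hands-on time. In the sketch below only the procedure times (270 vs 120 min) come from the report; the material costs and staff rates are placeholders chosen purely for illustration.

    ```python
    def tdabc_cost(materials_eur, staff_rate_eur_per_min, minutes):
        """Per-session cost = materials + staff rate x hands-on time."""
        return materials_eur + staff_rate_eur_per_min * minutes

    # Illustrative inputs; only the procedure times are from the report.
    multi_step = tdabc_cost(materials_eur=1000.0, staff_rate_eur_per_min=1.5, minutes=270)
    integrated = tdabc_cost(materials_eur=1100.0, staff_rate_eur_per_min=1.5, minutes=120)
    print(multi_step, integrated)   # the longer hands-on time dominates the difference
    ```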

  18. Typical Toddlers' Participation in "Just-in-Time" Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study.

    PubMed

    Holyfield, Christine; Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-08-15

    Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary "just in time" on an AAC application with minimized demands. A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10-22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age.

  19. 40 CFR 35.2025 - Allowance and advance of allowance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... would receive under paragraph (a) of this section. (5) In the event a Step 2+3, Step 3 or Step 7 grant... terms and conditions as it may determine. When the State recovers such advances they shall be added to...

  20. 76 FR 63331 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-12

    ... the closing of the Lakeville, Connecticut post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service...): October 17, 2011; deadline for notices to intervene: October 31, 2011, 4:30 p.m., eastern time. See the...

  1. 76 FR 66336 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ... the closing of the Ardenvoir, Washington, post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service...): October 28, 2011; deadline for notices to intervene: November 14, 2011, 4:30 p.m., eastern time. See the...

  2. 76 FR 76448 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... the closing of the Viola, Idaho post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Postal Service); December 27, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  3. 76 FR 72453 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... the closing of the Waverly, Washington post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Postal Service); December 12, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  4. 76 FR 72458 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... the closing of the Pace, Mississippi post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Postal Service); December 12, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  5. 76 FR 76451 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... the closing of the Nixon, Nevada post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Postal Service); December 27, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  6. 76 FR 63332 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-12

    ... the closing of the Evansdale, Iowa post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... 17, 2011; deadline for notices to intervene: October 31, 2011, 4:30 p.m., eastern time. See the...

  7. 76 FR 72454 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... the closing of the Elmo, Missouri post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Postal Service); December 12, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  8. 76 FR 66337 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ... the closing of the Ruth, Mississippi, post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... 28, 2011; deadline for notices to intervene: November 14, 2011, 4:30 p.m., eastern time. See the...

  9. 76 FR 76444 - Post Office Closing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... the closing of the Holland, Iowa post office has been filed. It identifies preliminary steps and provides a procedural schedule. Publication of this document will allow the Postal Service, petitioners... Postal Service); December 27, 2011, 4:30 p.m., Eastern Time: Deadline for notices to intervene. See the...

  10. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the earthquake scale to the geological scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique allows one to follow the deformation process in detail during an entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations, and we demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of modelling the deformation of the upper plate during multiple earthquake cycles on time scales of hundreds of thousands to millions of years, and discuss the effect of great earthquakes on the long-term stress field in the upper plate.
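
    The drop-and-ramp step control described above can be sketched in a few lines; the instability flag would come from the rate-and-state friction solver, a hypothetical interface here.

    ```python
    def next_time_step(dt, instability, dt_min=40.0,
                       dt_max=5 * 365.25 * 24 * 3600.0, growth=1.5):
        """On detecting instability (an earthquake), drop the step to its
        minimum (40 s); afterwards grow it gradually back toward the
        maximum (5 yr), following the decaying postseismic rates. The
        growth factor is an illustrative choice."""
        if instability:
            return dt_min
        return min(dt * growth, dt_max)
    ```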

  11. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a `softened' singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense-field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
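
    Krylov-subspace propagation of the kind underlying such methods can be sketched with a plain short-iterative-Lanczos step; this is not the Residuum algorithm itself, which adds residual-based variable time-step control on top of the subspace construction.

    ```python
    import numpy as np

    def lanczos_step(H, psi, dt, m=12):
        """One short-iterative-Lanczos step for i d(psi)/dt = H psi with
        Hermitian H (dense here for brevity; no breakdown guard)."""
        n = psi.size
        V = np.zeros((m, n), dtype=complex)
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        V[0] = psi / np.linalg.norm(psi)
        for j in range(m):                         # build the Krylov basis
            w = H @ V[j]
            alpha[j] = np.vdot(V[j], w).real
            w = w - alpha[j] * V[j] - (beta[j - 1] * V[j - 1] if j > 0 else 0.0)
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        evals, U = np.linalg.eigh(T)               # exp(-i*T*dt) via eigendecomposition
        c = U @ (np.exp(-1j * evals * dt) * U[0])
        return np.linalg.norm(psi) * (c @ V)       # propagated wavefunction
    ```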

  12. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy for selecting the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost severalfold, but also allows us to set the modelling parameters, such as the time step length, grid interval and P-wave speed, flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical applications such as imaging algorithms and inverse problems.
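
    For orientation, the lowest-order (second-order) member of the staggered family for the 1-D acoustic wave equation is sketched below: pressure lives on integer time levels and particle velocity on half-integer levels. The paper's contribution is extending this staggering to arbitrary temporal order with an order-selection strategy, which this sketch does not attempt; all parameter values are illustrative.

    ```python
    import numpy as np

    def staggered_acoustic_1d(nx=400, nt=1000, dx=5.0, dt=5e-4, c=2000.0):
        """Leapfrog on a staggered grid: pressure p at integer time levels,
        particle velocity v at half-integer levels (CFL number c*dt/dx = 0.2)."""
        rho = 1000.0
        kappa = rho * c**2                    # bulk modulus
        p = np.zeros(nx)
        v = np.zeros(nx - 1)                  # lives on the half-grid
        for n in range(nt):
            v += dt / (rho * dx) * np.diff(p)            # update v to t + dt/2
            p[1:-1] += dt * kappa / dx * np.diff(v)      # update p to t + dt
            p[nx // 2] += np.exp(-((n * dt - 0.05) / 0.01) ** 2)  # Gaussian source
        return p
    ```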

  13. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778

  14. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    PubMed

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
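
    Steps 4 through 6 of the process described above reduce, in miniature, to fixing a window duration and computing per-window features (latent variables) that a standard classifier can consume. The feature choices below are illustrative, not the study's.

    ```python
    import numpy as np

    def time_window_features(signal, fs, window_s=60.0):
        """Split a monitored signal into fixed windows and compute simple
        time-series features per window, one feature row per window."""
        w = int(window_s * fs)
        feats = []
        for k in range(len(signal) // w):
            x = signal[k * w:(k + 1) * w]
            slope = np.polyfit(np.arange(w), x, 1)[0]   # trend: deterioration proxy
            feats.append([x.mean(), x.std(), slope, x.min(), x.max()])
        return np.asarray(feats)   # ready for model training

    # e.g. heart-rate samples at 1 Hz yield one feature row per minute
    ```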

  15. Identification of the coupling step in Na(+)-translocating NADH:quinone oxidoreductase from real-time kinetics of electron transfer.

    PubMed

    Belevich, Nikolai P; Bertsova, Yulia V; Verkhovskaya, Marina L; Baykov, Alexander A; Bogachev, Alexander V

    2016-02-01

    Bacterial Na(+)-translocating NADH:quinone oxidoreductase (Na(+)-NQR) uses a unique set of prosthetic redox groups (two covalently bound FMN residues, a [2Fe-2S] cluster, FAD, riboflavin and a Cys4[Fe] center) to catalyze electron transfer from NADH to ubiquinone in a reaction coupled with Na(+) translocation across the membrane. Here we used an ultra-fast microfluidic stopped-flow instrument to determine rate constants and difference spectra for the six consecutive reaction steps of Vibrio harveyi Na(+)-NQR reduction by NADH. The instrument, with a dead time of 0.25 ms and an optical path length of 1 cm, allowed collection of visible spectra at 50-μs intervals. By comparing the spectra of the reaction steps with the spectra of known redox transitions of individual enzyme cofactors, we were able to identify the chemical nature of most intermediates and the sequence of electron transfer events. A previously unknown spectral transition was detected and assigned to reduction of the Cys4[Fe] center. Electron transfer from the [2Fe-2S] cluster to the Cys4[Fe] center and all subsequent steps were markedly accelerated when the Na(+) concentration was increased from 20 μM to 25 mM, suggesting coupling of the former step with tight Na(+) binding to, or occlusion by, the enzyme. An alternating access mechanism was proposed to explain electron transfer between subunits NqrF and NqrC. According to the proposed mechanism, the Cys4[Fe] center is alternately exposed to either side of the membrane, allowing the [2Fe-2S] cluster of NqrF and the FMN residue of NqrC to alternately approach the Cys4[Fe] center from different sides of the membrane. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Development of a Training Model for Laparoscopic Common Bile Duct Exploration

    PubMed Central

    Rodríguez, Omaira; Benítez, Gustavo; Sánchez, Renata; De la Fuente, Liliana

    2010-01-01

    Background: Training and experience of the surgical team are fundamental for the safety and success of complex surgical procedures, such as laparoscopic common bile duct exploration. Methods: We describe an inert, simple, very low-cost, and readily available training model. Created using a “black box” and basic medical and surgical material, it allows training in the fundamental steps necessary for laparoscopic biliary tract surgery, namely, (1) intraoperative cholangiography, (2) transcystic exploration, and (3) laparoscopic choledochotomy, and t-tube insertion. Results: The proposed model has allowed for the development of the skills necessary for partaking in said procedures, contributing to its development and diminishing surgery time as the trainee advances down the learning curve. Further studies are directed towards objectively determining the impact of the model on skill acquisition. Conclusion: The described model is simple and readily available allowing for accurate reproduction of the main steps and maneuvers that take place during laparoscopic common bile duct exploration, with the purpose of reducing failure and complications. PMID:20529526

  17. Australia's marine virtual laboratory

    NASA Astrophysics Data System (ADS)

    Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe

    2014-05-01

    In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher more quickly to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open-source application which offers a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.

  18. 78 FR 36533 - Information Collection; Submission for OMB Review, Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-18

    ... appropriate next step for RSVP projects. The data submitted at the mid-point each year will also allow CNCS to..., electronic, mechanical, or other technological collection techniques or other forms of information technology... project management responsibilities. Four of the 19 comments specifically noted that grantee time is...

  19. THE INFLUENCE OF MODEL TIME STEP ON THE RELATIVE SENSITIVITY OF POPULATION GROWTH TO SURVIVAL, GROWTH AND REPRODUCTION

    EPA Science Inventory

    Matrix population models are often used to extrapolate from life stage-specific stressor effects on survival and reproduction to population-level effects. Demographic elasticity analysis of a matrix model allows an evaluation of the relative sensitivity of population growth rate ...

  20. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching laminar Navier-Stokes codes through the combination of local preconditioning and multi-stage time-marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness of a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time-marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.

  1. Toward a virtual building laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klems, J.H.; Finlayson, E.U.; Olsen, T.H.

    1999-03-01

    In order to achieve in a timely manner the large energy and dollar savings technically possible through improvements in building energy efficiency, it will be necessary to solve the problem of design failure risk. The most economical method of doing this would be to learn to calculate building performance with sufficient detail, accuracy and reliability to avoid design failure. Existing building simulation models (BSM) are a large step in this direction, but are still not capable of this level of modeling. Developments in computational fluid dynamics (CFD) techniques now allow one to construct a road map from present BSMs to a complete building physical model. The most useful first step is a building interior model (BIM) that would allow prediction of local conditions affecting occupant health and comfort. To provide reliable prediction, a BIM must incorporate the correct physical boundary conditions on a building interior. Doing so raises a number of specific technical problems and research questions. The solution of these within a context useful for building research and design is not likely to result from other research on CFD, which is directed toward the solution of different types of problems. A six-step plan for incorporating the correct boundary conditions within the context of the model problem of a large atrium has been outlined. A promising strategy for constructing a BIM is the overset grid technique for representing a building space in a CFD calculation. This technique promises to adapt well to building design and allows a step-by-step approach. A state-of-the-art CFD computer code using this technique has been adapted to the problem and can form the departure point for this research.

  2. Recent advances in high-order WENO finite volume methods for compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael

    2013-10-01

    We present two new families of better-than-second-order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which allows the material contact wave to be resolved very sharply when the mesh is moved at the speed of the material interface. The other family of methods is based on a high order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes have several building blocks in common, in particular: a high order WENO reconstruction operator to obtain high order of accuracy in space; the use of an element-local space-time Galerkin predictor step which evolves the reconstruction polynomials in time and allows high order of accuracy in time to be reached in a single step; and the use of a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.

  3. Two biomechanical strategies for locomotor adaptation to split-belt treadmill walking in subjects with and without transtibial amputation.

    PubMed

    Selgrade, Brian P; Toney, Megan E; Chang, Young-Hui

    2017-02-28

    Locomotor adaptation is commonly studied using split-belt treadmill walking, in which each foot is placed on a belt moving at a different speed. As subjects adapt to split-belt walking, they reduce metabolic power, but the biomechanical mechanism behind this improved efficiency is unknown. Analyzing mechanical work performed by the legs and joints during split-belt adaptation could reveal this mechanism. Because ankle work in the step-to-step transition is more efficient than hip work, we hypothesized that control subjects would reduce hip work on the fast belt and increase ankle work during the step-to-step transition as they adapted. We further hypothesized that subjects with unilateral transtibial amputation would instead increase propulsive work from their intact leg on the slow belt. Control subjects reduced hip work and shifted more ankle work to the step-to-step transition, supporting our hypothesis. Contrary to our second hypothesis, intact-leg work, ankle work and hip work in amputees were unchanged during adaptation. Furthermore, all subjects increased collisional energy loss on the fast belt, but did not increase propulsive work. This was possible because subjects moved further backward during fast-leg single support in late adaptation than in early adaptation, compensating by reducing backward movement in slow-leg single support. In summary, subjects used two strategies to improve mechanical efficiency in split-belt walking adaptation: a center-of-mass (CoM) displacement strategy that allows for less forward propulsion on the fast belt, and an ankle timing strategy that allows efficient ankle work in the step-to-step transition to increase while reducing inefficient hip work. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Numerical difficulties and computational procedures for thermo-hydro-mechanical coupled problems of saturated porous media

    NASA Astrophysics Data System (ADS)

    Simoni, L.; Secchi, S.; Schrefler, B. A.

    2008-12-01

    This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy apt to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is mainly studied here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station. These constitute the parameters allowing for the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, first to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, and then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method reveals its efficiency and simplicity in adapting the time step in the solution of coupled field problems.
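
    A minimal sketch of the time-adaptivity loop described above, using a standard accept/reject step-size controller; the scalar err stands in for the paper's Discontinuous Galerkin jump measure, and the gain/safety factors are conventional choices, not the authors' values:

      def adapt_dt(dt, err, tol, order=1, safety=0.9, grow=2.0, shrink=0.1):
          """Accept the step if the jump error is below tolerance, and rescale
          dt by the usual (tol/err)^(1/(order+1)) law, with clamped growth."""
          accept = err <= tol
          factor = safety * (tol / max(err, 1e-300)) ** (1.0 / (order + 1))
          return accept, dt * min(grow, max(shrink, factor))

      print(adapt_dt(dt=0.1, err=5e-3, tol=1e-3))  # step rejected, dt reduced
      print(adapt_dt(dt=0.1, err=5e-4, tol=1e-3))  # step accepted, dt enlarged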

  5. Demonstrating Enzyme Activation: Calcium/Calmodulin Activation of Phosphodiesterase

    ERIC Educational Resources Information Center

    Porta, Angela R.

    2004-01-01

    Demonstrating the steps of a signal transduction cascade usually involves radioactive materials and thus precludes its use in undergraduate teaching labs. Developing labs that allow the visual demonstration of these steps without the use of radioactivity is important for allowing students hands-on methods of illustrating each step of a signal…

  6. Overcoming the detection bandwidth limit in precision spectroscopy: The analytical apparatus function for a stepped frequency scan

    NASA Astrophysics Data System (ADS)

    Rohart, François

    2017-01-01

    In a previous paper [Rohart et al., Phys Rev A 2014;90(042506)], the influence of detection-bandwidth properties on observed line shapes in precision spectroscopy was theoretically modeled for the first time using the basic model of a continuous sweep of the laser frequency. Specific experiments confirmed general theoretical trends but also revealed several insufficiencies of the model in the case of stepped frequency scans. Consequently, since up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier transform techniques, the resulting time domain apparatus function takes a simple analytical form that can be easily implemented in line-shape fitting codes without any significant increase in computation time. This new model is then considered in detail for detection systems characterized by 1st and 2nd order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency step duration, notably for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a Butterworth 2nd order filter is emphasized.
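
    The role of the time-constant-to-step-duration ratio can be illustrated with a short simulation (not the paper's analytical apparatus function): a Lorentzian line is scanned step by step through an assumed 1st order detection filter, and the signal is read at the end of each dwell. All parameters are illustrative.

      import numpy as np

      def stepped_scan(tau=0.3, dwell=1.0, n_sub=1000):
          freqs = np.linspace(-5.0, 5.0, 201)      # scan grid, HWHM units
          dt = dwell / n_sub
          alpha = dt / (tau + dt)                  # discrete 1st order low-pass
          y, out = 0.0, []
          for f in freqs:
              s = 1.0 / (1.0 + f ** 2)             # true Lorentzian absorption
              for _ in range(n_sub):               # detector settles during the dwell
                  y += alpha * (s - y)
              out.append(y)                        # sampled at the end of each step
          return freqs, np.array(out)

      f, obs = stepped_scan()
      print("apparent line center:", f[np.argmax(obs)])  # dragged in the scan direction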

  7. Cingi Steps for preoperative computer-assisted image editing before reduction rhinoplasty.

    PubMed

    Cingi, Can Cemal; Cingi, Cemal; Bayar Muluk, Nuray

    2014-04-01

    The aim of this work is to provide a stepwise systematic guide for a preoperative photo-editing procedure for rhinoplasty cases involving the cooperation of a graphic artist and a surgeon. One hundred female subjects who planned to undergo a reduction rhinoplasty operation were included in this study. The Cingi Steps for Preoperative Computer Imaging (CS-PCI) program, a stepwise systematic guide for image editing using Adobe Photoshop's "liquify" effect, was applied to the rhinoplasty candidates. The stages of CS-PCI are as follows: (1) lowering the hump; (2) shortening the nose; (3) adjusting the tip projection; (4) perfecting the nasal dorsum; (5) creating a supratip break; and (6) exaggerating the tip projection and/or dorsal slope. Performing the Cingi Steps allows the patient to see what will happen during the operation and observe the final appearance of his or her nose. After the application of the described steps, 71 patients (71%) accepted step 4, and 21 (21%) accepted step 5. Only 10 patients (10%) wanted to make additional changes to their operation plans. The main benefits of using this method are that it decreases the time needed by the surgeon to perform a graphic analysis and reduces the time required for the patient to reach a decision about the procedure. It is an easy and reliable method that provides improved physician-patient communication, increased patient confidence, and enhanced surgical planning while limiting the time needed for planning. © 2014 ARS-AAOA, LLC.

  8. Quantitation of next generation sequencing library preparation protocol efficiencies using droplet digital PCR assays - a systematic comparison of DNA library preparation kits for Illumina sequencing.

    PubMed

    Aigrain, Louise; Gu, Yong; Quail, Michael A

    2016-06-13

    The emergence of next-generation sequencing (NGS) technologies in the past decade has allowed the democratization of DNA sequencing, both in terms of price per sequenced base and ease of producing DNA libraries. When it comes to preparing DNA sequencing libraries for Illumina, the current market leader, a plethora of kits are available, and it can be difficult for users to determine which kit is the most appropriate and efficient for their applications; the main concerns are not only cost but also minimal bias, yield and time efficiency. We compared 9 commercially available library preparation kits in a systematic manner using the same DNA sample, probing the amount of DNA remaining after each protocol step using a new droplet digital PCR (ddPCR) assay. This method allows the precise quantification of fragments bearing either adaptors or P5/P7 sequences on both ends just after ligation or PCR enrichment. We also investigated the potential influence of DNA input and DNA fragment size on the final library preparation efficiency. The overall preparation efficiencies of the libraries show important variations between the different kits, with the ones combining several steps into a single one exhibiting final yields 4 to 7 times higher than the other kits. Detailed ddPCR data also reveal that the adaptor ligation yield itself varies by more than a factor of 10 between kits, certain ligation efficiencies being so low that they could impair the original library complexity and impoverish the sequencing results. When a PCR enrichment step is necessary, lower adaptor-ligated DNA inputs lead to greater amplification yields, hiding the latent disparity between kits. We describe a ddPCR assay that allows us to probe the efficiency of the most critical step in the library preparation, ligation, and to draw conclusions on which kits are more likely to preserve the sample heterogeneity and reduce the need for amplification.
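
    The yields discussed above follow directly from the ddPCR counts; a toy calculation with invented numbers (not the paper's data) shows the bookkeeping:

      # ddPCR quantifies fragments bearing adaptors on both ends after ligation,
      # and fragments bearing P5/P7 on both ends after PCR enrichment.
      input_molecules = 1.0e9   # fragments entering the protocol (illustrative)
      post_ligation   = 2.5e8   # adaptor-on-both-ends fragments (illustrative)
      post_enrichment = 1.5e9   # P5/P7-on-both-ends fragments (illustrative)

      print(f"adaptor ligation yield: {post_ligation / input_molecules:.1%}")
      print(f"PCR gain over ligated input: {post_enrichment / post_ligation:.1f}x")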

  9. Developing and Modifying Behavioral Coding Schemes in Pediatric Psychology: A Practical Guide

    PubMed Central

    McMurtry, C. Meghan; Chambers, Christine T.; Bakeman, Roger

    2015-01-01

    Objectives To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. Methods This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. Results A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Conclusions Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. PMID:25416837

  10. A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.

    PubMed

    Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K

    2012-08-01

    Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline in 68 workflows (aggregates or sequence of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations to complete these tasks. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.

  11. Numerical algorithm comparison for the accurate and efficient computation of high-incidence vortical flow

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    1991-01-01

    Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.

  12. Real-Time Label-Free Direct Electronic Monitoring of Topoisomerase Enzyme Binding Kinetics on Graphene.

    PubMed

    Zuccaro, Laura; Tesauro, Cinzia; Kurkina, Tetiana; Fiorani, Paola; Yu, Hak Ki; Knudsen, Birgitta R; Kern, Klaus; Desideri, Alessandro; Balasubramanian, Kannan

    2015-11-24

    Monolayer graphene field-effect sensors operating in liquid have been widely deployed for detecting a range of analyte species often under equilibrium conditions. Here we report on the real-time detection of the binding kinetics of the essential human enzyme, topoisomerase I interacting with substrate molecules (DNA probes) that are immobilized electrochemically on to monolayer graphene strips. By monitoring the field-effect characteristics of the graphene biosensor in real-time during the enzyme-substrate interactions, we are able to decipher the surface binding constant for the cleavage reaction step of topoisomerase I activity in a label-free manner. Moreover, an appropriate design of the capture probes allows us to distinctly follow the cleavage step of topoisomerase I functioning in real-time down to picomolar concentrations. The presented results are promising for future rapid screening of drugs that are being evaluated for regulating enzyme activity.

  13. Quantum transport with long-range steps on Watts-Strogatz networks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Xu, Xin-Jian

    2016-07-01

    We study the transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN), which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using a localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and the rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1), as long-range interactions are intensified. The structural disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that the presence of long-range steps alone does not affect the efficiency of the coherent exciton transport, while random rewiring alone enhances partial localization. If both factors are considered simultaneously, localization is greatly strengthened and the transport becomes worse.
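
    For the linear (DLS/CTQW) case, the time-averaged occupation probability of the initial site can be computed directly from the Hamiltonian spectrum; a minimal sketch on a plain ring (long-range steps and rewired links would simply add further entries to H):

      import numpy as np

      N = 20
      H = np.zeros((N, N))
      for j in range(N):                     # nearest-neighbour hopping on the ring
          H[j, (j + 1) % N] = H[(j + 1) % N, j] = 1.0
      # a long-range step of length L would add H[j, (j + L) % N] entries

      w, V = np.linalg.eigh(H)
      times = np.linspace(0.0, 200.0, 2001)
      amp0 = (V[0] * V[0]) @ np.exp(-1j * np.outer(w, times))  # <0|exp(-iHt)|0>
      p0 = np.abs(amp0) ** 2                 # return probability versus time
      print("time-averaged return probability:", p0.mean())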

  14. Increasing accuracy of pulse transit time measurements by automated elimination of distorted photoplethysmography waves.

    PubMed

    van Velzen, Marit H N; Loeve, Arjo J; Niehof, Sjoerd P; Mik, Egbert G

    2017-11-01

    Photoplethysmography (PPG) is a widely available non-invasive optical technique to visualize pressure pulse waves (PWs). Pulse transit time (PTT) is a physiological parameter that is often derived from calculations on ECG and PPG signals and is based on tightly defined characteristics of the PW shape. PPG signals are sensitive to artefacts: coughing or movement of the subject can affect PW shapes so much that the PWs become unsuitable for further analysis. The aim of this study was to develop an algorithm that automatically and objectively eliminates unsuitable PWs. In order to develop a proper algorithm for eliminating unsuitable PWs, a literature study was conducted. Next, a '7Step PW-Filter' algorithm was developed that applies seven criteria to determine whether a PW matches the characteristics required to allow PTT calculation. To validate whether the '7Step PW-Filter' eliminates all, and only, unsuitable PWs, its elimination results were compared to the outcome of manual elimination of unsuitable PWs. The '7Step PW-Filter' had a sensitivity of 96.3% and a specificity of 99.3%. The overall accuracy of the '7Step PW-Filter' for detection of unsuitable PWs was 99.3%. Compared to manual elimination, using the '7Step PW-Filter' reduces PW elimination times from hours to minutes and helps to increase the validity, reliability and reproducibility of PTT data.
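
    The seven criteria themselves are not listed in the abstract; the sketch below only illustrates the cascade structure of such a filter, with three invented placeholder checks rather than the published '7Step PW-Filter' rules:

      import numpy as np

      def keep_pulse_wave(pw, fs_hz):
          """Keep a PW only if every exclusion criterion passes (placeholders)."""
          checks = [
              lambda w: 0.3 < len(w) / fs_hz < 2.0,          # plausible beat duration
              lambda w: np.argmax(w) < 0.6 * len(w),         # peak early in the beat
              lambda w: np.ptp(w) > 0.1 * np.abs(w).max(),   # sufficient amplitude
          ]
          return all(check(pw) for check in checks)

      beat = np.sin(np.linspace(0, np.pi, 80))               # synthetic beat at 100 Hz
      print("kept" if keep_pulse_wave(beat, fs_hz=100.0) else "eliminated")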

  15. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

    A growing number of tools now allow live recording of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory for interpreting quantitative imaging. To meet this need, we have developed an open-source toolset for Fiji, BRET-Analyzer, allowing a systematic analysis from image processing to ratio quantification. We share this open-source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset proposes (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method for the image used as the denominator of the ratio, to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the ratio mean intensity and standard variation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the luminescent specimen limits, resolving both small and large ensembles over time. For example, we followed and quantified, live, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
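
    Steps (1), (3) and (4) reduce to a few array operations; a minimal numpy sketch (the toolset's composite thresholding is more elaborate than the single-fraction mask assumed here):

      import numpy as np

      def bret_ratio(acceptor, donor, background, threshold_frac=0.1):
          """Background-subtract both channels, mask dim denominator pixels,
          then divide pixel by pixel; masked-out pixels become NaN."""
          a = acceptor - background
          d = donor - background
          mask = d > threshold_frac * d.max()
          ratio = np.full(a.shape, np.nan)
          ratio[mask] = a[mask] / d[mask]
          return ratio

      rng = np.random.default_rng(0)
      donor = rng.poisson(200, (64, 64)).astype(float)
      acceptor = 0.4 * donor + rng.normal(0.0, 5.0, (64, 64))
      print("mean ratio:", np.nanmean(bret_ratio(acceptor, donor, background=10.0)))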

  16. A policy framework for accelerating adoption of new vaccines

    PubMed Central

    Hajjeh, Rana; Wecker, John; Cherian, Thomas; O'Brien, Katherine L; Knoll, Maria Deloria; Privor-Dumm, Lois; Kvist, Hans; Nanni, Angeline; Bear, Allyson P; Santosham, Mathuram

    2010-01-01

    Rapid uptake of new vaccines can improve health and wealth and contribute to meeting Millennium Development Goals. In the past, however, the introduction and use of new vaccines has been characterized by delayed uptake in the countries where the need is greatest. Based on experience with accelerating the adoption of Hib, pneumococcal and rotavirus vaccines, we propose here a framework for new vaccine adoption that may be useful for future efforts. The framework organizes the major steps in the process into a continuum from evidence to policy, implementation and finally access. It highlights the important roles of different actors at various times in the process and may allow new vaccine initiatives to save time and improve their efficiency by anticipating key steps and actions. PMID:21150269

  17. A policy framework for accelerating adoption of new vaccines.

    PubMed

    Levine, Orin S; Hajjeh, Rana; Wecker, John; Cherian, Thomas; O'Brien, Katherine L; Knoll, Maria Deloria; Privor-Dumm, Lois; Kvist, Hans; Nanni, Angeline; Bear, Allyson P; Santosham, Mathuram

    2010-12-01

    Rapid uptake of new vaccines can improve health and wealth and contribute to meeting Millennium Development Goals. In the past, however, the introduction and use of new vaccines has been characterized by delayed uptake in the countries where the need is greatest. Based on experience with accelerating the adoption of Hib, pneumococcal and rotavirus vaccines, we propose here a framework for new vaccine adoption that may be useful for future efforts. The framework organizes the major steps in the process into a continuum from evidence to policy, implementation and finally access. It highlights the important roles of different actors at various times in the process and may allow new vaccine initiatives to save time and improve their efficiency by anticipating key steps and actions.

  18. 4 Steps to Combat Malware Enterprisewide

    ERIC Educational Resources Information Center

    Zeltser, Lenny

    2011-01-01

    Too often, organizations make the mistake of treating malware infections as a series of independent occurrences. Each time a malicious program is discovered, IT simply cleans up or rebuilds the affected host, and then moves on with routine operational tasks. Yet, this approach doesn't allow the institution to keep up with the increasingly…

  19. Toward Modeling the Learner's Personality Using Educational Games

    ERIC Educational Resources Information Center

    Essalmi, Fathi; Tlili, Ahmed; Ben Ayed, Leila Jemni; Jemmi, Mohamed

    2017-01-01

    Learner modeling is a crucial step in the learning personalization process. It allows taking into consideration the learner's profile to make the learning process more efficient. Most studies refer to an explicit method, namely questionnaire, to model learners. Questionnaires are time consuming and may not be motivating for learners. Thus, this…

  20. Well-balanced compressible cut-cell simulation of atmospheric flow.

    PubMed

    Klein, R; Bates, K R; Nikiforakis, N

    2009-11-28

    Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.

  1. Programmable bioelectronics in a stimuli-encoded 3D graphene interface

    NASA Astrophysics Data System (ADS)

    Parlak, Onur; Beyazit, Selim; Tse-Sum-Bui, Bernadette; Haupt, Karsten; Turner, Anthony P. F.; Tiwari, Ashutosh

    2016-05-01

    The ability to program and mimic the dynamic microenvironment of living organisms is a crucial step towards the engineering of advanced bioelectronics. Here, we report for the first time a design for programmable bioelectronics, with 'built-in' switchable and tunable bio-catalytic performance that responds simultaneously to appropriate stimuli. The designed bio-electrodes comprise light and temperature responsive compartments, which allow the building of Boolean logic gates (i.e. "OR" and "AND") based on enzymatic communications to deliver logic operations.

  2. Microfluidic DNA sample preparation method and device

    DOEpatents

    Krulevitch, Peter A.; Miles, Robin R.; Wang, Xiao-Bo; Mariella, Raymond P.; Gascoyne, Peter R. C.; Balch, Joseph W.

    2002-01-01

    Manipulation of DNA molecules in solution has become an essential aspect of genetic analyses used for biomedical assays, the identification of hazardous bacterial agents, and in decoding the human genome. Currently, most of the steps involved in preparing a DNA sample for analysis are performed manually and are time, labor, and equipment intensive. These steps include extraction of the DNA from spores or cells, separation of the DNA from other particles and molecules in the solution (e.g. dust, smoke, cell/spore debris, and proteins), and separation of the DNA itself into strands of specific lengths. Dielectrophoresis (DEP), a phenomenon whereby polarizable particles move in response to a gradient in electric field, can be used to manipulate and separate DNA in an automated fashion, considerably reducing the time and expense involved in DNA analyses, as well as allowing for the miniaturization of DNA analysis instruments. These applications include direct transport of DNA, trapping of DNA to allow for its separation from other particles or molecules in the solution, and the separation of DNA into strands of varying lengths.

  3. Cross-current leaching of indium from end-of-life LCD panels.

    PubMed

    Rocchetti, Laura; Amato, Alessia; Fonti, Viviana; Ubaldini, Stefano; De Michelis, Ida; Kopacek, Bernd; Vegliò, Francesco; Beolchini, Francesca

    2015-08-01

    Indium is a critical element mainly produced as a by-product of zinc mining, and it is largely used in the production process of liquid crystal display (LCD) panels. End-of-life LCDs represent a possible source of indium in the field of urban mining. In the present paper, we apply, for the first time, cross-current leaching to mobilize indium from end-of-life LCD panels. We carried out a series of treatments to leach indium. The best leaching conditions for indium were 2 M sulfuric acid at 80°C for 10 min, which allowed us to completely mobilize indium. Taking into account the low content of indium in end-of-life LCDs, about 100 ppm, a single step of leaching is not cost-effective. We tested 6 steps of cross-current leaching: in the first step indium leaching was complete, in the second step it was in the range of 85-90%, and by the sixth step it was about 50-55%. Indium concentration in the leachate was about 35 mg/L after the first step of leaching, almost 2-fold higher at the second step and about 3-fold higher at the fifth step. Then, we considered scaling up the cross-current leaching process to 10 steps, followed by cementation with zinc to recover indium. In this simulation, the process of indium recovery was advantageous from an economic and environmental point of view. Indeed, cross-current leaching allowed indium to be concentrated, reagents to be saved, and CO2 emissions to be reduced (with 10 steps, we assessed that the emission of about 90 kg CO2-eq. could be avoided) thanks to the recovery of indium. This new strategy represents a useful approach for secondary production of indium from waste LCD panels. Copyright © 2015 Elsevier Ltd. All rights reserved.
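
    The build-up of indium in the liquor can be sketched with a simple additive model; the per-step efficiencies below are rough readings of the reported trend, and the assumption of a constant liquor volume (so that contributions simply add) is ours, not the paper's:

      indium_per_batch_mg_L = 35.0   # leachable In brought in by each fresh batch, per litre
      efficiencies = [1.00, 0.87, 0.78, 0.68, 0.60, 0.52]   # steps 1..6, illustrative

      conc_mg_L = 0.0
      for step, eff in enumerate(efficiencies, start=1):
          conc_mg_L += eff * indium_per_batch_mg_L
          print(f"step {step}: recovery {eff:.0%}, liquor at {conc_mg_L:.0f} mg/L In")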

  4. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and in laminar premixed and nonpremixed flames of three representative fuels, e.g., hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and that the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second order of accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on the fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of the sub-iterations for convergence within each time step and of the integration for the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves. Finally, the performance of the proposed method is demonstrated on a two-dimensional hydrogen/air diffusion flame.
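
    A sketch of how the minimum species destruction timescale can bound the time step; the 1e3 stiffness cap is an assumed illustration of the semi-implicit headroom, not the paper's criterion:

      import numpy as np

      def min_destruction_timescale(Y, wdot_destruction):
          """tau_i = Y_i / |destruction rate of species i|; the smallest tau
          approximates the fastest chemical time scale."""
          tau = Y / np.maximum(np.abs(wdot_destruction), 1e-300)
          return tau.min()

      def allowable_dt(Y, wdot_destruction, cfl_dt, stiffness_factor=1e3):
          tau_chem = min_destruction_timescale(Y, wdot_destruction)
          return min(cfl_dt, stiffness_factor * tau_chem)

      Y = np.array([1e-2, 5e-3, 0.97])           # illustrative mass fractions
      wdot = np.array([-50.0, -2000.0, -0.1])    # destruction rates, 1/s
      print(f"dt = {allowable_dt(Y, wdot, cfl_dt=1e-5):.2e} s")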

  5. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency, in inverse IMRT planning, of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding-window (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and the same dose-volume constraints were applied for all optimization methods. Two-step plans were produced by converting the ideal fluence, with or without a smoothing filter, into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, or 12, producing a directly deliverable sequence. Moreover, plans were generated with and without a split beam. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP), which are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (by 20%), NTID (by 4%) and NTCP values. Differences of about 15-20% in treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over two-step IMRT planning.

  6. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    PubMed

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    A quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify dwell position and time using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. The user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software that applies a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. Verification of the system showed that the mean step size error was 0.31±0.1 mm and the mean dwell time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system thus provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, without depending on the step size.
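
    The template-matching step can be sketched with OpenCV's normalized cross-correlation; the template image, the mm-per-pixel scale and the origin are assumed calibration inputs, and dwell time would follow from counting consecutive frames (30 fps, hence roughly 33 ms resolution) at the same position:

      import cv2
      import numpy as np

      def source_position_mm(frame_gray, template_gray, mm_per_px, origin_px):
          scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(scores)
          x_px = max_loc[0] + template_gray.shape[1] / 2.0   # template centre, pixels
          return (x_px - origin_px) * mm_per_px, max_val

      frame = np.zeros((40, 200), np.float32)
      frame[18:22, 100:104] = 1.0                 # synthetic bright source marker
      templ = np.zeros((6, 6), np.float32)
      templ[1:5, 1:5] = 1.0                       # bright core with dark border
      print(source_position_mm(frame, templ, mm_per_px=0.5, origin_px=0.0))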

  7. The Crank Nicolson Time Integrator for EMPHASIS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan Alisdair Odum; Love, Edward; Kramer, Richard Michael Jack

    2018-03-01

    We investigate the use of implicit time integrators for finite element time domain approximations of Maxwell's equations in vacuum. We discretize Maxwell's equations in time using Crank-Nicolson and in 3D space using compatible finite elements. We solve the system by taking a single step of Newton's method and inverting the Eddy-Current Schur complement, allowing for the use of standard preconditioning techniques. This approach also generalizes to more complex material models that can include the Unsplit PML. We present verification results and demonstrate performance at CFL numbers up to 1000.
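
    The Crank-Nicolson update has the generic form (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n; a small sketch on a skew-symmetric system (mimicking the energy-conserving structure of vacuum Maxwell, though not the finite element discretization itself) shows the unconditional stability that permits such large CFL numbers:

      import numpy as np

      def crank_nicolson_step(A, u, dt):
          n = len(u)
          I = np.eye(n)
          return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)

      A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # harmonic oscillator as a 1st order system
      u = np.array([1.0, 0.0])
      for _ in range(1000):
          u = crank_nicolson_step(A, u, dt=10.0)  # far beyond any explicit stability limit
      print("norm preserved:", np.linalg.norm(u)) # stays at 1 to round-off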

  8. Microchip integrating magnetic nanoparticles for allergy diagnosis.

    PubMed

    Teste, Bruno; Malloggi, Florent; Siaugue, Jean-Michel; Varenne, Anne; Kanoufi, Frederic; Descroix, Stéphanie

    2011-12-21

    We report on the development of a simple and easy-to-use microchip dedicated to allergy diagnosis. This microchip combines the advantages of homogeneous immunoassays, i.e. species diffusion, and heterogeneous immunoassays, i.e. easy separation and preconcentration steps. In vitro allergy diagnosis is based on specific Immunoglobulin E (IgE) quantitation; accordingly, we have developed and integrated magnetic core-shell nanoparticles (MCSNPs) as an IgE capture nanoplatform in a microdevice, taking benefit from both their magnetic and colloidal properties. Integrating such an immunosupport allows the target analyte (IgE) to be captured in the colloidal phase, thus increasing the analyte capture kinetics, since both immunological partners are diffusing during the immune reaction. This colloidal approach improves the analyte capture kinetics 1000-fold compared to conventional methods. Moreover, based on the MCSNPs' magnetic properties and on the magnetic chamber we have previously developed, the MCSNPs, and therefore the target, can be confined and preconcentrated within the microdevice prior to the detection step. The MCSNP preconcentration factor achieved was about 35,000, which allows high sensitivity to be reached, thus avoiding catalytic amplification during the detection step. The developed microchip offers many advantages: the analytical procedure is fully integrated on-chip, analyses are performed in a short assay time (20 min), and the sample and reagent consumption is reduced to a few microlitres (5 μL), while a low limit of detection can be achieved (about 1 ng mL(-1)).

  9. A colorimetric probe based on desensitized ionene-stabilized gold nanoparticles for single-step test for sulfate ions.

    PubMed

    Arkhipova, Viktoriya V; Apyari, Vladimir V; Dmitrienko, Stanislava G

    2015-03-15

    Desensitized ionene-stabilized gold nanoparticles have been prepared and applied as a colorimetric probe for a single-step test for sulfate ions at relatively high concentration levels. The approach is based on aggregation of the nanoparticles, leading to a change in the absorption spectra and color of the solution. These nanoparticles are characterized by decreased sensitivity due to both electrostatic and steric stabilization, which allows simple and rapid direct single-step determination of sulfate at relatively high concentration levels in real water samples without sample pretreatment or dilution. The influence of different factors (the time of interaction, pH, and the concentrations of sulfate ions and the nanoparticles) on the aggregation and analytical performance of the procedure was investigated. The method allows for the determination of sulfate ions in the mass range of 0.2-0.4 mg with an RSD of 5% from a sample volume of less than 2 mL. It has a sharp dependence of the colorimetric response on the concentration of sulfate, which makes it promising for indicating deviations of the sulfate concentration from some declared value chosen within the above range. The time of the analysis is 2 min. The method was applied to the analysis of mineral water samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fenando

    2003-01-01

    In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step up and step down as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Then each point of the signal is selected along with a trailing segment of previously collected data. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC component is removed from each set of data, the data are passed through a window, and spectra are calculated for each set. In order to extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian-distributed noisy cases.
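
    The windowed-transform feature extraction reduces to a short loop; this sketch uses assumed window parameters rather than the paper's Matlab settings:

      import numpy as np

      def spectral_features(x, fs_hz, win_len, hop):
          """For each window: remove DC, apply a Hann taper, and record total
          power and the dominant (peak) frequency, giving features versus time."""
          feats = []
          taper = np.hanning(win_len)
          for start in range(0, len(x) - win_len + 1, hop):
              w = x[start:start + win_len]
              w = (w - w.mean()) * taper
              spec = np.abs(np.fft.rfft(w)) ** 2
              freqs = np.fft.rfftfreq(win_len, d=1.0 / fs_hz)
              feats.append((start / fs_hz, spec.sum(), freqs[spec.argmax()]))
          return np.array(feats)   # columns: time, power, peak frequency

      t = np.arange(0.0, 60.0, 0.1)
      x = 1.0 - np.exp(-np.clip(t - 20.0, 0.0, None) / 5.0)   # exponential step-up at 20 s
      print(spectral_features(x, fs_hz=10.0, win_len=64, hop=32)[:3])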

  11. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies

    PubMed Central

    Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-01-01

    Abstract Introduction The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. Results The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561
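
    The time-driven activity-based costing arithmetic is simple; in this sketch the staff rate is an assumed figure, and the material costs are back-calculated from the reported per-session totals under that assumption:

      def session_cost(material_eur, minutes, staff_rate_eur_per_min):
          return material_eur + minutes * staff_rate_eur_per_min

      rate = 0.8  # assumed personnel cost, EUR/min (illustrative)
      multi_step = session_cost(material_eur=1213.37, minutes=270, staff_rate_eur_per_min=rate)
      integrated = session_cost(material_eur=1168.70, minutes=120, staff_rate_eur_per_min=rate)
      print(f"multi-step: {multi_step:.2f} EUR, integrated: {integrated:.2f} EUR")
      # reproduces the reported 1429.37 vs 1264.70 EUR under these assumptions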

  12. elPrep: High-Performance Preparation of Sequence Alignment/Map Files for Variant Calling

    PubMed Central

    Decap, Dries; Fostier, Jan; Reumers, Joke

    2015-01-01

    elPrep is a high-performance tool for preparing sequence alignment/map files for variant calling in sequencing pipelines. It can be used as a replacement for SAMtools and Picard for preparation steps such as filtering, sorting, marking duplicates, reordering contigs, and so on, while producing identical results. What sets elPrep apart is its software architecture, which allows preparation pipelines to be executed in a single pass through the data, no matter how many preparation steps are used in the pipeline. elPrep is designed as a multithreaded application that runs entirely in memory, avoids repeated file I/O, and merges the computation of several preparation steps to significantly speed up the execution time. For example, for a preparation pipeline of five steps on a whole-exome BAM file (NA12878), we reduce the execution time from about 1 hour 40 minutes, when using a combination of SAMtools and Picard, to about 15 minutes when using elPrep, while utilising the same server resources, here 48 threads and 23 GB of RAM. For the same pipeline on whole-genome data (NA12878), elPrep reduces the runtime from 24 hours to less than 5 hours. As a typical clinical study may contain sequencing data for hundreds of patients, elPrep can remove several hundred hours of computing time, and thus substantially reduce analysis time and cost. PMID:26182406
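
    The single-pass idea can be sketched as function composition over records: every per-read step is fused into one function and the data are streamed through it once (globally ordered steps such as sorting additionally rely on elPrep's in-memory representation; this toy uses plain dicts, not SAM records):

      def compose(*steps):
          def pipeline(record):
              for step in steps:
                  record = step(record)
                  if record is None:          # a filtering step dropped the record
                      return None
              return record
          return pipeline

      filter_unmapped = lambda r: None if r["flag"] & 4 else r   # SAM flag bit 4
      mark_duplicate  = lambda r: {**r, "dup": r["name"].endswith("_dup")}

      prep = compose(filter_unmapped, mark_duplicate)
      reads = [{"name": "r1", "flag": 0}, {"name": "r2_dup", "flag": 0}, {"name": "r3", "flag": 4}]
      print([r for r in map(prep, reads) if r is not None])      # one pass, two steps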

  13. LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite

    NASA Astrophysics Data System (ADS)

    Crowley, Kevin D.

    1993-05-01

    The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS™ 3.0 and above (and Windows™ 3.0 or above) with VGA or SVGA graphics and a Microsoft™-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
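
    The isothermal-step structure can be sketched as follows; the Arrhenius-style "annealing law" and its constants are toy stand-ins for the calibrated apatite equations the program offers, and the program itself runs these step calculations backwards in model time for efficiency:

      import numpy as np

      def toy_anneal(r, T_k, dt_myr, E_over_k=2.2e4, A=0.05):
          """Reduce mean track length r over one isothermal step (toy law)."""
          rate = A * np.exp(E_over_k * (1.0 / 373.0 - 1.0 / T_k))  # faster when hot
          return r * np.exp(-rate * dt_myr)

      def forward_model(temps_k, dt_myr=1.0):
          """temps_k[i] is the temperature of step i, oldest first; returns the
          present-day reduced length of the track generation born in each step."""
          r = np.ones(len(temps_k))
          for i, T in enumerate(temps_k):        # step i anneals every generation
              r[: i + 1] = toy_anneal(r[: i + 1], T, dt_myr)   # born at or before i
          return r

      history = np.linspace(393.0, 293.0, 100)   # cooling 120 C -> 20 C over 100 Myr
      r = forward_model(history)
      print("oldest vs youngest track:", r[0].round(3), r[-1].round(3))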

  14. Diffractive optics fabricated by direct write methods with an electron beam

    NASA Technical Reports Server (NTRS)

    Kress, Bernard; Zaleta, David; Daschner, Walter; Urquhart, Kris; Stein, Robert; Lee, Sing H.

    1993-01-01

    State-of-the-art diffractive optics are fabricated using e-beam lithography and dry etching techniques to achieve multilevel phase elements with very high diffraction efficiencies. One of the major challenges encountered in fabricating diffractive optics is the small feature size (e.g. for diffractive lenses with small f-numbers). It is not only the e-beam system which dictates the feature size limitations, but also the alignment systems (mask aligner) and the materials (e-beam and photo resists). In order to allow diffractive optics to be used in new optoelectronic systems, it is necessary not only to fabricate elements with small feature sizes but also to do so in an economical fashion. Since the price of a multilevel diffractive optical element is closely related to the e-beam writing time and the number of etching steps, we need to decrease the writing time and the number of etching steps without affecting the quality of the element. To do this, one has to utilize the full potential of the e-beam writing system. In this paper, we present three diffractive optics fabrication techniques which reduce the number of process steps, the writing time, and the overall fabrication time for multilevel phase diffractive optics.

  15. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
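
    The cancellation can be sketched as a fit of the measured velocity variance against 1/dt^2: position noise of variance sigma_x^2 inflates the finite-difference velocity variance by 2*sigma_x^2/dt^2, so the intercept of the fit recovers the noise-free variance. All parameters below are invented for illustration and assume a velocity that is smooth over the largest dt.

      import numpy as np

      rng = np.random.default_rng(1)
      fs = 1000.0                                        # frame rate, Hz
      t = np.arange(200_000) / fs
      v_true = np.sin(2 * np.pi * 2.0 * t)               # smooth 2 Hz velocity, m/s
      x_meas = np.cumsum(v_true) / fs + rng.normal(0.0, 5e-4, t.size)  # noisy track, m

      steps = np.array([1, 2, 3, 4, 6, 8])               # dt = steps / fs
      var_meas = np.array([np.var((x_meas[s:] - x_meas[:-s]) * fs / s) for s in steps])
      slope, var_noise_free = np.polyfit(1.0 / (steps / fs) ** 2, var_meas, 1)
      print(f"dt=1 frame: {var_meas[0]:.3f}, fitted intercept: {var_noise_free:.3f}, "
            f"true: {np.var(v_true):.3f}")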

  16. How to build the practice you want, not just the practice that walks through the door.

    PubMed

    Jackson, Rem

    2013-01-01

    The first step in transforming your patient mix is to recognize that it is appropriate to prefer to see a specific type of patient over other types. You may enjoy working in a particular practice niche more than others, and there may be financial considerations as well. The second step is to change your marketing so that it focuses on your preferred, or "perfect," patient. This strategy will allow you to build a patient database that will enable you to change, over time, the type of patients you are seeing in your practice.

  17. An information-carrying and knowledge-producing molecular machine. A Monte-Carlo simulation.

    PubMed

    Kuhn, Christoph

    2012-02-01

    The concept called Knowledge is a measure of the quality of genetically transferred information. Its usefulness is demonstrated quantitatively in a Monte-Carlo simulation on critical steps in an origin-of-life model. The model describes the origin of a bio-like genetic apparatus by a long sequence of physical-chemical steps: it starts with the presence of a self-replicating oligomer and a specifically structured environment in time and space that allow for the formation of aggregates such as assembler-hairpins devices and, at a later stage, an assembler-hairpins-enzyme device, a first translation machine.

  18. Mixed Linear/Square-Root Encoded Single-Slope Ramp Provides Low-Noise ADC with High Linearity for Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.

    2013-01-01

    Single-slope analog-to-digital converters (ADCs) are particularly useful for on-chip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Square-root encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is on-scale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise. The required linear span, maximum signal, and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determine the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp values, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smooths the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
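
    To make the two-segment ramp concrete, here is a minimal sketch (an illustrative parametrization, not the flight design): a linear segment of constant step up to code c0, joined to the unique quadratic that matches both value and slope there, so the step size then grows like the square root of the signal.

    ```python
    import numpy as np

    def mixed_ramp(n_codes, c0, step):
        """Two-segment single-slope ramp (illustrative parametrization).

        Codes 0..c0 follow a linear ramp of constant `step`, sized to cover
        the column fixed-pattern-noise span. Above c0 the ramp follows
        V = step * (c + c0)^2 / (4 * c0), the unique quadratic whose value
        and slope both match the linear segment at the join, so there is no
        transient. On that branch the step size grows like sqrt(V), keeping
        quantization noise just below photon shot noise.
        """
        c = np.arange(n_codes, dtype=float)
        return np.where(c <= c0, step * c, step * (c + c0) ** 2 / (4.0 * c0))

    ramp = mixed_ramp(n_codes=1024, c0=128, step=1.0)
    print(ramp[126:131])            # value is continuous through the join at c0
    print(np.diff(ramp)[125:131])   # slope stays ~1.0 across the join
    # Decoding a captured code back to a signal level is a table lookup; a
    # measured, distortion-corrected version of this table would play the
    # role of the return lookup table described above.
    print(ramp[700])
    ```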

  19. Simulation of Planetary Formation using Python

    NASA Astrophysics Data System (ADS)

    Bufkin, James; Bixler, David

    2015-03-01

    A program to simulate planetary formation was developed in the Python programming language. The simulated system consists of randomly placed and massed bodies surrounding a central massive object, approximating a protoplanetary disk. The orbits of these bodies are time-stepped, with accelerations, velocities and new positions calculated in each step. Bodies are allowed to merge if their disks intersect. Numerous parameters (orbital distance, masses, number of particles, etc.) were varied in order to optimize the program. The program uses an iterative difference equation approach to solve the equations of motion using a kinematic model. Conservation of energy and angular momentum are not specifically forced, but conservation of momentum is forced during the merging of bodies. The initial program was created in Visual Python (VPython) but the current intention is to allow for higher particle count and faster processing by utilizing PyOpenCL and PyOpenGL. Current results and progress will be reported.
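
    A minimal version of such a simulation fits in a screenful of NumPy. The sketch below is illustrative rather than the authors' program: it uses a leapfrog kick-drift-kick update in place of their simple difference scheme and a fixed merge distance in place of intersecting disks, but it time-steps randomly placed bodies around a central mass and merges close pairs while conserving mass and momentum.

    ```python
    import numpy as np

    G, M_star = 1.0, 1.0                      # model units; star fixed at origin
    rng = np.random.default_rng(1)

    n = 100
    m = rng.uniform(0.5, 1.5, n) * 1e-4       # randomly massed small bodies
    r0 = rng.uniform(1.0, 3.0, n)
    th = rng.uniform(0.0, 2.0 * np.pi, n)
    pos = np.column_stack([r0 * np.cos(th), r0 * np.sin(th)])
    vc = np.sqrt(G * M_star / r0)             # near-circular starting orbits
    vel = np.column_stack([-vc * np.sin(th), vc * np.cos(th)])

    def accel(pos, m):
        """Central star plus softened pairwise gravity (vectorized O(N^2))."""
        a = -G * M_star * pos / np.linalg.norm(pos, axis=1, keepdims=True) ** 3
        d = pos[:, None, :] - pos[None, :, :]
        dist3 = (np.sum(d * d, axis=2) + 1e-8) ** 1.5
        return a - G * np.einsum('ij,ijk->ik', m[None, :] / dist3, d)

    def merge(pos, vel, m, d_merge=0.02):
        """Merge pairs closer than d_merge, conserving mass and momentum."""
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
        i, j = np.where(np.triu(d < d_merge, k=1))
        for a_, b_ in zip(i, j):
            if m[a_] > 0 and m[b_] > 0:       # skip bodies merged this pass
                mt = m[a_] + m[b_]
                pos[a_] = (m[a_] * pos[a_] + m[b_] * pos[b_]) / mt  # barycenter
                vel[a_] = (m[a_] * vel[a_] + m[b_] * vel[b_]) / mt  # momentum kept
                m[a_], m[b_] = mt, 0.0
        keep = m > 0
        return pos[keep], vel[keep], m[keep]

    dt = 2e-3
    for _ in range(5_000):                    # leapfrog kick-drift-kick stepping
        vel += 0.5 * dt * accel(pos, m)
        pos += dt * vel
        vel += 0.5 * dt * accel(pos, m)
        pos, vel, m = merge(pos, vel, m)

    print(len(m), "bodies remain; total mass =", m.sum())
    ```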

  20. Effects of Imperfect Dynamic Clamp: Computational and Experimental Results

    PubMed Central

    Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.

    2008-01-01

    In the dynamic clamp technique, a typically nonlinear feedback system delivers electrical current to an excitable cell that represents the actions of “virtual” ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but accuracy in PC-based systems can be degraded by imperfect real-time performance. Here we systematically evaluate the performance requirements for artificial dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999
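
    The sensitivity to time step and latency can be illustrated with a toy passive cell and a single virtual conductance (a sketch under assumed parameters, not the authors' protocol): the controller samples the membrane potential, computes the virtual current, and injects it some number of controller cycles later. With a stiff virtual conductance, a stale feedback signal is enough to destabilize the loop.

    ```python
    import numpy as np

    # Toy passive cell with one "virtual" conductance injected by dynamic clamp.
    C, g_leak, E_leak = 1.0, 0.1, -65.0   # nF, uS, mV (illustrative values)
    g_virt, E_virt = 5.0, 0.0             # stiff virtual conductance

    def simulate(dt_ctrl, latency_cycles, t_end=200.0, dt_sim=0.01):
        """Euler-integrate the cell at dt_sim (ms); the controller recomputes
        the injected current every dt_ctrl and applies it latency_cycles
        controller cycles after the voltage it was computed from."""
        V, I_cmd = -65.0, 0.0
        queue = [0.0] * latency_cycles           # pipeline modeling loop latency
        every = max(1, int(round(dt_ctrl / dt_sim)))
        for k in range(int(t_end / dt_sim)):
            if k % every == 0:
                queue.append(g_virt * (E_virt - V))  # controller reads V now...
                I_cmd = queue.pop(0)                 # ...but may emit a stale value
            V += dt_sim * (g_leak * (E_leak - V) + I_cmd) / C
        return V

    # Exact steady state: (g_leak*E_leak + g_virt*E_virt)/(g_leak+g_virt) ~ -1.27 mV.
    # A fine, prompt controller converges; a coarse, latent one oscillates and grows.
    for dt_ctrl, lat in [(0.01, 0), (0.2, 0), (0.2, 5)]:
        print(f"dt_ctrl={dt_ctrl} ms, latency={lat} cycles: "
              f"V_final = {simulate(dt_ctrl, lat):.3g} mV")
    ```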

  1. Community detection using Kernel Spectral Clustering with memory

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Suykens, Johan A. K.

    2013-02-01

    This work is related to the problem of community detection in dynamic scenarios, which for instance arises in the segmentation of moving objects, clustering of telephone traffic data, time-series micro-array data, etc. A desirable feature of a clustering model that has to capture the evolution of communities over time is the temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend and at the same time smooth out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows prediction of cluster memberships of new nodes via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as a valid prior knowledge. The latter, in fact, allows the model to cluster the current data well and to be consistent with the recent history. Here we propose a generalization of the MKSC model with an arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother over time the clustering results are. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, which is a state-of-the-art method, and we obtain comparable or better results.

  2. [Investigation of stages of chemical leaching and biooxidation during the extraction of gold from sulfide concentrates].

    PubMed

    Murav'ev, M I; Fomchenko, N V; Kondrat'eva, T V

    2015-01-01

    We examined the chemical leaching and biooxidation stages in a two-stage biooxidation process of an auriferous sulfide concentrate containing pyrrhotite, arsenopyrite and pyrite. Chemical leaching of the concentrate (slurry density of 200 g/L) by a ferric sulfate biosolvent (initial concentration of 35.6 g/L) obtained by microbial oxidation of ferrous sulfate, carried out for 2 hours at 70°C and pH 1.4, oxidized 20.4% of arsenopyrite and 52.1% of sulfur. The most effective biooxidation of the chemically leached concentrate was observed at 45°C in the presence of yeast extract. Oxidation of the sulfide concentrate in a two-step process proceeded more efficiently than in a one-step process. In the two-step mode, gold extraction from the precipitate was 10% higher and the content of elemental sulfur was two times lower than in the one-step process.

  3. Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.

    PubMed

    Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger

    2015-01-01

    To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. The oscillatory behavior of the CoM facilitates mechanical energy balance between push-off and heel strike.

    PubMed

    Kim, Seyoung; Park, Sukyung

    2012-01-10

    Humans use equal push-off and heel strike work during the double support phase to minimize the mechanical work done on the center of mass (CoM) during gait. Recently, a step-to-step transition was reported to occur over a period of time greater than that of the double support phase, which brings into question whether the energetic optimality is sensitive to the definition of the step-to-step transition. To answer this question, the ground reaction forces (GRFs) of seven normal human subjects walking at four different speeds (1.1-2.4 m/s) were measured, and the push-off and heel strike work for three differently defined step-to-step transitions were computed based on the force, work, and velocity. To examine the optimality of the work and the impulse data, a hybrid theoretical-empirical analysis is presented using a dynamic walking model that allows finite time for step-to-step transitions and incorporates the effects of gravity within this period. The changes in the work and impulse were examined parametrically across a range of speeds. The results showed that the push-off work on the CoM was well balanced by the heel strike work for all three definitions of the step-to-step transition. The impulse data were well matched by the optimal impulse predictions (R² > 0.7) that minimized the mechanical work done on the CoM during gait. The results suggest that the balance of push-off and heel strike energy is a consistent property arising from the overall gait dynamics, which implies an inherent oscillatory behavior of the CoM, possibly due to spring-like leg mechanics. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. A step-by-step solution for embedding user-controlled cines into educational Web pages.

    PubMed

    Cornfeld, Daniel

    2008-03-01

    The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.

  6. Propulsion and Power Rapid Response Research and Development (R&D) Support. Delivery Order 0011: Analysis of Synthetic Aviation Fuels

    DTIC Science & Technology

    2011-04-01

    …from a solvent bottle
    4) Allow the k-cell to drain thoroughly
    5) Submerge the k-cell into a … beaker filled with isopropanol. Do not submerge the BNC connectors of the k-cell.
    6) Remove the k-cell from the isopropanol.
    7) Repeat steps 5-6 … two more times
    8) Allow the k-cell to drain thoroughly.
    9) Submerge the k-cell into a second beaker filled with isopropanol. Do not submerge the BNC …

  7. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    NASA Astrophysics Data System (ADS)

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Global illumination of natural scenes such as forests in real time is one of the most complex problems to solve, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is visibility computation. In fact, visibility is computed for the set of leaves visible from the center of a given leaf; given the enormous number of leaves present in a tree, this computation is performed for each leaf of the tree, which degrades performance. We describe a new approach that approximates visibility queries and proceeds in two steps. The first step generates a point cloud representing the foliage. We assume that the point cloud is composed of two classes (visible, not visible) that are non-linearly separable. The second step classifies the point cloud by applying the Gaussian radial basis function, which measures similarity in terms of distance between each leaf and a landmark leaf. This approximates the visibility queries needed to extract the leaves that will be used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach treats light exchanges in a forest scene efficiently, allows fast computation, and produces images of good visual quality, all by taking advantage of the immense computational power of the GPU.
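
    The kernel step itself is compact. The sketch below is a minimal, illustrative reconstruction (synthetic data; parameter names are assumptions, not the authors' code): each leaf is scored by its Gaussian RBF similarity to a small set of landmark leaves whose visibility is known, and the class of the most similar landmark is propagated.

    ```python
    import numpy as np

    def rbf_visibility(leaves, landmarks, landmark_visible, gamma=4.0):
        """Approximate visibility queries for a foliage point cloud via the
        Gaussian RBF k(x, l) = exp(-gamma * ||x - l||^2): each leaf adopts
        the visibility class of its most similar landmark leaf."""
        d2 = ((leaves[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=2)
        k = np.exp(-gamma * d2)                  # (n_leaves, n_landmarks)
        return landmark_visible[np.argmax(k, axis=1)]

    rng = np.random.default_rng(0)
    leaves = rng.uniform(0, 1, (5000, 3))        # synthetic foliage point cloud
    landmarks = rng.uniform(0, 1, (20, 3))
    landmark_visible = rng.integers(0, 2, 20).astype(bool)
    visible = rbf_visibility(leaves, landmarks, landmark_visible)
    print(visible.sum(), "of", len(leaves), "leaves treated as visible")
    ```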

  8. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

    A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
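
    For readers unfamiliar with the distinction, the two families differ only in where the right-hand side is evaluated (standard textbook forms, not the paper's notation): a linear multistep method averages function evaluations, while its one-leg twin evaluates the function once, at averaged arguments.

    ```latex
    % k-step linear multistep method for y' = f(t, y):
    \sum_{j=0}^{k} \alpha_j \, y_{n+j}
      \;=\; h \sum_{j=0}^{k} \beta_j \, f\bigl(t_{n+j},\, y_{n+j}\bigr),
    % and its one-leg counterpart:
    \sum_{j=0}^{k} \alpha_j \, y_{n+j}
      \;=\; h \, f\!\Bigl(\sum_{j=0}^{k} \beta_j \, t_{n+j},\;
                          \sum_{j=0}^{k} \beta_j \, y_{n+j}\Bigr).
    ```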

  9. Implementation of a web-based medication tracking system in a large academic medical center.

    PubMed

    Calabrese, Sam V; Williams, Jonathan P

    2012-10-01

    Pharmacy workflow efficiencies achieved through the use of an electronic medication-tracking system are described. Medication dispensing turnaround times at the inpatient pharmacy of a large hospital were evaluated before and after transition from manual medication tracking to a Web-based tracking process involving sequential bar-code scanning and real-time monitoring of medication status. The transition was carried out in three phases: (1) a workflow analysis, including the identification of optimal points for medication scanning with hand-held wireless devices, (2) the phased implementation of an automated solution and associated hardware at a central dispensing pharmacy and three satellite locations, and (3) postimplementation data collection to evaluate the impact of the new tracking system and areas for improvement. Relative to the manual tracking method, electronic medication tracking allowed the capture of far more data points, enabling the pharmacy team to delineate the time required for each step of the medication dispensing process and to identify the steps most likely to involve delays. A comparison of baseline and postimplementation data showed substantial reductions in overall medication turnaround times with the use of the Web-based tracking system (time reductions of 45% and 22% at the central and satellite sites, respectively). In addition to more accurate projections and documentation of turnaround times, the Web-based tracking system has facilitated quality-improvement initiatives. Implementation of an electronic tracking system for monitoring the delivery of medications provided a comprehensive mechanism for calculating turnaround times and allowed the pharmacy to identify bottlenecks within the medication distribution system. Altering processes removed these bottlenecks and decreased delivery turnaround times.

  10. Analysis of Cocaine, Heroin, and their Metabolites in Saliva

    DTIC Science & Technology

    1990-07-10

    …in Table 1.
    Table 1 - HPLC Conditions
    Column: Alltech/Applied Science Econosphere C8, 250 x 4.6 mm
    Solvent A: 0.1 M Ammonium Acetate
    Solvent B: 10:90 …
    …concentration step is quite time consuming and often results in large losses of sample. LC/MS has the advantage over GC/MS in allowing the analysis of …

  11. Predictable 'meta-mechanisms' emerge from feedbacks between transpiration and plant growth and cannot be simply deduced from short-term mechanisms.

    PubMed

    Tardieu, François; Parent, Boris

    2017-06-01

    Growth under water deficit is controlled by short-term mechanisms but, because of numerous feedbacks, the combination of these mechanisms over time often results in outputs that cannot be deduced from the simple inspection of individual mechanisms. It can be analysed with dynamic models in which causal relationships between variables are considered at each time-step, allowing calculation of outputs that are routed back to inputs for the next time-step and that can change the system itself. We first review physiological mechanisms involved in seven feedbacks of transpiration on plant growth, involving changes in tissue hydraulic conductance, stomatal conductance, plant architecture and underlying factors such as hormones or aquaporins. The combination of these mechanisms over time can result in non-straightforward conclusions as shown by examples of simulation outputs: 'overproduction of abscisic acid (ABA) can cause a lower concentration of ABA in the xylem sap', 'decreasing root hydraulic conductance when evaporative demand is maximum can improve plant performance' and 'rapid root growth can decrease yield'. Systems of equations simulating feedbacks over numerous time-steps result in logical and reproducible emergent properties that can be viewed as 'meta-mechanisms' at the plant level, which play roles similar to those of mechanisms at the cell level. © 2016 John Wiley & Sons Ltd.
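
    The time-stepped feedback idea itself is simple to state in code. The loop below is a deliberately generic toy with invented coefficients, not the authors' model: each step's outputs become the next step's inputs, so the long-run behavior emerges from the loop rather than from any single relationship.

    ```python
    # Toy feedback loop: transpiration dries the tissue, tissue water status
    # adjusts conductance, conductance and plant size set transpiration, and
    # growth feeds back on demand. All coefficients are illustrative.
    transpiration, conductance, biomass = 1.0, 1.0, 1.0
    for day in range(120):
        water_status = 1.0 / (1.0 + 0.3 * transpiration)      # more flux, drier
        conductance = 0.5 * conductance + 0.5 * water_status  # hydraulic response
        transpiration = conductance * biomass                 # demand scales with size
        biomass += 0.02 * transpiration                       # growth closes the loop
    print(f"biomass after 120 steps: {biomass:.2f}")
    ```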

  12. Automated touch screen device for recording complex rodent behaviors

    PubMed Central

    Mabrouk, O.S.; Dripps, I.J.; Ramani, S.; Chang, C.; Han, J.L.; Rice, K.C.; Jutkiewicz, E.M.

    2016-01-01

    Background: Monitoring mouse behavior is a critical step in the development of modern pharmacotherapies. New Method: Here we describe the application of a novel method that utilizes a touch display computer (tablet) and software to detect, record, and report fine motor behaviors. A consumer-grade tablet device is placed in the bottom of a specially made acrylic cage allowing the animal to walk on the device (MouseTrapp). We describe its application in open field (for general locomotor studies) which measures step lengths and velocity. The device can perform light-dark (anxiety) tests by illuminating half of the screen and keeping the other half darkened. A divider is built into the lid of the device allowing the animal free access to either side. Results: Treating mice with amphetamine and the delta opioid peptide receptor agonist SNC80 stimulated locomotor activity on the device. Amphetamine increased step velocity but not step length during its peak effect (40–70 min after treatment), thus indicating detection of subtle amphetamine-induced effects. Animals showed a preference (74% of time spent) for the darkened half compared to the illuminated side. Comparison with Existing Method: Animals were videotaped within the chamber to compare quadrant crosses to detected motion on the device. The slope, duration and magnitude of quadrant crosses tightly correlated with overall locomotor activity as detected by MouseTrapp. Conclusions: We suggest that modern touch display devices such as MouseTrapp will be an important step toward automation of behavioral analyses for characterizing phenotypes and drug effects. PMID:24952323

  13. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se(2-)(S(2-)), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isotalo, Aarno

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
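
    A toy version of this construction (a sketch of the idea, not the paper's implementation, and using SciPy's generic matrix exponential in place of the Chebyshev rational approximation method) appends one non-decaying "tally nuclide" whose production rate is the weighted sum of densities; the ordinary depletion solve then returns its time integral directly.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import trapezoid

    # Toy two-nuclide chain: nuclide 0 decays into nuclide 1, which decays away.
    A = np.array([[-0.7,  0.0],
                  [ 0.7, -0.3]])          # dn/dt = A n
    n0 = np.array([1.0, 0.0])
    w = np.array([2.0, 5.0])              # constant weights, e.g. energy per decay
    dt = 1.5

    # Augment the burnup matrix with a tally row: the extra variable obeys
    # dy/dt = w.n with no removal term, so its end-of-step value is exactly
    # the integral of w.n over the step, from the same depletion solve.
    Aug = np.zeros((3, 3))
    Aug[:2, :2] = A
    Aug[2, :2] = w
    y = expm(Aug * dt) @ np.append(n0, 0.0)

    print("end-of-step densities :", y[:2])
    print("integral of w.n       :", y[2])
    print("step-average of w.n   :", y[2] / dt)

    # Cross-check with fine-grained quadrature:
    ts = np.linspace(0.0, dt, 2001)
    vals = [w @ (expm(A * t) @ n0) for t in ts]
    print("quadrature check      :", trapezoid(vals, ts))
    ```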

  15. A marker-free system for the analysis of movement disabilities.

    PubMed

    Legrand, L; Marzani, F; Dusserre, L

    1998-01-01

    A major step toward improving the treatments of disabled persons may be achieved by using motion analysis equipment. We are developing such a system. It allows the analysis of planar human motion (e.g., gait) without tracking markers. The system is composed of one fixed camera that acquires an image sequence of a human in motion. Processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time; second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched to the sets of pixels previously extracted, using a specific fuzzy clustering process. Moreover, an optical flow procedure predicts the model location at each acquisition time from its location at the previous time. Finally, we present some results of this process applied to a leg in motion.

  16. Exploding Nitromethane in Silico, in Real Time.

    PubMed

    Fileti, Eudes Eterno; Chaban, Vitaly V; Prezhdo, Oleg V

    2014-10-02

    Nitromethane (NM) is widely applied in chemical technology as a solvent for extraction, cleaning, and chemical synthesis. NM was considered safe for a long time, until a railroad tanker car exploded in 1958. We investigate the detonation kinetics and explosion reaction mechanisms in a variety of systems consisting of NM, molecular oxygen, and water vapor. Reactive molecular dynamics allows us to simulate reactions in the time domain, as they occur in real life. The high polarity of the NM molecule is shown to play a key role, driving the first exothermic step of the reaction. Rapid temperature and pressure growth stimulate the subsequent reaction steps. Oxygen is important for faster oxidation, and its optimal concentration is in agreement with the proposed reaction mechanism. Addition of water (50 mol %) inhibits detonation; however, water does not prevent detonation entirely. The reported results provide important insights for improving applications of NM and preserving the safety of industrial processes.

  17. Typical Toddlers' Participation in “Just-in-Time” Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study

    PubMed Central

    Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-01-01

    Purpose: Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary “just in time” on an AAC application with minimized demands. Method: A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10–22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. Results: All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Conclusions: Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age. PMID:28586825

  18. Graphical modeling and query language for hospitals.

    PubMed

    Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris

    2013-01-01

    So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact by HIT likely lies in the complexity of business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML Class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) runtime data of a hospital's processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages, allowing specification of the loosely defined sequence of steps to be performed in the clinical process. The main advantages of PQL lie in two areas, usability and efficiency: 1) data are viewed through the "glasses" of a familiar process; 2) the simple and easy-to-perceive means of setting filtering conditions require no more expertise than using spreadsheet applications; 3) the dynamic response to each step in construction of the complete query shortens the learning curve greatly and reduces the error rate; and 4) the selected means of filtering and data retrieval allow queries to execute in O(n) time with respect to the size of the dataset. We plan to continue this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages. The second step is to evaluate the usability of the proposed language and tool with physicians from several hospitals in Latvia, working with real data from these hospitals. Our third step is to develop an efficient implementation of the query language.

  19. A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease: A prospective pilot study.

    PubMed

    Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V; Hu, Bin

    2017-02-01

    Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs on an iPod Touch and calculates step height (SH) in real time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real time through wireless headphones upon maintenance of repeated large-amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P<0.01). Wearable device technology can be used to enable musically contingent SIP training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients.

  20. Method and apparatus for combinatorial logic signal processor in a digitally based high speed x-ray spectrometer

    DOEpatents

    Warburton, William K.; Zhou, Zhiquing

    1999-01-01

    A high speed, digitally based, signal processing system which accepts a digitized input signal and detects the presence of step-like pulses in this data stream, extracts filtered estimates of their amplitudes, inspects for pulse pileup, and records input pulse rates and system livetime. The system has two parallel processing channels: a slow channel, which filters the data stream with a long time constant trapezoidal filter for good energy resolution; and a fast channel which filters the data stream with a short time constant trapezoidal filter, detects pulses, inspects for pileups, and captures peak values from the slow channel for good events. The presence of a simple digital interface allows the system to be easily integrated with a digital processor to produce accurate spectra at high count rates and allow all spectrometer functions to be fully automated. Because the method is digitally based, it allows pulses to be binned based on time related values, as well as on their amplitudes, if desired.
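
    The two-channel idea can be sketched with a pair of trapezoidal filters implemented as differences of moving averages (an illustrative reconstruction, not the patented pipeline; thresholds and filter lengths are made up): the short filter flags step-like pulses, and the long filter's flat top is read for the amplitude.

    ```python
    import numpy as np

    def trapezoid(v, k, m):
        """Causal trapezoidal filter: difference of two length-k moving
        averages separated by a gap of m samples. An ideal step of height h
        produces a trapezoid that is flat (= h) for ~m samples."""
        n = len(v)
        c = np.cumsum(np.concatenate(([0.0], v)))
        L = 2 * k + m                              # total filter span
        lead = (c[L:] - c[L - k:n + 1 - k]) / k    # mean of v[t-k+1 .. t]
        trail = (c[k:n + 1 - k - m] - c[:n + 1 - L]) / k
        out = np.zeros(n)
        out[L - 1:] = lead - trail
        return out

    rng = np.random.default_rng(2)
    v = rng.normal(0.0, 0.05, 4000)     # digitized noise baseline
    v[1000:] += 1.0                     # two step-like pulses
    v[2500:] += 0.6

    slow = trapezoid(v, k=400, m=100)   # long time constant: energy resolution
    fast = trapezoid(v, k=20, m=4)      # short time constant: detection/pileup

    # Detect on the fast channel, then capture the slow channel's flat top:
    hits = np.flatnonzero((fast[1:] > 0.3) & (fast[:-1] <= 0.3)) + 1
    for t in hits:
        print(f"pulse near sample {t}: amplitude ~ {slow[t + 450]:.3f}")
    ```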

  1. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting.

    PubMed

    Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement in a Bland-Altman plot, with a mean difference of 0.61 log₁₀ IU/ml and limits of agreement of -1.82 to 3.03 log₁₀ IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log₁₀ IU/ml) for the one-step PCR method ranged from 0.33 to 0.59 and from 0.28 to 0.48, respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.

  2. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting

    PubMed Central

    Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement in a Bland-Altman plot, with a mean difference of 0.61 log₁₀ IU/ml and limits of agreement of -1.82 to 3.03 log₁₀ IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log₁₀ IU/ml) for the one-step PCR method ranged from 0.33 to 0.59 and from 0.28 to 0.48, respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. How to cite this article: Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15. PMID:29201678

  3. Controlling dental enamel-cavity ablation depth with optimized stepping parameters along the focal plane normal using a three axis, numerically controlled picosecond laser.

    PubMed

    Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong

    2015-02-01

    The purpose of this study was to establish a depth-control method for enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error had a minimum of 2.25 μm, and 450-μm-deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.

  4. Index extraction for electromagnetic field evaluation of high power wireless charging system.

    PubMed

    Park, SangWook

    2017-01-01

    This paper presents precise dosimetry for a highly resonant wireless power transfer (HR-WPT) system using an anatomically realistic human voxel model. The dosimetry for the HR-WPT system, designed to operate at 13.56 MHz, one of the ISM frequency bands, is conducted at various distances between the human model and the system, and under both aligned and misaligned conditions of the transmitting and receiving circuits. The specific absorption rates in the human body are computed by a two-step approach: in the first step, the field generated by the HR-WPT system is calculated, and in the second step the specific absorption rates are computed with the scattered-field finite-difference time-domain method, regarding the fields obtained in the first step as the incident fields. The safety compliance for non-uniform field exposure from the HR-WPT system is discussed with reference to the international safety guidelines. Furthermore, the coupling factor concept is employed to relax the maximum allowable transmitting power. Coupling factors derived from the dosimetry results are presented. In this calculation, the external magnetic field limit for the HR-WPT system can be relaxed by approximately four times using the coupling factor in the worst exposure scenario.

  5. Economic Efficiency and Investment Timing for Dual Water Systems

    NASA Astrophysics Data System (ADS)

    Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan

    1987-10-01

    A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.

  6. Optimizing python-based ROOT I/O with PyPy's tracing just-in-time compiler

    NASA Astrophysics Data System (ADS)

    Lavrijsen, Wim T. L. P.

    2012-12-01

    The Python programming language allows objects and classes to respond dynamically to the execution environment. Most of this, however, is made possible through language hooks which by definition can not be optimized and thus tend to be slow. The PyPy implementation of Python includes a tracing just in time compiler (JIT), which allows similar dynamic responses but at the interpreter-, rather than the application-level. Therefore, it is possible to fully remove the hooks, leaving only the dynamic response, in the optimization stage for hot loops, if the types of interest are opened up to the JIT. A general opening up of types to the JIT, based on reflection information, has already been developed (cppyy). The work described in this paper takes it one step further by customizing access to ROOT I/O to the JIT, allowing for fully automatic optimizations.

  7. Lecture capturing assisted teaching and learning experience

    NASA Astrophysics Data System (ADS)

    Chen, Li

    2015-03-01

    When it comes to learning, a deep understanding of the material and a broad base of knowledge are equally important. However, given the limited amount of semester time, instructors often find themselves struggling to reach both aspects at the same time and are often forced to make a choice between the two. On one hand, we would like to spend much time training our students, with demonstrations, step-by-step guidance and practice, to develop strong critical thinking and problem-solving skills. On the other hand, we also would like to cover a wide range of content topics to broaden our students' understanding. In this presentation, we propose a working scheme that may assist in achieving these two goals at the same time without sacrificing either one. With the help of recorded and pre-recorded lectures and other class materials, it allows instructors to spend more class time on developing critical thinking and problem-solving skills, and on applying and connecting principles to real-life phenomena. It also allows our students to digest the material at a pace they are comfortable with by watching the recorded lectures over and over. Students now have a backup to refer to when they have random mistakes and/or missing spots in their notes, and hence take more ownership of their learning. Advanced technology offers flexibility in how and when content can be delivered, and has been assisting toward better teaching and learning strategies.

  8. Real-time implementation of logo detection on open source BeagleBoard

    NASA Astrophysics Data System (ADS)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  9. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, the big size of the derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and by nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
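
    The gist of the vectorized operator can be illustrated with NumPy slicing (a minimal sketch, not the FDwave package): with each field stored as a matrix, a single slice expression updates every interior node at once, so no explicit loops over grid nodes are needed.

    ```python
    import numpy as np

    c1, c2 = 9.0 / 8.0, -1.0 / 24.0        # standard 4th-order staggered weights

    def dx_staggered(f, dx):
        """d/dx evaluated half a cell ahead of f's nodes, along axis 0."""
        d = np.zeros_like(f)
        d[2:-2, :] = (c1 * (f[3:-1, :] - f[2:-2, :]) +
                      c2 * (f[4:, :] - f[1:-3, :])) / dx
        return d

    # One velocity-stress style update applied to the whole grid at once:
    nx, nz, dx, dt, rho = 300, 300, 5.0, 5e-4, 2000.0
    stress = np.zeros((nx, nz))
    stress[nx // 2, nz // 2] = 1.0         # point disturbance
    vx = (dt / rho) * dx_staggered(stress, dx)   # whole-grid update, no loops
    print("max |vx| after one step:", np.abs(vx).max())
    ```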

  10. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond order potential, allows for reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its time-step, which is one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges in simulation capability to reach spatio-temporal scales of nanometers and nanoseconds. The very recent advances of graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to manage the computing power and memory demands imposed on computer hardware by ReaxFF MD. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with a NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times over van Duin et al.'s FORTRAN codes in LAMMPS on 8 CPU cores and 6 times over LAMMPS' C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. A versatile semi-permanent sequential bilayer/diblock polymer coating for capillary isoelectric focusing.

    PubMed

    Bahnasy, Mahmoud F; Lucy, Charles A

    2012-12-07

    A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we show the method development and optimization for capillary isoelectric focusing (cIEF) separations based on the developed sequential coating. Electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology using pH 3-10, 2.5% (v/v) carrier ampholytes. Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Jun; Huang, Jingfang

    2008-01-01

    In this paper, a new class of numerical methods for the accurate and efficient solution of parabolic partial differential equations is presented. Unlike the traditional method of lines (MoL), the new Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T) first discretizes the temporal direction using Gaussian type nodes and spectral integration, and symbolically applies low-order time marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as the Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error by solving a decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.

  13. Depicting Changes in Multiple Symptoms Over Time.

    PubMed

    Muehrer, Rebecca J; Brown, Roger L; Lanuza, Dorothy M

    2015-09-01

    Ridit analysis, an acronym for Relative to an Identified Distribution, is a method for assessing change in ordinal data and can be used to show how individual symptoms change or remain the same over time. The purposes of this article are to (a) describe how to use ridit analysis to assess change in a symptom measure using data from a longitudinal study, (b) give a step-by-step example of ridit analysis, (c) show the clinical relevance of applying ridit analysis, and (d) display results in an innovative graphic. Mean ridit effect sizes were calculated for the frequency and distress of 64 symptoms in lung transplant patients before and after transplant. Results were displayed in a bubble graph. Ridit analysis allowed us to maintain the specificity of individual symptoms and to show how each symptom changed or remained the same over time. The bubble graph provides an efficient way for clinicians to identify changes in symptom frequency and distress over time. © The Author(s) 2014.
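
    As a concrete illustration of the computation (with invented counts, not the study's data): the ridit of each ordinal category is the proportion of a reference distribution falling below that category plus half the proportion within it, and a comparison group's mean ridit then reads as a probability-like effect size, with 0.5 meaning no change relative to the reference.

    ```python
    import numpy as np

    def ridits(reference_counts):
        """Ridit of each ordinal category relative to a reference
        distribution: proportion of the reference below the category
        plus half the proportion within it."""
        p = np.asarray(reference_counts, dtype=float)
        p /= p.sum()
        below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
        return below + 0.5 * p

    # Hypothetical counts of a symptom's distress rating
    # (none/mild/moderate/severe) before (reference) and after transplant:
    pre = np.array([10, 25, 40, 25])
    post = np.array([35, 30, 25, 10])
    r = ridits(pre)
    mean_ridit = (r * post / post.sum()).sum()
    # < 0.5 means the post-transplant group tends toward lower
    # (less distressing) categories than the reference.
    print(f"mean ridit = {mean_ridit:.3f}")       # prints 0.310 here
    ```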

  14. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacón, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)

  15. Single-pass incremental force updates for adaptively restrained molecular dynamics.

    PubMed

    Singh, Krishna Kant; Redon, Stephane

    2018-03-30

    Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force-update algorithms to efficiently simulate a system using ARMD. We assessed the different algorithms with speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force-update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to simulate a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.

  16. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various forms of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance for understanding how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators for these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles must be evaluated at each time step [2]. This kind of problem is well adapted for General Purpose Graphics Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes and to improve the calculation time. This code should be of importance for linking radiation track structure simulations and DNA damage models.
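
    To give a flavor of what such Green-function sampling involves: for the simplest case, an isolated pair undergoing irreversible diffusion-limited reaction at a contact radius R, the reaction probability over a time step has a closed form (the classic Smoluchowski result, not the more general reversible or partially reflecting Green functions of refs. [3-4]). A sketch with illustrative parameter values:

        import numpy as np
        from scipy.special import erfc

        def reaction_probability(r0, R, D, dt):
            """Probability that a pair at separation r0 > R reacts within dt,
            for a fully absorbing boundary at r = R and mutual diffusivity D."""
            return (R / r0) * erfc((r0 - R) / (2.0 * np.sqrt(D * dt)))

        # illustrative numbers only (SI units: m, m^2/s, s)
        p = reaction_probability(r0=1.0e-9, R=0.5e-9, D=5.0e-9, dt=1.0e-12)
        print(p)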

  17. Proposed phase 2/ step 2 in-vitro test on basis of EN 14561 for standardised testing of the wound antiseptics PVP-iodine, chlorhexidine digluconate, polihexanide and octenidine dihydrochloride.

    PubMed

    Schedler, Kathrin; Assadian, Ojan; Brautferger, Uta; Müller, Gerald; Koburger, Torsten; Classen, Simon; Kramer, Axel

    2017-02-13

    Currently, there is no agreed standard for exploring the antimicrobial activity of wound antiseptics in a phase 2/ step 2 test protocol. In the present study, a standardised in-vitro test is proposed, which allows to test potential antiseptics in a more realistically simulation of conditions found in wounds as in a suspension test. Furthermore, factors potentially influencing test results such as type of materials used as test carrier or various compositions of organic soil challenge were investigated in detail. This proposed phase 2/ step 2 test method was modified on basis of the EN 14561 by drying the microbial test suspension on a metal carrier for 1 h, overlaying the test wound antiseptic, washing-off, neutralization, and dispersion at serial dilutions at the end of the required exposure time yielded reproducible, consistent test results. The difference between the rapid onset of the antiseptic effect of PVP-I and the delayed onset especially of polihexanide was apparent. Among surface-active antimicrobial compounds, octenidine was more effective than chlorhexidine digluconate and polihexanide, with some differences depending on the test organisms. However, octenidine and PVP-I were approximately equivalent in efficiency and microbial spectrum, while polihexanide required longer exposure times or higher concentrations for a comparable antimicrobial efficacy. Overall, this method allowed testing and comparing differ liquid and gel based antimicrobial compounds in a standardised setting.

  18. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    NASA Astrophysics Data System (ADS)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included in existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, the Dreicer generation of runaway electrons, and the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
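
    The pitfall the authors highlight is that naive adaptivity can bias the statistics unless the Brownian increments stay consistent as the step size changes. A minimal sketch of one safe pattern, step doubling with a shared Wiener increment, is given below; it is illustrative only (scalar SDE, not the paper's momentum-space operator), and a production code would additionally reuse rejected increments via a Brownian bridge:

        import numpy as np

        rng = np.random.default_rng(0)

        def adaptive_em_step(x, t, dt, drift, diff, tol):
            """One adaptive Euler-Maruyama step: compare a full step against
            two half steps driven by the *same* Brownian path, accept/reject."""
            while True:
                dw1 = rng.normal(0.0, np.sqrt(dt / 2))
                dw2 = rng.normal(0.0, np.sqrt(dt / 2))
                dw = dw1 + dw2                    # increment for the full step
                x_full = x + drift(x, t) * dt + diff(x, t) * dw
                x_half = x + drift(x, t) * (dt / 2) + diff(x, t) * dw1
                x_half = x_half + drift(x_half, t + dt / 2) * (dt / 2) \
                                + diff(x_half, t + dt / 2) * dw2
                err = abs(x_full - x_half)
                if err <= tol:
                    dt_next = min(2 * dt, 0.9 * dt * (tol / max(err, 1e-300)) ** 0.5)
                    return x_half, t + dt, dt_next
                dt /= 2   # reject; strictly, the discarded path should be bridged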

  19. Benefit from NASA

    NASA Image and Video Library

    2004-04-15

    Marshall Space Flight Center engineers helped North American Marine Jet (NAMJ), Inc. improve the proposed design of a new impeller for a jet-propulsion system. With a three-dimensional computer model of the new marine jet engine blades, engineers were able to quickly create a solid polycarbonate model of it. The rapid prototyping allowed the company to avoid many time-consuming and costly steps in creating the impeller.

  20. Benefit from NASA

    NASA Image and Video Library

    1996-01-01

    Marshall Space Flight Center engineers helped North American Marine Jet (NAMJ), Inc. improve the proposed design of a new impeller for a jet-propulsion system. With a three-dimensional computer model of the new marine jet engine blades, engineers were able to quickly create a solid polycarbonate model of it. The rapid prototyping allowed the company to avoid many time-consuming and costly steps in creating the impeller.

  1. Exploring the Application of a Conceptual Framework in a Social MALL App

    ERIC Educational Resources Information Center

    Read, Timothy; Bárcena, Elena; Kukulska-Hulme, Agnes

    2016-01-01

    This article presents a prototype social Mobile Assisted Language Learning (henceforth, MALL) app based on Kukulska-Hulme's (2012) conceptual framework. This research allows the exploration of time, place and activity type as key factors in the design of MALL apps, and is the first step toward a systematic analysis of such a framework in this type…

  2. A first-order k-space model for elastic wave propagation in heterogeneous media.

    PubMed

    Firouzi, K; Cox, B T; Treeby, B E; Saffari, N

    2012-09-01

    A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure the solution is exact for homogeneous wave propagation for time steps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
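
    The key ingredient, the k-space adjusted spectral derivative, can be sketched in one dimension: the usual ik factor is scaled by sinc(c_ref k Δt / 2), which makes the time stepping exact for homogeneous media at a reference sound speed c_ref. This is a cartoon of the adjustment only; the paper's scheme operates on the full staggered 3D stress-velocity system.

        import numpy as np

        def kspace_derivative(f, dx, dt, c_ref):
            """Spectral d/dx with the k-space correction sinc(c_ref*k*dt/2)."""
            n = f.size
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
            # np.sinc is the normalized sinc: np.sinc(x) = sin(pi*x)/(pi*x)
            kappa = np.sinc(c_ref * k * dt / (2.0 * np.pi))
            return np.real(np.fft.ifft(1j * k * kappa * np.fft.fft(f)))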

  3. Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.

    PubMed

    La, Hung Manh; Sheng, Weihua

    2013-04-01

    In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
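
    The first phase of the fusion algorithm reduces to iterated local averaging over the communication graph. A minimal sketch of an average consensus update follows (illustrative; the paper's filters additionally weight by measurement confidence and run while the sensors move):

        import numpy as np

        # adjacency matrix of the sensor network (undirected, 4 nodes)
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
        eps = 0.25   # step size; must be below 1/(max node degree) for stability

        x = np.array([1.0, 3.0, 2.0, 6.0])   # noisy local estimates of the field
        for _ in range(100):
            # x_i <- x_i + eps * sum_j a_ij (x_j - x_i)
            x = x + eps * (A @ x - A.sum(axis=1) * x)
        print(x)   # all entries approach the network average, 3.0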

  4. A Compact Tandem Two-Step Laser Time-of-Flight Mass Spectrometer for In Situ Analysis of Non-Volatile Organics on Planetary Surfaces

    NASA Technical Reports Server (NTRS)

    Getty, Stephanie A.; Brinckerhoff, William B.; Li, Xiang; Elsila, Jamie; Cornish, Timothy; Ecelberger, Scott; Wu, Qinghao; Zare, Richard

    2014-01-01

    Two-step laser desorption mass spectrometry is a technique well suited to the analysis of high-priority classes of organics, such as polycyclic aromatic hydrocarbons, present in complex samples. The use of decoupled desorption and ionization laser pulses allows for sensitive and selective detection of structurally intact organic species. We have recently demonstrated the implementation of this advancement in laser mass spectrometry in a compact, flight-compatible instrument that could feasibly be the centerpiece of an analytical science payload as part of a future spaceflight mission to a small body or icy moon.

  5. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, Rahul; Hamm, Nicholas Alexander Samuel; van der Tol, Christiaan; Stein, Alfred

    2016-03-01

    Gross primary production (GPP) can be separated from flux tower measurements of net ecosystem exchange (NEE) of CO2. This is used increasingly to validate process-based simulators and remote-sensing-derived estimates of simulated GPP at various time steps. Proper validation includes the uncertainty associated with this separation. In this study, uncertainty assessment was done in a Bayesian framework. It was applied to data from the Speulderbos forest site, The Netherlands. We estimated the uncertainty in GPP at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements. The NRH model provides a robust empirical relationship between radiation and GPP. It includes the degree of curvature of the light response curve, radiation and temperature. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. We defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to estimate the uncertainty in the separated GPP from the posterior distribution at half-hourly time steps. This time series also allowed us to estimate the uncertainty at daily time steps. We compared the informative with the non-informative prior distributions of the NRH parameters and found that both choices produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
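
    For concreteness, the non-rectangular hyperbola (NRH) light response and a bare-bones Metropolis sampler over its parameters might look as follows. This is a sketch under assumed Gaussian noise and flat priors with synthetic data; the study's actual model adds temperature dependence, informative priors, and 10-day fitting windows.

        import numpy as np

        def nrh(par, I):
            """Non-rectangular hyperbola: GPP as a function of radiation I."""
            alpha, pmax, theta = par      # initial slope, asymptote, curvature
            s = alpha * I + pmax
            return (s - np.sqrt(s * s - 4.0 * theta * alpha * I * pmax)) / (2.0 * theta)

        rng = np.random.default_rng(1)
        I_obs = rng.uniform(0.0, 1500.0, 300)                   # synthetic radiation
        gpp_obs = nrh([0.04, 18.0, 0.8], I_obs) + rng.normal(0.0, 1.0, 300)

        def log_post(par, sigma=1.0):
            alpha, pmax, theta = par
            if alpha <= 0 or pmax <= 0 or not 0.0 < theta < 1.0:
                return -np.inf                                   # physical bounds
            return -0.5 * np.sum((gpp_obs - nrh(par, I_obs)) ** 2) / sigma ** 2

        par = np.array([0.05, 20.0, 0.7])
        lp = log_post(par)
        samples = []
        for _ in range(20000):
            prop = par + rng.normal(0.0, [0.005, 0.5, 0.02])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:             # Metropolis accept
                par, lp = prop, lp_prop
            samples.append(par.copy())   # posterior draws -> GPP uncertainty bands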

  6. Time-elapsed screw insertion with microCT imaging.

    PubMed

    Ryan, M K; Mohtar, A A; Cleek, T M; Reynolds, K J

    2016-01-25

    Time-elapsed analysis of bone is an innovative technique that uses sequential image data to analyze bone mechanics under a given loading regime. This paper presents the development of a novel device capable of performing step-wise screw insertion into excised bone specimens, within the microCT environment, whilst simultaneously recording insertion torque, compression under the screw head and rotation angle. The system is computer controlled and screw insertion is performed in incremental steps of insertion torque. A series of screw insertion tests to failure was performed (n=21) to establish a relationship between the torque at head contact and the stripping torque (R² = 0.89). The test device was then used to perform step-wise screw insertion, stopping at intervals of 20%, 40%, 60% and 80% between screw head contact and screw stripping. Image data sets were acquired at each of these time points, as well as at head contact and post-failure. Examination of the image data revealed that the trabecular deformation resulting from increased insertion torque was restricted to within 1 mm of the outer diameter of the screw thread. Minimal deformation occurred prior to the step between the 80% time point and post-failure. The device presented has allowed, for the first time, visualization of the micro-mechanical response in the peri-implant bone with increased tightening torque. Further testing on more samples is expected to increase our understanding of the effects of increased tightening torque at the micro-structural level, and of the failure mechanisms of trabeculae. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
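
    Super time-stepping schemes of the RKL family are simple to write down. Below is a sketch of the first-order variant (RKL1) for 1D diffusion, with stage coefficients as I recall them from Meyer, Balsara and Aslam (2012); the paper uses the second-order RKL2, which has the same recursive structure with extra correction terms. Treat the coefficients as an assumption to be checked against that reference.

        import numpy as np

        def rkl1_step(u, dt, s, L):
            """One RKL1 super step of du/dt = L(u) using s sub-stages;
            stable for dt up to (s*s + s)/2 times the explicit limit."""
            w1 = 2.0 / (s * s + s)
            y_prev, y = u, u + w1 * dt * L(u)              # stages Y_0 and Y_1
            for j in range(2, s + 1):
                mu, nu = (2 * j - 1) / j, (1 - j) / j      # mu_j + nu_j = 1
                y, y_prev = mu * y + nu * y_prev + mu * w1 * dt * L(y), y
            return y

        n = 200
        dx = 1.0 / n
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        u = np.sin(2 * np.pi * x)
        lap = lambda v: (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
        dt_expl = 0.5 * dx**2                               # explicit diffusion limit
        s = 8
        u = rkl1_step(u, dt_expl * (s * s + s) / 2.0, s, lap)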

  8. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    NASA Astrophysics Data System (ADS)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.

  9. Modeling of the motion of automobile elastic wheel in real-time for creation of wheeled vehicles motion control electronic systems

    NASA Astrophysics Data System (ADS)

    Balakina, E. V.; Zotov, N. M.; Fedin, A. P.

    2018-02-01

    Modeling of the motion of the elastic wheel of a vehicle in real time is used in constructing different models for wheeled-vehicle motion control electronic systems, for automobile stand-simulators, etc. The accuracy and reliability of the simulation of the wheel motion parameters in real time, when rolling with slip under given road conditions, are determined not only by the choice of the model, but also by the inaccuracy and instability of the numerical calculation. It is established that this inaccuracy and instability depend on the size of the integration step and the numerical method being used. The inaccuracy and instability arising when the wheel rolls with slip were analyzed, and recommendations for reducing them were developed. It is established that the total allowable range of integration steps is 0.001-0.005 s; the strongest instability is manifested in the calculation of the angular and linear accelerations of the wheel; the weakest instability is manifested in the calculation of the translational velocity of the wheel and the movement of the wheel center; and the instability is smaller at large slip angles and on more slippery surfaces. A new average-acceleration method is suggested, which significantly reduces (by up to 100%) the instability of the solution in the calculation of all motion parameters of the elastic wheel, for different braking conditions and for the entire range of integration steps. The results of this research can be applied to the selection of control algorithms in vehicle motion control electronic systems and in testing stand-simulators.

  10. A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease

    PubMed Central

    Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V.; Hu, Bin

    2017-01-01

    Abstract Background: Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). Methods: This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs on an iPod Touch and calculates step height (SH) in real time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real time through wireless headphones upon maintenance of repeated large-amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. Results: While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P<0.01). Conclusion: Wearable device technology can be used to enable musically-contingent SIP training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients. PMID:28151878

  11. Two-step phase-shifting SPIDER

    NASA Astrophysics Data System (ADS)

    Zheng, Shuiqin; Cai, Yi; Pan, Xinjian; Zeng, Xuanke; Li, Jingzhen; Li, Ying; Zhu, Tianlong; Lin, Qinggang; Xu, Shixiang

    2016-09-01

    Comprehensive characterization of ultrafast optical fields is critical for ultrashort pulse generation and its applications. This paper combines two-step phase-shifting (TSPS) with spectral phase interferometry for direct electric-field reconstruction (SPIDER) to improve the reconstruction of ultrafast optical fields. This novel SPIDER can experimentally remove the dc portion that occurs in the traditional SPIDER method by recording two spectral interferograms with a π phase shift. As a result, the reconstructed results are much less disturbed by the time delay between the test pulse replicas and by the temporal width of the filter window, and are thus more reliable. Moreover, this SPIDER can work efficiently even when the time delay is so small, or the measured bandwidth so narrow, that strong overlap occurs between the dc and ac portions, which allows it to characterize test pulses with complicated temporal/spectral structures or narrow bandwidths.
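
    The dc-removal step is just a subtraction: recording a second interferogram with a π shift flips the sign of the interference (ac) term while the dc background is unchanged, so half the difference isolates the ac part. A toy illustration with generic synthetic fringes, not an actual SPIDER trace:

        import numpy as np

        w = np.linspace(0.0, 50.0, 2000)                  # frequency axis (arb. units)
        dc = 1.0 + 0.3 * np.exp(-((w - 25) / 12) ** 2)    # slowly varying background
        phase = 0.15 * (w - 25) ** 2 / 25 + 2.0 * w       # spectral phase + delay fringes

        i1 = dc + 0.5 * np.cos(phase)                     # first interferogram
        i2 = dc + 0.5 * np.cos(phase + np.pi)             # pi-shifted interferogram

        ac = 0.5 * (i1 - i2)                              # = 0.5*cos(phase); dc removed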

  12. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

    The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, simultaneously recording spectra of thirty to a hundred thousand points for each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine Digital Signal Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step consists of a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is the identification step, using a pattern recognition algorithm based on a library of known particle signatures including threat agents and background particles. The identification step includes integrating the two polarities for a final identification determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, a computer-based board that can interface directly to the two one-giga-sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs it is possible to achieve a processing speed of up to a thousand particles per second, while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput, and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities), thus maintaining the system's specificity for bio-detection.

  13. Modeling the stepping mechanism in negative lightning leaders

    NASA Astrophysics Data System (ADS)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepped manner via the mechanism of so-called space leaders, in contrast to positive ones, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is accounting for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account the asymmetry of the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an approximately twice weaker electric field to appear and propagate than negative ones. Extinction of the conducting channel as a possible path of its evolution is also taken into account. This allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  14. Initial growth and topography of 4,4'-biphenyldicarboxylic acid on Cu(001)

    NASA Astrophysics Data System (ADS)

    Poelsema, Bene; Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.

    2013-03-01

    We have investigated nucleation and initial growth of BDA on Cu(001) at 300-410 K, using LEEM and μLEED. BDA condenses in a 2D supramolecular c(8×8) network of lying molecules. The dehydrogenated molecules form hydrogen bonds with perpendicular adjacent ones. First, the adsorbed BDA molecules form a disordered dilute phase; at a sufficiently high density, the c(8×8) crystalline phase nucleates. From the equilibrium densities at different temperatures we obtain the 2D phase diagram. The phase coexistence line provides a cohesive energy of 0.35 eV. LEEM allows a detailed study of nucleation and growth of BDA on Cu(001) at low supersaturation. The real-time microscopic information allows a direct visualization of near-critical nuclei. At 332 K and a deposition rate of 1.4 × 10⁻⁶ ML/s we find a critical nucleus size of about 600 nm². The corresponding value obtained from classical nucleation theory agrees nicely with this direct result. We estimate the Gibbs free energy for nucleation under these conditions at 4 eV. The size fluctuations are an order of magnitude stronger than expected. At 410 K the influence of steps on the growth process becomes evident: domain growth is terminated by steps even when they are permeable to individual molecules. This leads to a novel Mullins-Sekerka type of growth instability: growth is very fast along the steps and less fast perpendicular to them. The large solid angle at the advancing edge of the condensate dictates the high growth rate along the step.

  15. Effect of a Cooling Step Treatment on a High-Voltage GaN LED During ICP Dry Etching

    NASA Astrophysics Data System (ADS)

    Lin, Yen-Sheng; Hsiao, Sheng-Yu; Tseng, Chun-Lung; Shen, Ching-Hsing; Chiang, Jung-Sheng

    2017-02-01

    In this study, a lower dislocation density at the GaN surface and a reduced current path are observed at the interface of a SiO2 isolation sidewall, using high-resolution transmission electron microscopy. The sample is grown using a 3-min cooling step treatment during inductively coupled plasma dry etching. A lower forward voltage is measured, the leakage current decreases from 53 nA to 32 nA, and the maximum output power increases from 354.8 W to 357.2 W for an input current of 30 mA. The microstructure and the optoelectronic properties of high-voltage light-emitting diodes are proven to be affected by the cooling step treatment, which allows enough time to release the thermal energy of the SiO2 isolation well.

  16. Effects of Convective Transport of Solute and Impurities on Defect-Causing Kinetics Instabilities

    NASA Technical Reports Server (NTRS)

    Vekilov, Peter G.; Higginbotham, Henry Keith (Technical Monitor)

    2001-01-01

    For in-situ studies of the formation and evolution of step patterns during the growth of protein crystals, we have designed and assembled an experimental setup based on Michelson interferometry, with the surface of the growing protein crystal as one of the reflective surfaces. The crystallization part of the device allows optical monitoring of a face of a crystal growing at a temperature stable to within 0.05 C in a developed solution flow of controlled direction and speed. The reference arm of the interferometer contains a liquid-crystal element that allows controlled shifts of the phase of the interferograms. We employ an image processing algorithm which combines five images with a π/2 phase difference between each pair of images. The images are transferred to a computer by a camera capable of capturing 6-8 frames per second. The device allows collection of data regarding growth over a relatively large area (approximately 0.3 sq. mm) in situ and in real time during growth. The estimated depth resolution of the phase-shifting interferometry is about 100 Å. The lateral resolution, depending on the zoom ratio, varies between 0.3 and 0.6 micrometers. We have now collected quantitative results on the onset, initial stages and development of instabilities in moving step trains on vicinal crystal surfaces at varying supersaturation, position on the facet, crystal size and temperature with the proteins ferritin, apoferritin and thaumatin. Comparisons with theory, especially with the AFM results on the molecular-level processes (see below), allow tests of the rationale for the effects of convective flows and, as a particular case, the lack thereof, on step bunching.
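
    A common five-frame algorithm with π/2 phase steps has the closed form φ = atan2(2(I2 − I4), 2I3 − I1 − I5), Hariharan's algorithm; whether the authors used exactly this variant is not stated, so take the sketch as representative of the class:

        import numpy as np

        def five_frame_phase(i1, i2, i3, i4, i5):
            """Recover the wrapped phase from five interferograms taken at
            phase shifts of -pi, -pi/2, 0, pi/2, pi (Hariharan algorithm)."""
            return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)

        # synthetic check on a single pixel with true phase 1.2 rad
        phi, a, b = 1.2, 1.0, 0.5
        shifts = (np.arange(5) - 2) * np.pi / 2
        frames = [a + b * np.cos(phi + s) for s in shifts]
        print(five_frame_phase(*frames))   # ~1.2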

  17. Advanced 3D image processing techniques for liver and hepatic tumor location and volumetry

    NASA Astrophysics Data System (ADS)

    Chemouny, Stephane; Joyeux, Henri; Masson, Bruno; Borne, Frederic; Jaeger, Marc; Monga, Olivier

    1999-05-01

    To assist radiologists and physicians in diagnosing, and in treatment planning and evaluation, in liver oncology, we have developed a fast and accurate segmentation of the liver and its lesions within CT-scan exams. The first step of our method is to reduce the spatial resolution of the CT images. This has two effects: it yields a nearly isotropic 3D data space and drastically decreases the computational time for further processing. In the second step, a 3D non-linear 'edge-preserving' smoothing filter is applied throughout the entire exam. In the third step, the 3D regions coming out of the second step are homogeneous enough to allow a quite simple segmentation process, based on morphological operations under supervisor control, ending up with accurate 3D regions of interest (ROI) for the liver and all the hepatic tumors. In the fourth step, the ROIs are set back into the original images, and features like volume and location are immediately computed and displayed. The segmentation we obtain is as precise as a manual one but is much faster.
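
    A rough outline of such a pipeline in code is given below. It is a schematic stand-in only: a median filter substitutes for the paper's 3D non-linear edge-preserving filter, and the threshold window and structure sizes are invented for illustration.

        import numpy as np
        from scipy import ndimage

        def segment_liver(volume, zoom=0.5, hu_window=(40, 200)):
            """Sketch of: downsample -> smoothing -> threshold ->
            morphological cleanup -> keep largest connected region."""
            small = ndimage.zoom(volume, zoom)               # step 1: reduce resolution
            smooth = ndimage.median_filter(small, size=3)    # step 2: smoothing
            mask = (smooth > hu_window[0]) & (smooth < hu_window[1])  # step 3
            mask = ndimage.binary_opening(mask, iterations=2)  # remove small bridges
            labels, n = ndimage.label(mask)
            if n == 0:
                return mask
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            return labels == (1 + int(np.argmax(sizes)))     # step 4: largest ROI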

  18. Using Curved Crystals to Study Terrace-Width Distributions.

    NASA Astrophysics Data System (ADS)

    Einstein, Theodore L.

    Recent experiments on curved crystals of noble and late transition metals (Ortega and Juurlink groups) have renewed interest in terrace width distributions (TWD) for vicinal surfaces. Thus, it is timely to discuss refinements of TWD analysis that are absent from the standard reviews. Rather than by Gaussians, TWDs are better described by the generalized Wigner surmise, with a power-law rise and a Gaussian decay, thereby including effects evident for weak step repulsion: skewness and peak shifts down from the mean spacing. Curved crystals allow analysis of several mean spacings with the same substrate, so that one can check the scaling with the mean width. This is important since such scaling confirms well-established theory. Failure to scale also can provide significant insights. Complicating factors can include step touching (local double-height steps), oscillatory step interactions mediated by metallic (but not topological) surface states, short-range corrections to the inverse-square step repulsion, and accounting for the offset between adjacent layers of almost all surfaces. We discuss how to deal with these issues. For in-plane misoriented steps there are formulas to describe the stiffness but not yet the strength of the elastic interstep repulsion. Supported in part by NSF-CHE 13-05892.
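
    For reference, the generalized Wigner surmise for the scaled terrace width s = w/⟨w⟩ takes the form below, with the constants fixed by normalization and unit mean; ϱ encodes the dimensionless step-repulsion strength via Ã = ϱ(ϱ − 2)/4. This is a standard statement reproduced from memory, so verify against the reviews.

        P_\varrho(s) = a_\varrho \, s^{\varrho} \, e^{-b_\varrho s^{2}},
        \qquad
        b_\varrho = \left[ \frac{\Gamma\!\left(\frac{\varrho+2}{2}\right)}
                                {\Gamma\!\left(\frac{\varrho+1}{2}\right)} \right]^{2},
        \qquad
        a_\varrho = \frac{2\, b_\varrho^{(\varrho+1)/2}}
                         {\Gamma\!\left(\frac{\varrho+1}{2}\right)}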

  19. NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs.

    PubMed

    Morisset, Dany; Dobnik, David; Hamels, Sandrine; Zel, Jana; Gruden, Kristina

    2008-10-01

    We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In the first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. The second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent-dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification on an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on the transgenic contents in a range of 0.1-25%. The performance of NAIMA is comparable to that of singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min are sufficient to achieve full amplification.

  20. NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs

    PubMed Central

    Morisset, Dany; Dobnik, David; Hamels, Sandrine; Žel, Jana; Gruden, Kristina

    2008-01-01

    We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In the first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. The second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent-dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification on an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on the transgenic contents in a range of 0.1–25%. The performance of NAIMA is comparable to that of singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min are sufficient to achieve full amplification. PMID:18710880

  1. Spectral-based propagation schemes for time-dependent quantum systems with application to carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Chen, Zuojing; Polizzi, Eric

    2010-11-01

    Effective modeling and numerical spectral-based propagation schemes are proposed for addressing the challenges in time-dependent quantum simulations of systems ranging from atoms, molecules, and nanostructures to emerging nanoelectronic devices. While time-dependent Hamiltonian problems can be formally solved by propagating the solutions along tiny simulation time steps, a direct numerical treatment is often considered too computationally demanding. In this paper, however, we propose to go beyond these limitations by introducing high-performance numerical propagation schemes to compute the solution of the time-ordered evolution operator. In addition to the direct Hamiltonian diagonalizations that can be efficiently performed using the new eigenvalue solver FEAST, we have designed a Gaussian propagation scheme and a basis-transformed propagation scheme (BTPS) which considerably reduce the simulation times needed per time interval. It is shown that BTPS offers the best computational efficiency, opening new perspectives for time-dependent simulations. Finally, these numerical schemes are applied to study the ac response of a (5,5) carbon nanotube within a three-dimensional real-space mesh framework.
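
    The diagonalization-based propagation the authors build on is compactly expressible: for a time-independent Hamiltonian, the evolution over Δt is ψ ← V e^(−iEΔt) V†ψ. A dense-matrix sketch, with numpy's eigensolver standing in for FEAST and ħ = 1:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        h = rng.normal(size=(n, n))
        h = (h + h.T) / 2                          # toy Hermitian Hamiltonian
        psi = np.zeros(n, complex)
        psi[0] = 1.0                               # initial state
        dt, steps = 0.01, 100

        evals, vecs = np.linalg.eigh(h)            # FEAST plays this role in the paper
        u = vecs @ np.diag(np.exp(-1j * evals * dt)) @ vecs.conj().T
        for _ in range(steps):                     # exact for time-independent h
            psi = u @ psi
        print(np.vdot(psi, psi).real)              # norm preserved (~1.0)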

  2. Plasmonic Optical Fiber Sensor Based on Double Step Growth of Gold Nano-Islands

    PubMed Central

    Vasconcelos, Helena

    2018-01-01

    The fabrication and characterization of optical fiber sensors for refractive index measurement, based on localized surface plasmon resonance (LSPR) with gold nano-islands obtained by single and by repeated thermal dewetting of gold thin films, are presented. Thin films of gold deposited on silica (SiO2) substrates and produced under different experimental conditions were analyzed by Scanning Electron Microscopy/Energy Dispersive X-ray Spectroscopy (SEM/EDS) and by optical means, allowing the formation of nano-islands to be identified and characterized. The wavelength shift sensitivity to the surrounding refractive index of sensors produced by single and by repeated dewetting is compared. While for the single-step dewetting a wavelength shift sensitivity of ~60 nm/RIU was calculated, for the repeated dewetting a value of ~186 nm/RIU was obtained, an increase of more than three times. It is expected that by changing the fabrication parameters and using other fiber sensor geometries, higher sensitivities may be achieved, allowing, in addition, for the possibility of tuning the plasmonic frequency. PMID:29677108

  3. Plasmonic Optical Fiber Sensor Based on Double Step Growth of Gold Nano-Islands.

    PubMed

    de Almeida, José M M M; Vasconcelos, Helena; Jorge, Pedro A S; Coelho, Luis

    2018-04-20

    The fabrication and characterization of optical fiber sensors for refractive index measurement, based on localized surface plasmon resonance (LSPR) with gold nano-islands obtained by single and by repeated thermal dewetting of gold thin films, are presented. Thin films of gold deposited on silica (SiO₂) substrates and produced under different experimental conditions were analyzed by Scanning Electron Microscopy/Energy Dispersive X-ray Spectroscopy (SEM/EDS) and by optical means, allowing the formation of nano-islands to be identified and characterized. The wavelength shift sensitivity to the surrounding refractive index of sensors produced by single and by repeated dewetting is compared. While for the single-step dewetting a wavelength shift sensitivity of ~60 nm/RIU was calculated, for the repeated dewetting a value of ~186 nm/RIU was obtained, an increase of more than three times. It is expected that by changing the fabrication parameters and using other fiber sensor geometries, higher sensitivities may be achieved, allowing, in addition, for the possibility of tuning the plasmonic frequency.

  4. A hybridized discontinuous Galerkin framework for high-order particle-mesh operator splitting of the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias

    2018-04-01

    A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This allows to formulate the projections between the Lagrangian particle space and the Eulerian finite element space in terms of local (i.e. cellwise) ℓ2-projections efficiently. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which excellently approach the incompressibility constraint in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward facing step and for the flow around a cylinder.
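
    The cellwise particle-to-mesh transfer can be pictured as a small least-squares solve per cell. Below is a stripped-down sketch for one cell with a 1D linear basis; the actual method projects onto the HDG finite element spaces, so this is only a cartoon of the local ℓ2-projection.

        import numpy as np

        def local_l2_projection(xp, up, x0, x1):
            """Project particle values up at positions xp (inside cell [x0, x1])
            onto a linear basis {1, xi}, xi being the cell-local coordinate."""
            xi = (xp - x0) / (x1 - x0)
            V = np.column_stack([np.ones_like(xi), xi])      # basis Vandermonde
            coeffs, *_ = np.linalg.lstsq(V, up, rcond=None)  # cellwise solve
            return coeffs                                    # mesh-side coefficients

        rng = np.random.default_rng(3)
        xp = rng.uniform(0.0, 1.0, 20)                       # particles in one cell
        up = 2.0 + 3.0 * xp + rng.normal(0, 0.01, xp.size)   # advected field values
        print(local_l2_projection(xp, up, 0.0, 1.0))         # ~[2, 3]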

  5. A conjugate heat transfer procedure for gas turbine blades.

    PubMed

    Croce, G

    2001-05-01

    A conjugate heat transfer procedure, allowing the use of different solvers on the solid and fluid domain(s), is presented. Information exchange between the solid and fluid solutions is limited to boundary condition values, and this exchange is carried out at every pseudo-time step. The global convergence rate of the procedure is thus of the same order of magnitude as that of stand-alone computations.

  6. Estimating time-varying exposure-outcome associations using case-control data: logistic and case-cohort analyses.

    PubMed

    Keogh, Ruth H; Mangtani, Punam; Rodrigues, Laura; Nguipdop Djomo, Patrick

    2016-01-05

    Traditional analyses of standard case-control studies using logistic regression do not allow estimation of time-varying associations between exposures and the outcome. We present two approaches which allow this. The motivation is a study of vaccine efficacy as a function of time since vaccination. Our first approach is to estimate time-varying exposure-outcome associations by fitting a series of logistic regressions within successive time periods, reusing controls across periods. Our second approach treats the case-control sample as a case-cohort study, with the controls forming the subcohort. In the case-cohort analysis, controls contribute information at all times they are at risk. Extensions allow left truncation, frequency matching and, using the case-cohort analysis, time-varying exposures. Simulations are used to investigate the methods. The simulation results show that both methods give correct estimates of time-varying effects of exposures using standard case-control data. Using the logistic approach there are efficiency gains by reusing controls over time and care should be taken over the definition of controls within time periods. However, using the case-cohort analysis there is no ambiguity over the definition of controls. The performance of the two analyses is very similar when controls are used most efficiently under the logistic approach. Using our methods, case-control studies can be used to estimate time-varying exposure-outcome associations where they may not previously have been considered. The case-cohort analysis has several advantages, including that it allows estimation of time-varying associations as a continuous function of time, while the logistic regression approach is restricted to assuming a step function form for the time-varying association.
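
    A schematic of the first approach, fitting one logistic regression per time period with controls reused across periods, is sketched below. The column names are hypothetical, and a real analysis would additionally handle matching factors and left truncation as the paper describes.

        import pandas as pd
        import statsmodels.api as sm

        def period_log_odds(df, breaks):
            """Fit one logistic regression per period, reusing all controls.
            df columns (hypothetical): 'case' (0/1), 'vaccinated' (0/1),
            'event_time' (onset time for cases; ignored for controls)."""
            controls = df[df['case'] == 0]
            estimates = []
            for lo, hi in zip(breaks[:-1], breaks[1:]):
                cases = df[(df['case'] == 1)
                           & (df['event_time'] >= lo) & (df['event_time'] < hi)]
                sub = pd.concat([cases, controls])       # controls reused each period
                X = sm.add_constant(sub[['vaccinated']].astype(float))
                fit = sm.Logit(sub['case'].astype(float), X).fit(disp=0)
                estimates.append(fit.params['vaccinated'])  # log OR in [lo, hi)
            return estimates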

  7. Terahertz Quantum Cascade Structures Using Step Wells And Longitudinal Optical-Phonon Scattering

    DTIC Science & Technology

    2009-06-01

    emit many photons, which allows for differential quantum efficiencies greater than unity and hence higher power output. QCLs have been successfully...maintained. The step in the well allows for high injection efficiency due to the spatial separation of the wavefunctions. A step quantum well, in which at... (III.D.34), the photon density is determined to be n_photon = η_i Γ τ_photon (I − I_th) / (e A L) (III.D.35), where η_i is the internal quantum efficiency

  8. Adhesives, fillers and potting compounds. Second progress report, December 1, 1967--April 1, 1968

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichte, H.W.; Akst, I.B.

    1968-12-31

    Progress in the development program whose immediate purpose is to reduce set time of a silicone compound is described. Data are presented showing that a formulation of a current RTV silicone rubber with dibutyltin diacetate has a profitably lower set time than the same rubber in the present formulation which uses dibutyltin dilaurate, without increase in probability of either reversion or penalty to other weapons components. Time to set sufficiently to allow the next assembly step is 2 to 4 hours, compared to the 16 to 24 hours presently allowed or the 8 to 12 hours minimum attainable with the present formulation. The reduction is of the magnitude set as a goal, the attainment of which would increase production capacity enough to reduce the amount of new construction planned to accommodate weapons assembly programs.

  9. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  10. Label-Free Immuno-Sensors for the Fast Detection of Listeria in Food.

    PubMed

    Morlay, Alexandra; Roux, Agnès; Templier, Vincent; Piat, Félix; Roupioz, Yoann

    2017-01-01

    Foodborne diseases are a major concern for both the food industry and health organizations due to their economic costs and potential threats to human lives. For these reasons, specific regulations impose the screening of food products for pathogenic bacteria. Nevertheless, current methods, both reference and alternative, take up to several days and require many handling steps. In order to improve pathogen detection in food, we developed an immuno-sensor, based on Surface Plasmon Resonance imaging (SPRi) and bacterial growth, which allows the detection of a very low number of Listeria monocytogenes in a food sample in one day. Adequate sensitivity is achieved by the deposition of several antibodies in a micro-array format, allowing real-time detection. This label-free method thus reduces handling and time-to-result compared with current methods.

  11. Integrating Thermodynamic Models in Geodynamic Simulations: The Example of the Community Software ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.

    2017-12-01

    Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition, while the melting process itself is controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and allows different parametrizations of reactions and phase transitions to be integrated: they may alternatively be implemented as simple analytical expressions, as look-up tables, or be computed by a thermodynamics software package. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, versus decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required of an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, in dependence on the time scales of fluid flow and chemical reactions relative to each other. Our software provides a framework to integrate thermodynamic models in high-resolution, 3D simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.

  12. Method and apparatus for combinatorial logic signal processor in a digitally based high speed x-ray spectrometer

    DOEpatents

    Warburton, W.K.

    1999-02-16

    A high speed, digitally based, signal processing system is disclosed which accepts a digitized input signal, detects the presence of step-like pulses in this data stream, extracts filtered estimates of their amplitudes, inspects for pulse pileup, and records input pulse rates and system live time. The system has two parallel processing channels: a slow channel, which filters the data stream with a long time constant trapezoidal filter for good energy resolution; and a fast channel, which filters the data stream with a short time constant trapezoidal filter, detects pulses, inspects for pileups, and captures peak values from the slow channel for good events. The presence of a simple digital interface allows the system to be easily integrated with a digital processor to produce accurate spectra at high count rates and allows all spectrometer functions to be fully automated. Because the method is digitally based, it allows pulses to be binned based on time-related values, as well as on their amplitudes, if desired. 31 figs.
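
    The trapezoidal filters at the core of both channels can be sketched as the difference of two delayed moving averages, with a rise time of k samples and a flat top of m samples; for a step-like pulse the flat-top value estimates the amplitude. This is a schematic, not the patent's exact recursion:

        import numpy as np

        def trapezoidal_filter(x, k, m):
            """Difference of two length-k moving averages separated by k+m
            samples; the output settles to the step amplitude on the flat top."""
            ma = np.convolve(x, np.ones(k) / k, mode='full')[:x.size]
            delayed = np.concatenate([np.zeros(k + m), ma[:x.size - (k + m)]])
            return ma - delayed

        # step-like pulse of amplitude 100 on a noisy baseline
        rng = np.random.default_rng(2)
        x = np.concatenate([np.zeros(500), np.full(1500, 100.0)])
        x += rng.normal(0, 2, 2000)
        slow = trapezoidal_filter(x, k=200, m=50)   # long filter: energy resolution
        fast = trapezoidal_filter(x, k=10, m=5)     # short filter: detection/pileup
        print(slow.max())                           # ~100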

  13. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.
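
    The outer/inner splitting referred to throughout is the standard r-RESPA pattern: cheap fast forces advance with a small inner step, while the expensive slow forces (here, the 3D-RISM-KH mean solvation forces, extrapolated by GSFE between full solutions) kick only at the outer step. A generic sketch without a thermostat; in the paper it is the OIN thermostatting that actually stabilizes the huge outer steps:

        import numpy as np

        def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
            """One reversible RESPA step: slow-force half-kick, n_inner
            velocity-Verlet substeps with the fast force, closing half-kick."""
            dt_inner = dt_outer / n_inner
            v = v + 0.5 * dt_outer * f_slow(x) / m
            for _ in range(n_inner):
                v = v + 0.5 * dt_inner * f_fast(x) / m
                x = x + dt_inner * v
                v = v + 0.5 * dt_inner * f_fast(x) / m
            v = v + 0.5 * dt_outer * f_slow(x) / m
            return x, v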

  14. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and makes the analysis tractable by breaking the process down into small analyzable steps.
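
    A toy version of the Markov-chain bookkeeping described above; the component list, release probabilities, and transport-matrix entries are invented for illustration:

    ```python
    import numpy as np

    # Components: 0 = coring bit, 1 = tool body, 2 = sample tube (hypothetical)
    release = np.array([0.05, 0.02, 0.01])  # P(a VEM is released) per time step
    T = np.array([                          # T[i, j] = P(released VEM goes i -> j)
        [0.90, 0.08, 0.02],
        [0.05, 0.90, 0.05],
        [0.00, 0.02, 0.98],
    ])

    v = np.array([100.0, 20.0, 0.0])        # expected VEMs at start of sampling
    for step in range(1, 6):                # one discrete SAH step per iteration
        v = v * (1.0 - release) + (v * release) @ T   # retained + redistributed
        print(f"step {step}: expected VEMs on sample tube = {v[2]:.3f}")
    ```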

  15. Index extraction for electromagnetic field evaluation of high power wireless charging system

    PubMed Central

    2017-01-01

This paper presents precise dosimetry for a highly resonant wireless power transfer (HR-WPT) system using an anatomically realistic human voxel model. The dosimetry for the HR-WPT system, designed to operate at 13.56 MHz (one of the ISM bands), is conducted at various distances between the human model and the system, and under aligned and misaligned conditions of the transmitting and receiving circuits. The specific absorption rates in the human body are computed by a two-step approach: in the first step, the field generated by the HR-WPT system is calculated, and in the second step the specific absorption rates are computed with the scattered-field finite-difference time-domain method, treating the fields obtained in the first step as the incident fields. The safety compliance for non-uniform field exposure from the HR-WPT system is discussed with reference to the international safety guidelines. Furthermore, the coupling factor concept is employed to relax the maximum allowable transmitting power. Coupling factors derived from the dosimetry results are presented. In this calculation, the allowable external magnetic field from the HR-WPT system can be relaxed by approximately four times using the coupling factor in the worst exposure scenario. PMID:28708840

  16. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but, rather, a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of large-Δt total-variation-diminishing (TVD) constraints.
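
    A small sketch of the extension described above, taking first-order upwind on a periodic domain as the base scheme: a Courant number c > 1 is split into an exact integer-cell shift N and a fractional part Δc handled by the ordinary scheme:

    ```python
    import numpy as np

    def upwind_step(u, c_frac):
        """One explicit first-order upwind step; the usual range 0 <= c_frac <= 1."""
        return u - c_frac * (u - np.roll(u, 1))

    def big_courant_step(u, c):
        """Advance by any Courant number: exact integer-cell shift plus the
        'range-restricted' fractional piece handled by the ordinary scheme."""
        N = int(np.floor(c))
        return upwind_step(np.roll(u, N), c - N)

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)
    u = big_courant_step(u, 3.4)            # c = 3.4: stable despite 'violating' CFL
    print("max after one c = 3.4 step:", round(float(u.max()), 4))
    ```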

  17. A new indicator framework for quantifying the intensity of the terrestrial water cycle

    NASA Astrophysics Data System (ADS)

    Huntington, Thomas G.; Weiskel, Peter K.; Wolock, David M.; McCabe, Gregory J.

    2018-04-01

A quantitative framework for characterizing the intensity of the water cycle over land is presented, and illustrated using a spatially distributed water-balance model of the conterminous United States (CONUS). We approach water cycle intensity (WCI) from a landscape perspective; WCI is defined as the sum of precipitation (P) and actual evapotranspiration (AET) over a spatially explicit landscape unit of interest, averaged over a specified time period (step) of interest. The time step may be of any length for which data or simulation results are available (e.g., sub-daily to multi-decadal). We define the storage-adjusted runoff (Q′) as the sum of actual runoff (Q) and the rate of change in soil moisture storage (ΔS/Δt, positive or negative) during the time step of interest. The Q′ indicator is demonstrated to be mathematically complementary to WCI, in a manner that allows graphical interpretation of their relationship. For the purposes of this study, the indicators were demonstrated using long-term, spatially distributed model simulations with an annual time step. WCI was found to increase over most of the CONUS between the 1945 to 1974 and 1985 to 2014 periods, driven primarily by increases in P. In portions of the western and southeastern CONUS, Q′ decreased because of decreases in Q and soil moisture storage. Analysis of WCI and Q′ at temporal scales ranging from sub-daily to multi-decadal could improve understanding of the wide spectrum of hydrologic responses that have been attributed to water cycle intensification, as well as trends in those responses.
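
    The indicator arithmetic reduces to two sums; a minimal sketch with hypothetical annual values for one landscape unit:

    ```python
    # Hypothetical annual water-balance terms for one landscape unit [mm/yr]
    P, AET, Q, dS_dt = 900.0, 550.0, 340.0, 10.0

    WCI = P + AET               # water cycle intensity
    Q_prime = Q + dS_dt         # storage-adjusted runoff

    # With a closed balance P = AET + Q + dS/dt, Q' equals P - AET: the
    # complementarity that permits the graphical interpretation mentioned above.
    assert abs((P - AET) - Q_prime) < 1e-9
    print(f"WCI = {WCI} mm/yr, Q' = {Q_prime} mm/yr")
    ```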

  18. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  19. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  20. Three-dimensional GaN/AlN nanowire heterostructures by separating nucleation and growth processes.

    PubMed

    Carnevale, Santino D; Yang, Jing; Phillips, Patrick J; Mills, Michael J; Myers, Roberto C

    2011-02-09

Bottom-up nanostructure assembly has been a central theme of materials synthesis over the past few decades. Semiconductor quantum dots and nanowires provide additional degrees of freedom for charge confinement, strain engineering, and surface sensitivity, properties that are useful to a wide range of solid-state optical and electronic technologies. A central challenge is to understand and manipulate nanostructure assembly to reproducibly generate emergent structures with the desired properties. However, progress is hampered by the interdependence of nucleation and growth phenomena. Here we show that by dynamically adjusting the growth kinetics, it is possible to separate the nucleation and growth processes in spontaneously formed GaN nanowires using a two-step molecular beam epitaxy technique. First, a growth phase diagram for these nanowires is systematically developed, which allows for control of nanowire density over three orders of magnitude. Next, we show that by first nucleating nanowires at a low temperature and then growing them at a higher temperature, height and density can be independently selected while maintaining the target density over long growth times. GaN nanowires prepared using this two-step procedure are overgrown with three-dimensionally layered and topologically complex GaN/AlN heterostructures. By adjusting the growth temperature in the second growth step, either vertical or coaxial nanowire superlattices can be formed. These results indicate that the two-step method allows access to a variety of kinetics at which nanowire nucleation and adatom mobility are adjustable.

  1. One-step formation and sterilization of gellan and hyaluronan nanohydrogels using autoclave.

    PubMed

    Montanari, Elita; De Rugeriis, Maria Cristina; Di Meo, Chiara; Censi, Roberta; Coviello, Tommasina; Alhaique, Franco; Matricardi, Pietro

    2015-01-01

The sterilization of nanoparticles for biomedical applications is one of the challenges that must be faced in the development of nanoparticulate systems. Usually, autoclave sterilization cannot be applied because of stability concerns when polymeric nanoparticles are involved. This paper describes an innovative method which allows, in a single autoclave step, both the preparation and the sterilization of self-assembling nanohydrogels (NHs) obtained with cholesterol-derivatized gellan and hyaluronic acid. Moreover, using this approach, NHs can be easily loaded with drugs while they form in the autoclave. The obtained NHs dispersion can be lyophilized in the presence of a cryoprotectant, yielding the original NHs after re-dispersion in water.

  2. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task. Applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows the classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. Especially the latter feature is a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast setup of a well-working classification system.
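
    A rough sketch of the one-model-per-class recognition scheme, using the third-party hmmlearn package as an assumed stand-in (the paper's own implementation is not specified) and random features in place of real waveform parameterizations:

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assumed third-party dependency

    rng = np.random.default_rng(0)

    def make_sequences(offset, n_seq=20, length=50, dim=3):
        """Stand-in feature sequences (e.g. spectral features per frame)."""
        return [offset + rng.normal(size=(length, dim)) for _ in range(n_seq)]

    classes = {"rockfall": make_sequences(0.0), "earthquake": make_sequences(2.0)}

    # One HMM learned per class of interest, as in the abstract
    models = {}
    for name, seqs in classes.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        models[name] = GaussianHMM(n_components=3, n_iter=20).fit(X, lengths)

    # Classify a new event by maximum log-likelihood over the class models
    event = 2.0 + rng.normal(size=(50, 3))
    print(max(models, key=lambda name: models[name].score(event)))
    ```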

  3. Two-Step Incision for Periarterial Sympathectomy of the Hand.

    PubMed

    Jeon, Seung Bae; Ahn, Hee Chang; Ahn, Yong Su; Choi, Matthew Seung Suk

    2015-11-01

Surgical scars on the palmar surface of the hand may lead to functional as well as aesthetic and psychological consequences. The objective of this study was to introduce a new incision technique for periarterial sympathectomy of the hand and to compare the results of the new two-step incision technique with those of a Koman incision by using an objective questionnaire. A total of 40 patients (17 men and 23 women) with intractable Raynaud's disease or syndrome underwent surgery in our hospital, conducted by a single surgeon, between January 2008 and January 2013. Patients who had undergone extended sympathectomy or vessel graft were excluded. Clinical evaluation of postoperative scars was performed in both groups one year after surgery using the patient and observer scar assessment scale (POSAS) and the Wake Forest University rating scale. The total patient score was 8.59 (range, 6-15) in the two-step incision group and 9.62 (range, 7-18) in the Koman incision group. A significant difference was found between the groups in the total patient score (P=0.034) but not in the total observer score. Our analysis found no significant difference in preoperative and postoperative Wake Forest University rating scale scores between the two-step and Koman incision groups. The time required for recovery prior to returning to work after surgery was shorter in the two-step incision group, with a mean of 29.48 days in the two-step incision group and 34.15 days in the Koman incision group (P=0.03). Compared to the Koman incision, the new two-step incision technique provides better aesthetic results, similar symptom improvement, and a reduction in the recovery time required before returning to work. Furthermore, this incision allows the surgeon to access a wide surgical field and sufficient exposure of anatomical structures.

  4. Comparison of steam sterilization conditions efficiency in the treatment of Infectious Health Care Waste.

    PubMed

    Maamari, Olivia; Mouaffak, Lara; Kamel, Ramza; Brandam, Cedric; Lteif, Roger; Salameh, Dominique

    2016-03-01

Many studies show that the treatment of Infectious Health Care Waste (IHCW) in steam sterilization devices at usual operating standards does not allow for its proper treatment. Including a grinding component before sterilization allows better waste sterilization, but any hard metal object in the waste can damage the shredder. The first objective of the study is to verify that efficient IHCW treatment can occur at standard operating parameters, defined by the contact time-temperature couple, in steam treatment systems without a pre-mixing/fragmenting or pre-shredding step. The second objective is to establish scientifically whether the standard operating conditions for a steam treatment system including a pre-mixing/fragmenting step are sufficient to destroy the bacterial spores in the IHCW known to be the most difficult to treat. Results show that for efficient sterilization of dialysis cartridges in a pilot 60L steam treatment system, the process would require more than 20 min at 144°C without a pre-mixing/fragmenting step. In a 720L steam treatment system including pre-mixing/fragmenting paddles, only 10 min at 144°C are required to sterilize IHCW items proved to be sterilization challenges, such as dialysis cartridges and diapers, under normal conditions of rolling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. The iterative thermal emission method: A more implicit modification of IMC

    NASA Astrophysics Data System (ADS)

    Long, A. R.; Gentile, N. A.; Palmer, T. S.

    2014-11-01

For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of "pseudo-scattering" introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC, which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end of time step material temperature during a time step. A better estimate of the end of time step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties. The ITE IMC method does however yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).
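
    For orientation, a sketch of the standard gray Fleck factor the abstract alludes to, f = 1/(1 + αβcσ_P Δt) with β = 4aT³/(ρc_v); this is the textbook IMC form rather than anything specific to ITE IMC, and the numbers are arbitrary:

    ```python
    # Gray Fleck factor (textbook IMC form; illustrative numbers only).
    a = 0.01372    # radiation constant [jerks / (cm^3 keV^4)]
    c = 299.79     # speed of light [cm / shake]

    def fleck(T, sigma_P, rho, c_v, dt, alpha=1.0):
        """Probability that a photon absorbed during the step is not reemitted."""
        beta = 4.0 * a * T ** 3 / (rho * c_v)
        return 1.0 / (1.0 + alpha * beta * c * sigma_P * dt)

    for dt in (1e-3, 1e-2, 1e-1):   # time step in shakes
        print(f"dt = {dt}: f = {fleck(T=1.0, sigma_P=10.0, rho=1.0, c_v=0.1, dt=dt):.4f}")
    # Larger dt -> smaller f -> a larger effective-scattering fraction (1 - f),
    # the 'pseudo-scattering' whose temperature dependence ITE IMC re-estimates.
    ```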

  6. The iterative thermal emission method: A more implicit modification of IMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, A.R., E-mail: arlong.ne@tamu.edu; Gentile, N.A.; Palmer, T.S.

    2014-11-15

For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC, which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end of time step material temperature during a time step. A better estimate of the end of time step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties. The ITE IMC method does however yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).

  7. On coupling fluid plasma and kinetic neutral physics models

    DOE PAGES

    Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...

    2017-03-01

The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step and methods that better precondition the coupled system are under investigation.
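
    A toy illustration of treating stiff coupling rates implicitly with a Jacobian-free Newton-Krylov solver, here SciPy's generic implementation rather than the scheme under development in the paper: one backward-Euler step of a plasma/neutral density pair exchanged by fast ionization and recombination (all rates invented):

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    k_ion, k_rec = 50.0, 40.0        # fast coupling rates (arbitrary units)
    dt = 1.0                          # far larger than 1/k: explicit would blow up
    n_old = np.array([1.0, 0.5])      # [plasma, neutral] densities

    def residual(n):
        """Backward-Euler residual for dn_p/dt = k_ion*n_n - k_rec*n_p (and the
        negative of that for neutrals): F(n) = n - n_old - dt * rhs(n) = 0."""
        s = k_ion * n[1] - k_rec * n[0]
        return n - n_old - dt * np.array([s, -s])

    n_new = newton_krylov(residual, n_old)
    print(n_new, "total conserved:", n_new.sum())
    ```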

  8. Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama

    2001-01-01

An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation (DES) model is also implemented and its performance compared to the one-equation Spalart-Allmaras Reynolds-Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing including massive stall regimes. The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.

  9. Highly stereoselective construction of the C2 stereocentre of α-tocopherol (vitamin E) by asymmetric addition of Grignard reagents to ketones.

    PubMed

    Bieszczad, Bartosz; Gilheany, Declan G

    2017-08-09

    Tertiary alcohol precursors of both C2 diastereoisomers of α-tocopherol were prepared in three ways by our recently reported asymmetric Grignard synthesis. The versatility of Grignard chemistry inherent in its three-way disconnection was exploited to allow the synthesis of three product grades: 77 : 23 dr (5 steps), 81 : 19 dr (5 steps) and 96 : 4 dr (7 steps, one gram scale) from readily available and abundant starting materials. The products were converted to their respective α-tocopherols in 3 steps, which allowed a definitive re-assignment of their absolute configurations.

  10. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

In today's world, the need for spatial data is becoming so crucial that many organizations have begun to produce it themselves. In some circumstances, obtaining real-time integrated data requires a sustainable mechanism for real-time integration; a case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. The first step is identification of the objects in a relational database; the semantic relationships between them are then modelled and, subsequently, the ontology of each database is created. In the second step, the relative ontology is inserted into the database, and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy; the data remain unchanged, and thus the legacy applications provided can still be exploited.

  11. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded by combining different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
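
    A small plain-Python sketch of the cascaded-resolution behaviour for propositional Horn clauses (the DNA encoding itself is not modelled; the example program is hypothetical):

    ```python
    def run_horn_program(clauses, facts):
        """Forward chaining: repeatedly resolve clause bodies against known facts,
        mimicking the cascaded resolution steps the DNA system performs."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in clauses:
                if head not in derived and all(p in derived for p in body):
                    derived.add(head)     # one resolution step
                    changed = True
        return derived

    # Hypothetical program: p & q -> r,  r -> s
    clauses = [(("p", "q"), "r"), (("r",), "s")]
    print(run_horn_program(clauses, {"p", "q"}))   # {'p', 'q', 'r', 's'}
    ```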

  12. Two-step solar filament eruptions

    NASA Astrophysics Data System (ADS)

    Filippov, B.

    2018-04-01

Coronal mass ejections (CMEs) are closely related to eruptive filaments and usually are the continuation of the same eruptive process into the upper corona. There are failed filament eruptions, in which a filament decelerates and stops at some greater height in the corona. Sometimes the filament starts to rise again after several hours and develops into a successful eruption with CME formation. We propose a simple model for the interpretation of such two-step eruptions in terms of the equilibrium of a flux rope in a two-scale ambient magnetic field. The eruption is caused by a slow decrease of the holding magnetic field. The presence of two critical heights for the onset of the flux-rope vertical instability allows the flux rope to remain for some time after the first jump in a metastable equilibrium near the second critical height. If the decrease of the ambient field continues, the next eruption step follows.
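
    One way to make the two-critical-heights picture concrete is the decay-index criterion for the torus instability (roughly, instability where n = -d ln B/d ln h exceeds about 1.5); with a two-scale background field the criterion can be met in two separate height ranges, leaving a metastable window between them. A sketch under these assumptions, with invented field parameters (an illustrative criterion, not the paper's own model):

    ```python
    import numpy as np

    h = np.linspace(0.05, 20.0, 4000)          # height above the surface (arb. units)
    B = 100.0 / (1 + h / 0.5) ** 3 + 1.0 / (1 + h / 10.0) ** 3   # two-scale field

    n = -np.gradient(np.log(B), np.log(h))     # decay index of the holding field
    unstable = n > 1.5                         # nominal torus-instability threshold
    edges = h[1:][np.diff(unstable.astype(int)) != 0]
    print("critical heights:", np.round(edges, 2))
    # Two unstable ranges separated by a stable window: a rope launched at the
    # first critical height can halt and sit metastably below the second one.
    ```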

  13. Design of Magnetic Gelatine/Silica Nanocomposites by Nanoemulsification: Encapsulation versus in Situ Growth of Iron Oxide Colloids

    PubMed Central

    Allouche, Joachim; Chanéac, Corinne; Brayner, Roberta; Boissière, Michel; Coradin, Thibaud

    2014-01-01

    The design of magnetic nanoparticles by incorporation of iron oxide colloids within gelatine/silica hybrid nanoparticles has been performed for the first time through a nanoemulsion route using the encapsulation of pre-formed magnetite nanocrystals and the in situ precipitation of ferrous/ferric ions. The first method leads to bi-continuous hybrid nanocomposites containing a limited amount of well-dispersed magnetite colloids. In contrast, the second approach allows the formation of gelatine-silica core-shell nanostructures incorporating larger amounts of agglomerated iron oxide colloids. Both magnetic nanocomposites exhibit similar superparamagnetic behaviors. Whereas nanocomposites obtained via an in situ approach show a strong tendency to aggregate in solution, the encapsulation route allows further surface modification of the magnetic nanocomposites, leading to quaternary gold/iron oxide/silica/gelatine nanoparticles. Hence, such a first-time rational combination of nano-emulsion, nanocrystallization and sol-gel chemistry allows the elaboration of multi-component functional nanomaterials. This constitutes a step forward in the design of more complex bio-nanoplatforms. PMID:28344239

  14. Design of Magnetic Gelatine/Silica Nanocomposites by Nanoemulsification: Encapsulation versus in Situ Growth of Iron Oxide Colloids.

    PubMed

    Allouche, Joachim; Chanéac, Corinne; Brayner, Roberta; Boissière, Michel; Coradin, Thibaud

    2014-07-31

    The design of magnetic nanoparticles by incorporation of iron oxide colloids within gelatine/silica hybrid nanoparticles has been performed for the first time through a nanoemulsion route using the encapsulation of pre-formed magnetite nanocrystals and the in situ precipitation of ferrous/ferric ions. The first method leads to bi-continuous hybrid nanocomposites containing a limited amount of well-dispersed magnetite colloids. In contrast, the second approach allows the formation of gelatine-silica core-shell nanostructures incorporating larger amounts of agglomerated iron oxide colloids. Both magnetic nanocomposites exhibit similar superparamagnetic behaviors. Whereas nanocomposites obtained via an in situ approach show a strong tendency to aggregate in solution, the encapsulation route allows further surface modification of the magnetic nanocomposites, leading to quaternary gold/iron oxide/silica/gelatine nanoparticles. Hence, such a first-time rational combination of nano-emulsion, nanocrystallization and sol-gel chemistry allows the elaboration of multi-component functional nanomaterials. This constitutes a step forward in the design of more complex bio-nanoplatforms.

  15. Modernizing an ambulatory care pharmacy in a large multi-clinic institution.

    PubMed

    Miller, R F; Herrick, J D

    1979-03-01

The steps involved in modernizing an outdated outpatient pharmacy, including the functional planning process, development of a work-flow pattern which makes the patient an integral part of the system, budget considerations, and evaluation of the new pharmacy, are described. Objectives of the modernization were to: (1) provide a facility conducive to efficient and high quality services to the ambulatory patient; (2) provide an attractive and comfortable area for patients and staff; (3) provide a work flow which keeps the patient in the system and allows the pharmacist time for instruction and patient education; and (4) establish a patient medication record system. After one year of operation, average overall prescription volume increased by 50%, while average waiting time declined by 74%. Facility and procedural changes allowed the pharmacist to substantially increase patient counseling activity. The application of functional planning and facility design to the renovation and restructuring of an outpatient pharmacy allowed pharmacists to provide efficient, patient-oriented service.

  16. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to higher simulation precision while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to assess the efficiency of the filling simulation of complex 3D molds when applying adaptive temporal refinement.

  17. Open-Universe Theory for Bayesian Inference, Decision, and Sensing (OUTBIDS)

    DTIC Science & Technology

    2014-01-01

using a novel dynamic programming algorithm [6]. The second allows for tensor data, in which observations at a given time step exhibit... We developed a dynamical tensor model that gives far better estimation and system-identification results than the standard vectorization... inference. Third, unlike prior work that learns different pieces of the model independently, use matching between 3D models and 2D views and/or voting

  18. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

ENG/87D-25 Abstract: This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images... environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than

  19. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  20. Transcranial ultrasonic therapy based on time reversal of acoustically induced cavitation bubble signature

    PubMed Central

    Gâteau, Jérôme; Marsac, Laurent; Pernot, Mathieu; Aubry, Jean-Francois; Tanter, Mickaël; Fink, Mathias

    2010-01-01

Brain treatment through the skull with High Intensity Focused Ultrasound (HIFU) can be achieved with multichannel arrays and adaptive focusing techniques such as time reversal. This method requires a reference signal to be either emitted by a real source embedded in brain tissues or computed from a virtual source, using the acoustic properties of the skull derived from CT images. This non-invasive computational method focuses with precision, but suffers from modeling and repositioning errors that reduce the accessible acoustic pressure at the focus in comparison with fully experimental time reversal using an implanted hydrophone. In this paper, this simulation-based targeting has been used experimentally as a first step for focusing through an ex vivo human skull at a single location. It has enabled the creation of a cavitation bubble at focus that spontaneously emitted an ultrasonic wave received by the array. This active source signal has allowed 97%±1.1% of the reference pressure (hydrophone-based) to be restored at the geometrical focus. To target points around the focus with an optimal pressure level, conventional electronic steering from the initial focus has been combined with bubble generation. Thanks to step-by-step bubble generation, the electronic steering capabilities of the array through the skull were improved. PMID:19770084

  1. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    DOE PAGES

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; ...

    2017-02-03

A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
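
    A schematic of the pass/fail logic, not the CAM implementation: build the reference envelope of solution differences from the model's known time step sensitivity, then fail a modification whose ensemble of short-run differences exceeds it. All numbers below are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def short_run_rmse(perturbation, n_members=12):
        """Stand-in for an ensemble of short simulations: RMSE of each member's
        solution against the trusted baseline."""
        return np.abs(rng.normal(loc=perturbation, scale=0.02, size=n_members))

    # Reference envelope: differences caused purely by halving the time step
    ref = short_run_rmse(perturbation=0.10)
    threshold = ref.max()                      # known time step sensitivity

    for name, pert in [("rounding-level change", 0.001),
                       ("aggressive-optimization change", 0.50)]:
        test = short_run_rmse(pert)
        verdict = "PASS" if np.median(test) <= threshold else "FAIL"
        print(f"{name}: {verdict}")
    ```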

  2. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    NASA Astrophysics Data System (ADS)

    Bruns, Tim M.; Wagenaar, Joost B.; Bauman, Matthew J.; Gaunt, Robert A.; Weber, Douglas J.

    2013-04-01

    Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability.
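
    A bare-bones stand-in for the linear firing-rate decoder described above, with simulated 50 ms binned rates; a real implementation would be fit on recorded DRG data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_units = 2000, 120                        # 50 ms bins, ~120 DRG units
    pos = np.cumsum(rng.normal(size=(T, 2)), axis=0)    # toy limb position (x, y)
    W = 0.2 * rng.normal(size=(2, n_units))
    rates = np.clip(20.0 + pos @ W + rng.normal(0, 5.0, (T, n_units)), 0.0, None)

    A = np.hstack([rates, np.ones((T, 1))])       # firing rates + bias column
    B, *_ = np.linalg.lstsq(A, pos, rcond=None)   # linear decoding model
    rmse = np.sqrt(np.mean((A @ B - pos) ** 2))
    print("decoding RMSE:", round(float(rmse), 3))
    ```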

  3. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    PubMed Central

    Bruns, Tim M; Wagenaar, Joost B; Bauman, Matthew J; Gaunt, Robert A; Weber, Douglas J

    2013-01-01

Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability. PMID:23503062

  4. Multiphysics modelling of the separation of suspended particles via frequency ramping of ultrasonic standing waves.

    PubMed

    Trujillo, Francisco J; Eberhardt, Sebastian; Möller, Dirk; Dual, Jurg; Knoerzer, Kai

    2013-03-01

A model was developed to determine the local changes in particle concentration and the formation of bands induced by a standing acoustic wave field subjected to a sawtooth frequency ramping pattern. The mass transport equation was modified to incorporate the effect of acoustic forces on the concentration of particles. This was achieved by balancing the forces acting on the particles. The frequency ramping was implemented as a parametric sweep for the time-harmonic frequency response in time steps of 0.1 s. The physics phenomena of piezoelectricity, acoustic fields and diffusion of particles were coupled and solved in COMSOL Multiphysics™ (COMSOL AB, Stockholm, Sweden) following a three-step approach. The first step solves the governing partial differential equations describing the acoustic field by assuming that the pressure field achieves a pseudo steady state. In the second step, the acoustic radiation force is calculated from the pressure field. The final step calculates the locally changing concentration of particles as a function of time by solving the modified equation of particle transport. The diffusivity was calculated as a function of concentration following the Garg and Ruthven equation, which describes the steep increase of diffusivity as the concentration approaches saturation. However, it was found that this steep increase creates numerical instabilities at high voltages (in the piezoelectricity equations) and high initial particle concentrations. The model was simplified to a pseudo one-dimensional case due to computation power limitations. The particle distribution predicted by the model is in good agreement with the experimental data, as it accurately follows the movement of the bands in the centre of the chamber. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
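
    A compact 1-D stand-in for the third step (particle transport under an acoustic drift with concentration-dependent diffusivity); the Garg-Ruthven-style form D(c) = D0/(1 - c/c_sat) and all parameter values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    nx, L = 200, 1e-3                      # cells, chamber width [m]
    dx = L / nx
    x = (np.arange(nx) + 0.5) * dx
    v = -1e-4 * np.sin(4 * np.pi * x / L)  # acoustophoretic drift toward nodes [m/s]
    D0, c_sat = 1e-9, 1.0                  # dilute diffusivity [m^2/s], saturation
    c = 0.2 * np.ones(nx)                  # initial particle volume fraction

    dt = 1e-4                              # explicit step; respects D*dt/dx^2 < 0.5
    for _ in range(20000):                 # 2 s of settling at a fixed frequency
        D = D0 / (1.0 - np.minimum(c, 0.99 * c_sat) / c_sat)  # steep near c_sat
        Di = 0.5 * (D[1:] + D[:-1])                 # interface diffusivities
        vi = 0.5 * (v[1:] + v[:-1])
        cu = np.where(vi > 0.0, c[:-1], c[1:])      # upwind advected value
        flux = vi * cu - Di * np.diff(c) / dx       # advective + diffusive flux
        F = np.concatenate(([0.0], flux, [0.0]))    # zero-flux walls
        c -= dt * np.diff(F) / dx
    print("band peak volume fraction:", round(float(c.max()), 3))
    ```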

  5. Modeling Gross Primary Production of Agro-Forestry Ecosystems by Assimilation of Satellite-Derived Information in a Process-Based Model

    PubMed Central

    Migliavacca, Mirco; Meroni, Michele; Busetto, Lorenzo; Colombo, Roberto; Zenone, Terenzio; Matteucci, Giorgio; Manca, Giovanni; Seufert, Guenther

    2009-01-01

In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We first present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC), with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. Results obtained showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and provided a better description of the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale. PMID:22399948

  6. Modeling gross primary production of agro-forestry ecosystems by assimilation of satellite-derived information in a process-based model.

    PubMed

    Migliavacca, Mirco; Meroni, Michele; Busetto, Lorenzo; Colombo, Roberto; Zenone, Terenzio; Matteucci, Giorgio; Manca, Giovanni; Seufert, Guenther

    2009-01-01

In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We first present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC), with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. Results obtained showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and provided a better description of the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale.

  7. Latent trajectory studies: the basics, how to interpret the results, and what to report.

    PubMed

    van de Schoot, Rens

    2015-01-01

In statistics, tools have been developed to estimate individual change over time. The existence of latent trajectories, where individuals are captured by trajectories that are unobserved (latent), can also be evaluated (Muthén & Muthén, 2000). The methods used to evaluate such trajectories are called Latent Growth Mixture Modeling (LGMM) and Latent Class Growth Analysis (LCGA). The difference between the two models is whether variance within latent classes is allowed for (Jung & Wickrama, 2008). The default approach most often used when estimating such models begins with estimating a single-cluster model, where only a single underlying group is presumed. Next, several additional models are estimated with an increasing number of clusters (latent groups or classes). For each of these models, the software is allowed to estimate all parameters without any restrictions. A final model is chosen based on model comparison tools, for example the BIC, the bootstrapped chi-square test, or the Lo-Mendell-Rubin test. To ease the step-by-step use of LGMM/LCGA, guidelines are presented in this symposium (Van de Schoot, 2015) that researchers can use when applying the methods to longitudinal data, for example the development of posttraumatic stress disorder (PTSD) after trauma (Depaoli, van de Schoot, van Loey, & Sijbrandij, 2015; Galatzer-Levy, 2015). The guidelines include how to use the software Mplus (Muthén & Muthén, 1998-2012) to run the set of models needed to answer the research question: how many latent classes exist in the data? The next step described in the guidelines is how to add covariates/predictors to predict class membership using the three-step approach (Vermunt, 2010). Lastly, they describe what essentials to report in the paper. When applying LGMM/LCGA models for the first time, the guidelines presented can be used to decide which models to run and what to report.
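
    The cluster-enumeration step can be mimicked with any mixture-model tool; a toy sketch using scikit-learn's GaussianMixture as a stand-in for the Mplus workflow (a plain mixture model ignores the growth-curve structure that LGMM/LCGA impose over time):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture  # stand-in, not Mplus

    rng = np.random.default_rng(3)
    # Synthetic 'individuals x repeated measures' data from 3 latent groups
    data = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 4))
                      for m in (0.0, 2.0, 4.0)])

    # Enumeration step: fit 1..5 clusters and compare with BIC
    for k in range(1, 6):
        gm = GaussianMixture(n_components=k, random_state=0).fit(data)
        print(k, "clusters, BIC =", round(gm.bic(data), 1))
    # The model with the lowest BIC would be carried forward to the
    # covariate/three-step stage described in the abstract.
    ```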

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetti, Laura; Amato, Alessia; Fonti, Viviana

Highlights: • End-of-life LCD panels represent a source of indium. • Several experimental conditions for indium leaching have been assessed. • Indium is completely extracted with 2 M sulfuric acid at 80 °C for 10 min. • Cross-current leaching improves indium extraction and operating costs are lowered. • Benefits to the environment come from reduction of CO₂ emissions and reagents use. - Abstract: Indium is a critical element mainly produced as a by-product of zinc mining, and it is largely used in the production process of liquid crystal display (LCD) panels. End-of-life LCDs represent a possible source of indium in the field of urban mining. In the present paper, we apply, for the first time, cross-current leaching to mobilize indium from end-of-life LCD panels. We carried out a series of treatments to leach indium. The best leaching conditions for indium were 2 M sulfuric acid at 80 °C for 10 min, which allowed us to completely mobilize indium. Taking into account the low content of indium in end-of-life LCDs, of about 100 ppm, a single step of leaching is not cost-effective. We tested 6 steps of cross-current leaching: in the first step indium leaching was complete, whereas in the second step it was in the range of 85-90%, and with 6 steps it was about 50-55%. Indium concentration in the leachate was about 35 mg/L after the first step of leaching, almost 2-fold at the second step and about 3-fold at the fifth step. We then hypothesized scaling up the process of cross-current leaching to 10 steps, followed by cementation with zinc to recover indium. In this simulation, the process of indium recovery was advantageous from an economic and environmental point of view. Indeed, cross-current leaching allowed indium to be concentrated, reagents to be saved, and CO₂ emissions to be reduced (with 10 steps we assessed that the emission of about 90 kg CO₂-eq. could be avoided) thanks to the recovery of indium. This new strategy represents a useful approach for secondary production of indium from waste LCD panels.
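
    A back-of-the-envelope sketch of why cross-current leaching concentrates indium: the same leachate contacts fresh batches of waste, so dissolved metal accumulates while per-step extraction efficiency decays. The efficiencies below are rough figures read from the abstract (intermediate steps interpolated); batch size and liquid volume are invented:

    ```python
    # Per-step extraction efficiencies, roughly from the abstract (steps 3-5
    # interpolated between the reported 85-90% and 50-55% figures).
    eff = [1.00, 0.875, 0.79, 0.70, 0.61, 0.525]
    indium_per_batch = 100e-6 * 1.0     # 100 ppm in a 1 kg batch of LCD powder [kg]
    liquid_volume = 2.0                 # litres of recycled leachate (invented)

    dissolved = 0.0
    for step, e in enumerate(eff, start=1):
        dissolved += e * indium_per_batch       # same liquid meets a fresh batch
        conc = dissolved / liquid_volume * 1e6  # kg/L -> mg/L
        print(f"step {step}: In in leachate = {conc:.0f} mg/L")
    ```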

  9. Time-gated real-time pump-probe imaging spectroscopy

    NASA Astrophysics Data System (ADS)

    Ferrari, Raffaele; D'Andrea, Cosimo; Bassi, Andrea; Valentini, Gianluca; Cubeddu, Rinaldo

    2007-07-01

An experimental technique which allows one to perform pump-probe transient absorption spectroscopy in real time is an important tool for studying irreversible processes. This is particularly interesting in the case of biological samples, which easily deteriorate upon exposure to light pulses, with the formation of permanent photoproducts and structural changes. In particular, pump-probe spectroscopy can provide fundamental information for the design of optical chromophores. In this work a real-time pump-probe imaging spectroscopy system has been realized, and we have explored the possibility of further reducing the number of laser pulses by using a time-gated camera. We believe that the use of a time-gated camera can provide an important step towards the final goal of pump-probe single-shot spectroscopy.

  10. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed into a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, as a compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A 'modified' Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  11. Stair ascent with an innovative microprocessor-controlled exoprosthetic knee joint.

    PubMed

    Bellmann, Malte; Schmalz, Thomas; Ludwigs, Eva; Blumentritt, Siegmar

    2012-12-01

    Climbing stairs can pose a major challenge for above-knee amputees as a result of compromised motor performance and limitations of prosthetic design. A new, innovative microprocessor-controlled prosthetic knee joint, the Genium, incorporates a function that allows an above-knee amputee to climb stairs step over step. To execute this function, a number of different sensors and complex switching algorithms were integrated into the prosthetic knee joint. The function is intuitive for the user. A biomechanical study was conducted to assess objective gait measurements and calculate joint kinematics and kinetics as subjects ascended stairs. Results demonstrated that climbing stairs step over step is biomechanically more efficient for an amputee using the Genium prosthetic knee than the previously possible conventional method, in which the extended prosthesis is trailed as the amputee executes one or two steps at a time. Loading of the residual musculoskeletal system remains close to natural, and the healthy contralateral side has been shown to support the movements of the amputated side. The mechanical power that the healthy contralateral knee joint needs to generate during the extension phase is also reduced. Similarly, there is near-normal loading of the hip joint on the amputated side.

  12. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
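
    A minimal Python sketch of the screening step (the method of elementary effects) may help; model() is a toy stand-in, not an LCA model, and all settings are illustrative:

        import numpy as np

        def model(x):                     # toy stand-in for the LCA calculation
            return x[0] + 2 * x[1] ** 2 + 0.1 * x[2] + 0.01 * x[2] * x[3]

        def elementary_effects(f, k, trajectories=50, delta=0.25, seed=0):
            rng = np.random.default_rng(seed)
            effects = [[] for _ in range(k)]
            for _ in range(trajectories):
                x = rng.uniform(0, 1 - delta, size=k)    # random base point
                for i in rng.permutation(k):             # one-at-a-time walk
                    x_new = x.copy()
                    x_new[i] += delta
                    effects[i].append((f(x_new) - f(x)) / delta)
                    x = x_new
            # mu* (mean |EE|) ranks importance; sigma flags nonlinearity/interactions
            return [(np.mean(np.abs(e)), np.std(e)) for e in effects]

        for i, (mu_star, sd) in enumerate(elementary_effects(model, k=4)):
            print(f"x{i}: mu* = {mu_star:.3f}, sigma = {sd:.3f}")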

  13. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it at the same time retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are assessed. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM), called the Energy Conserving Semi-Implicit Method (ECSIM). • The novelty of the new method is that, unlike any of its predecessors, it at the same time retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.
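
    For contrast, the explicit computational cycle that ECSIM retains (deposit, field solve, gather, push) looks as follows in a minimal 1-D electrostatic PIC sketch; ECSIM replaces the Poisson solve with its implicit, exactly energy-conserving field update, which is beyond this illustration, and all parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        L, ng, npart, dt = 2 * np.pi, 64, 10000, 0.05
        dx = L / ng
        x = rng.uniform(0, L, npart)             # electron positions
        v = rng.normal(0.0, 0.1, npart)          # electron velocities
        qp = -L / npart                          # particle charge (background n0 = 1)

        for step in range(200):
            cells = (x / dx).astype(int) % ng
            # deposit: nearest-grid-point charge against a neutralizing background
            rho = 1.0 + qp * np.bincount(cells, minlength=ng) / dx
            # field solve: div E = rho, done in Fourier space
            k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
            k[0] = 1.0
            E_hat = np.fft.fft(rho) / (1j * k)
            E_hat[0] = 0.0                       # zero-mean field
            E = np.fft.ifft(E_hat).real
            # gather + leapfrog push, q/m = -1 for electrons
            v += -E[cells] * dt
            x = (x + v * dt) % L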

  14. Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huen, T.

    1987-07-01

    A solid-state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross-timing, the latter achieved with exposure modulation marking onto the time tick marks. The purpose of using two time scales is discussed. The design is based on a microcomputer, resulting in a compact and easy-to-use instrument. The light source is a small red light-emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided-down 10 MHz system frequency. The light is guided by two small 100-micron-diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere within the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our laboratory. The microcomputer control section is also being used to provide optical fiducials to mechanical rotor cameras.

  15. Head movement compensation in real-time magnetoencephalographic recordings.

    PubMed

    Little, Graham; Boe, Shaun; Bardouille, Timothy

    2014-01-01

    Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess the validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
    • Data acquisition
    • Head position estimation
    • Source localization
    • Real-time source estimation
    This work explains the technical details and validates each of these steps.
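
    The core of the real-time correction step can be sketched in a few lines: each incoming data sample is projected onto a forward model (lead field) recomputed for the current head position, so the amplitude estimate follows the source as the head moves. The leadfield() below is a hypothetical stand-in, not the actual MEG forward model:

        import numpy as np

        def leadfield(head_pos, n_sensors=100):
            # stand-in forward model: the sensor pattern shifts with head position
            return np.sin(np.arange(n_sensors) * 0.1 + head_pos)

        def rtse_sample(b, head_pos):
            """Least-squares amplitude of one source for one data sample b."""
            L = leadfield(head_pos)
            return L @ b / (L @ L)    # pseudo-inverse of a one-column lead field

        rng = np.random.default_rng(2)
        true_amp = 5.0
        for t, pos in enumerate(np.linspace(0.0, 0.3, 5)):   # head drifts slowly
            b = true_amp * leadfield(pos) + 0.1 * rng.standard_normal(100)
            print(f"sample {t}: estimated amplitude = {rtse_sample(b, pos):.2f}")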

  16. Simple Backdoors on RSA Modulus by Using RSA Vulnerability

    NASA Astrophysics Data System (ADS)

    Sun, Hung-Min; Wu, Mu-En; Yang, Cheng-Ta

    This investigation proposes two methods for embedding backdoors in the RSA modulus N=pq rather than in the public exponent e. This strategy not only permits manufacturers to embed backdoors in an RSA system, but also allows users to choose any desired public exponent, such as e=2¹⁶+1, to ensure efficient encryption. This work utilizes a lattice attack and an exhaustive attack to embed backdoors in the two proposed methods, called RSASBLT and RSASBES, respectively. Both approaches involve straightforward steps, making their running time roughly the same as normal RSA key-generation time, implying that no one can detect the backdoor by observing timing disparities.

  17. Modifying a standard method allows simultaneous extraction of RNA and protein, enabling detection of enzymes in the rat retina with low expressions and protein levels.

    PubMed

    Agardh, Elisabet; Gustavsson, Carin; Hagert, Per; Nilsson, Marie; Agardh, Carl-David

    2006-02-01

    The aim of the study was to evaluate messenger RNA and protein expression in limited amounts of tissue with low protein content. The Chomczynski method for simultaneous extraction of RNA and protein was modified in the protein isolation step. Template mass and cycling time for the complementary DNA synthesis step of real-time reverse transcription-polymerase chain reaction (RT-PCR) for analysis of catalase, copper/zinc superoxide dismutase, manganese superoxide dismutase, the catalytic subunit of glutamylcysteine ligase, glutathione peroxidase 1, and the endogenous control cyclophilin B (CypB) were optimized before PCR. Polymerase chain reaction accuracy and efficacy were demonstrated by calculating the regression (R²) values of the separate amplification curves. Appropriate antibodies, blocking buffers, and running conditions were established for Western blot, and protein detection and multiplex assays with CypB were performed for each target. During the extraction procedure, the protein phase was dissolved in a modified washing buffer containing 0.1% sodium dodecyl sulfate, followed by ultrafiltration. Quantification of enzyme expression by real-time RT-PCR was accomplished with high reliability and reproducibility (R², 0.990-0.999), and all enzymes except glutathione peroxidase 1 were detectable in individual retinas on Western blot. Western blot multiplexing with CypB was possible for all targets. In conclusion, connecting gene expression directly to protein levels in the individual rat retina was possible by simultaneous extraction of RNA and protein. Real-time RT-PCR and Western blot allowed accurate detection of retinal protein expression and levels.

  18. Prototyping a new, high-temperature SQUID magnetometer system

    NASA Astrophysics Data System (ADS)

    Grappone, J. Michael; Shaw, John; Biggin, Andrew J.

    2017-04-01

    High-sensitivity Superconducting Quantum Interference Devices (SQUIDs) and μ-metal shielding have largely solved paleomagnetic noise problems. Combining the two allows successful measurements of previously unusable samples, generally sediments with very weak (<10 pAm²) magnetizations. The improved sensitivity increases the fidelity of magnetic field variation surveys, but surveys continue to be somewhat slow. SQUIDs have historically been expensive to buy and operate, but technological advances now allow them to operate at liquid nitrogen temperatures (77 K), drastically reducing their costs. Step-wise thermal paleomagnetic studies cause large lag times during later steps as a result of heating from and cooling back to room temperature for measurements. If the cooling step is removed entirely, however, the lag time drops by at least half. Available magnetometers currently provide either SQUID-level (0.1-1 pAm²) sensitivity or continuous heating. Combining a SQUID magnetometer with a high-temperature oven is the logical next step to uncover the mysteries of the paleofield. However, the few instruments that currently offer high-temperature capabilities with noise levels approaching 10 pAm² require either spinning or vibrating the sample, necessitating additional handling and potentially causing damage to the sample. Two primary factors have plagued previous developments: noise levels and temperature gradients. Our entire system is shielded from the environment using 4 layers of μ-metal. Our sample oven (designed for 7 mm diameter samples) sits inside a copper pipe and operates at high-frequency AC voltages. The high-frequency (10 kHz) AC current reduces the skin depth of radio-frequency (RF) electromagnetic noise, which allows the 2 mm-thick copper shielding to reduce RF noise by ~94%, leaving a residual field of ~1.5 nT at the SQUID's location, 14.9 mm from the oven. A computer-controlled Eurotherm 3216 thermal controller regulates the temperature within ±0.5 °C. To reach 700 °C, just above the Curie temperature of hematite, a temperature difference of nearly 900 °C between the sample and the SQUID is required. Since dipole fields decay rapidly with distance (∝ r⁻³), the equipment is designed to handle temperature gradients above 500 °C cm⁻¹ for maximum sensitivity, using a passive double-vacuum separation system. All the parts used are commercially available, which helps reduce operating costs and increase versatility.
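
    Two of the quoted numbers can be checked with textbook formulas (our own back-of-envelope arithmetic, not the authors' design calculation):

        import numpy as np

        mu0 = 4e-7 * np.pi
        rho_cu = 1.68e-8                   # copper resistivity, ohm*m
        f = 10e3                           # oven drive frequency, Hz
        delta = np.sqrt(2 * rho_cu / (2 * np.pi * f * mu0))    # skin depth
        reduction = 1 - np.exp(-2e-3 / delta)                  # through 2 mm Cu
        print(f"skin depth {delta * 1e3:.2f} mm, RF reduction {reduction:.0%}")
        # ~0.65 mm and ~95%, consistent with the quoted ~94%

        # dipole fields fall off as r^-3, so doubling the 14.9 mm standoff
        # would cut the oven's stray field at the SQUID by a factor of 8
        print((1 / 2) ** 3)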

  19. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock-capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus reducing the computational cost. The test problems cover one- and two-dimensional, steady-state and time-accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
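
    The advantage is easiest to see on the stiff scalar test equation y' = lambda*y with lambda = -1000: explicit Euler is stable only for time steps below 2/|lambda|, while backward Euler damps the solution for any step. A minimal illustration (a toy problem, not the Versatile Advection Code schemes themselves):

        import numpy as np

        lam, dt, steps = -1000.0, 0.01, 10    # dt is 5x the explicit stability limit
        y_exp = y_imp = 1.0
        for _ in range(steps):
            y_exp = y_exp + dt * lam * y_exp   # explicit Euler: blows up
            y_imp = y_imp / (1 - dt * lam)     # backward Euler: decays
        print(f"explicit {y_exp:.3e}, implicit {y_imp:.3e}, "
              f"exact {np.exp(lam * dt * steps):.3e}")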

  20. Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm

    NASA Astrophysics Data System (ADS)

    Milecki, Andrzej; Ortmann, Jarosław

    2017-11-01

    The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow in proportion to the sum or difference of the motors' step counts. The valve design and the principle of its operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, the hydraulic valve, and the cylinder. The main features of the valve and drive operation are described; some specific problem areas concerning the nature of stepping motors and their differential operation in the valve are also considered. A non-linear model of the whole servo drive is proposed and used for simulation investigations. Initial simulations of the drive with the new valve showed a significant overshoot in the drive step response, which is not acceptable in a positioning process. Therefore, additional effort is spent on reducing the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is then tested and further improved in simulations. The model is subsequently implemented in hardware and the whole servo drive system is tested. The investigation results presented in this paper show an overshoot-free positioning process that enables high positioning accuracy.

  1. Time resolved infrared studies of C-H bond activation by organometallics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asplund, M.C.

    This work describes how step-scan Fourier transform infrared spectroscopy and visible and near-infrared ultrafast lasers have been applied to the study of the photochemical activation of C-H bonds in organometallic systems, which allows for the selective breaking of C-H bonds in alkanes. The author has established the photochemical mechanism of C-H activation by Tp*Rh(CO)₂ (Tp* = HB-Pz*₃, Pz = 3,5-dimethylpyrazolyl) in alkane solution. The initially formed monocarbonyl forms a weak solvent complex, which undergoes a change in Tp* ligand connectivity. The final C-H bond-breaking step occurs on different time scales depending on the structure of the alkane: in linear solvents the time scale is <50 ns, and in cyclic alkanes it is ≈200 ps. The reactivity of the Tp*Rh(CO)₂ system has also been studied in aromatic solvents. Here the reaction proceeds through two different pathways, with very different time scales. The first proceeds in a manner analogous to alkanes and takes <50 ns. The second proceeds through a Rh-C-C complex and takes place on a time scale of 1.8 μs.

  2. Automation in the Teaching of Descriptive Geometry and CAD. High-Level CAD Templates Using Script Languages

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to improve the learning of technical drawing and descriptive geometry by applying automated processes, assisted by high-level CAD templates (HLCts), to exercises that are traditionally solved manually. Given that an exercise can be solved step by step with traditional procedures, as detailed in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and then generalize the solution by incorporating references. Traditional teaching content has largely been relegated in current curricula, yet it can still be applied in certain automation processes. The use of geometric references (as variables in script languages) and their incorporation into HLCts allows the automation of drawing processes. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users can employ HLCts to generate future variants of these exercises. This paper introduces the automation process for generating exercises based on CAD script files, aided by parametric geometry calculation tools. The proposed method allows us to design new exercises without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning. Automation in the generation of exercises not only saves time but also increases the quality of the problem statements and reduces the possibility of human error.

  3. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H₂, H₂O₂, and e⁻aq. After their creation, these species diffuse and may react chemically with neighboring species and with molecules of the medium. Radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry use the Independent Reaction Times (IRT) method, a very fast technique to calculate radiochemical yields, which however does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
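
    For the fully diffusion-controlled case, the IRT sampling idea fits in a few lines, using the standard Green's-function result that a pair starting at separation r0, with encounter radius R and mutual diffusion coefficient D, has reacted by time t with probability W(t) = (R/r0) erfc((r0 - R)/sqrt(4Dt)); partially diffusion-controlled reactions need the more involved expressions discussed in the paper. Parameter values below are illustrative:

        import numpy as np
        from scipy.special import erfcinv

        def sample_reaction_time(r0, R, D, rng):
            u = rng.random()
            if u >= R / r0:              # pair escapes and never reacts
                return np.inf
            x = erfcinv(u * r0 / R)      # invert W(t) = u for t
            return ((r0 - R) / (2 * x)) ** 2 / D

        rng = np.random.default_rng(0)
        times = [sample_reaction_time(1.0e-9, 0.5e-9, 1.0e-9, rng)
                 for _ in range(10000)]
        frac = np.isfinite(times).mean()
        print(f"reaction fraction {frac:.2f} (theory R/r0 = 0.50)")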

  4. A continuous arc delivery optimization algorithm for CyberKnife m6.

    PubMed

    Kearney, Vasant; Descovich, Martina; Sudhyadhom, Atchar; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D

    2018-06-01

    This study aims to reduce the delivery time of CyberKnife m6 treatments by allowing for noncoplanar continuous arc delivery. To achieve this, a novel noncoplanar continuous arc delivery optimization algorithm was developed for the CyberKnife m6 treatment system (CyberArc-m6). CyberArc-m6 uses a five-step overarching strategy, in which an initial set of beam geometries is determined, the robotic delivery path is calculated, direct aperture optimization is conducted, intermediate MLC configurations are extracted, and the final beam weights are computed for the continuous arc radiation source model. This algorithm was applied to five prostate and three brain patients, previously planned using a conventional step-and-shoot CyberKnife m6 delivery technique. The dosimetric quality of the CyberArc-m6 plans was assessed using locally confined mutual information (LCMI), conformity index (CI), heterogeneity index (HI), and a variety of common clinical dosimetric objectives. Using conservative optimization tuning parameters, CyberArc-m6 plans achieved an average CI difference of 0.036 ± 0.025, an average HI difference of 0.046 ± 0.038, and an average LCMI of 0.920 ± 0.030 compared with the original CyberKnife m6 plans. Including a 5 s per minute image alignment time and a 5-min setup time, conservative CyberArc-m6 plans achieved an average treatment delivery speed-up of 1.545× ± 0.305× compared with step-and-shoot plans. The CyberArc-m6 algorithm achieved dosimetrically similar plans compared to their step-and-shoot CyberKnife m6 counterparts, while simultaneously reducing treatment delivery times. © 2018 American Association of Physicists in Medicine.

  5. Rubbing time and bonding performance of one-step adhesives to primary enamel and dentin.

    PubMed

    Botelho, Maria Paula Jacobucci; Isolan, Cristina Pereira; Schwantz, Júlia Kaster; Lopes, Murilo Baena; Moraes, Rafael Ratto de

    2017-01-01

    This study investigated whether increasing the concentration of acidic monomers in one-step adhesives would allow their application time to be reduced without compromising bonding to primary enamel and dentin. Experimental one-step self-etch adhesives were formulated with 5 wt% (AD5), 20 wt% (AD20), or 35 wt% (AD35) acidic monomer. The adhesives were applied using a rubbing motion for 5, 10, or 20 s. Bond strengths to primary enamel and dentin were tested under shear stress. A commercial etch-and-rinse adhesive (Single Bond 2; 3M ESPE) served as a reference. Scanning electron microscopy was used to observe the morphology of bonded interfaces. Data were analysed at p<0.05. In enamel, AD35 had higher bond strength when rubbed for at least 10 s, while application for 5 s generated lower bond strength. In dentin, increased acidic monomer improved bonding only for the 20 s rubbing time. The etch-and-rinse adhesive yielded higher bond strength to enamel and similar bonding to dentin as compared with the self-etch adhesives. The adhesive layer was thicker and more irregular for the etch-and-rinse material, with no appreciable differences among the self-etch systems. Overall, increasing the acidic monomer concentration only led to an increase in bond strength to enamel when the rubbing time was at least 10 s. In dentin, despite the increase in bond strength with longer rubbing times, the results favoured the experimental adhesives compared to the conventional adhesive. Reduced rubbing time of self-etch adhesives should be avoided in the clinical setting.

  6. The synchronisation of lower limb responses with a variable metronome: the effect of biomechanical constraints on timing.

    PubMed

    Chen, Hui-Ya; Wing, Alan M; Pratt, David

    2006-04-01

    Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks, unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, in terms of the temporal difference between the response and the metronome pulse, i.e. asynchrony, and the standard deviation of the asynchronies and interresponse intervals of steady state synchronisation. The speed of compensation decreased with increase in the demands of maintaining balance. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
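
    The "first-order linear correction account" referred to above can be sketched as a simple simulation in which each asynchrony is reduced by a fraction alpha on the next response; a smaller alpha mimics the slower compensation seen under higher balance demands. All values are illustrative:

        import numpy as np

        def simulate(alpha, shift_at=20, shift=0.05, n=40, noise=0.005, seed=0):
            rng = np.random.default_rng(seed)
            A = np.zeros(n)                   # asynchronies (response - pulse), s
            for k in range(1, n):
                A[k] = (1 - alpha) * A[k - 1] + noise * rng.standard_normal()
                if k == shift_at:
                    A[k] += shift             # metronome phase shift
            return A

        for alpha, condition in [(0.7, "unilateral tapping"), (0.3, "stepping")]:
            A = simulate(alpha)
            n_recover = int(np.argmax(np.abs(A[20:]) < 0.01))
            print(f"{condition}: alpha={alpha}, ~{n_recover} responses to re-synchronise")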

  7. Rapid establishment of thermophilic anaerobic microbial community during the one-step startup of thermophilic anaerobic digestion from a mesophilic digester.

    PubMed

    Tian, Zhe; Zhang, Yu; Li, Yuyou; Chi, Yongzhi; Yang, Min

    2015-02-01

    The purpose of this study was to explore how fast the thermophilic anaerobic microbial community could be established during the one-step startup of thermophilic anaerobic digestion from a mesophilic digester. Stable thermophilic anaerobic digestion was achieved within 20 days from a mesophilic digester treating sewage sludge by adopting the one-step startup strategy. The succession of archaeal and bacterial populations over a period of 60 days after the temperature increment was followed by using 454-pyrosequencing and quantitative PCR. After the increase of temperature, thermophilic methanogenic community was established within 11 days, which was characterized by the fast colonization of Methanosarcina thermophila and two hydrogenotrophic methanogens (Methanothermobacter spp. and Methanoculleus spp.). At the same time, the bacterial community was dominated by Fervidobacterium, whose relative abundance rapidly increased from 0 to 28.52 % in 18 days, followed by other potential thermophilic genera, such as Clostridium, Coprothermobacter, Anaerobaculum and EM3. The above result demonstrated that the one-step startup strategy could allow the rapid establishment of the thermophilic anaerobic microbial community. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Effect of temporal location of correction of monochromatic aberrations on the dynamic accommodation response

    PubMed Central

    Hampson, Karen M.; Chin, Sem Sem; Mallen, Edward A. H.

    2010-01-01

    Dynamic correction of monochromatic aberrations of the eye is known to affect the accommodation response to a step change in stimulus vergence. We used an adaptive optics system to determine how the temporal location of the correction affects the response. The system consists of a Shack-Hartmann sensor sampling at 20 Hz and a 37-actuator piezoelectric deformable mirror. An extra sensing channel allows for an independent measure of the accommodation level of the eye. The accommodation response of four subjects was measured during a ±0.5 D step change in stimulus vergence whilst aberrations were corrected at various time locations. We found that continued correction of aberrations after the step change decreased the gain for disaccommodation, but increased the gain for accommodation. These results could be explained based on the initial lag of accommodation to the stimulus and changes in the level of aberrations before and after the stimulus step change. Future considerations for investigations of the effect of monochromatic aberrations on the dynamic accommodation response are discussed. PMID:21258515

  9. 49 CFR 40.63 - What steps does the collector take in the collection process before the employee provides a urine...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... Either you or the employee, with both of you present, must unwrap or break the seal of the collection container. You must not unwrap or break the seal on any specimen bottle at this time. You must not allow the... direct observation and the reason for doing so. [65 FR 79526, Dec. 19, 2000, as amended at 75 FR 59107...

  10. Compound Separation

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Jet Propulsion Laboratory developed a new one-step liquid-liquid extraction technique which cuts processing time, reduces costs and eliminates much of the equipment required. Technique employs disposable extraction columns, originally developed as an aid to the Los Angeles Police Department, which allow more rapid detection of drugs as part of the department's drug abuse program. Applications include medical treatment, pharmaceutical preparation and forensic chemistry. NASA waived title to Caltech, and Analytichem International is producing Extubes under Caltech license.

  11. Execution monitoring for a mobile robot system

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1990-01-01

    Due to sensor errors, uncertainty, incomplete knowledge, and a dynamic world, robot plans will not always be executed exactly as planned. This paper describes an implemented robot planning system that enhances the traditional sense-think-act cycle in ways that allow the robot system to monitor its behavior and react to emergencies in real time. A proposal on how robot systems can break away completely from the traditional three-step cycle is also made.
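
    A toy version of such an enhanced cycle, with all sensing and actions as hypothetical stand-ins, shows where the monitor sits: it compares each step's sensed outcome with the plan's expectations and triggers reactive behavior immediately, rather than waiting for the next planning cycle:

        import random

        def sense():
            return {"obstacle": random.random() < 0.2}

        def monitor(state, expectation):
            # world diverged from the plan's model -> react now
            if state["obstacle"] and not expectation["obstacle_expected"]:
                return "REPLAN"
            return "OK"

        random.seed(1)
        plan = [{"action": "forward", "obstacle_expected": False} for _ in range(5)]
        step = 0
        while step < len(plan):
            state = sense()
            if monitor(state, plan[step]) == "REPLAN":
                print(f"step {step}: unexpected obstacle -> insert avoidance step")
                plan.insert(step, {"action": "avoid", "obstacle_expected": True})
            else:
                print(f"step {step}: executing {plan[step]['action']}")
                step += 1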

  12. Magnetic Resonance Imaging-Guided Adaptive Radiation Therapy: A "Game Changer" for Prostate Treatment?

    PubMed

    Pathmanathan, Angela U; van As, Nicholas J; Kerkmeijer, Linda G W; Christodouleas, John; Lawton, Colleen A F; Vesprini, Danny; van der Heide, Uulke A; Frank, Steven J; Nill, Simeon; Oelfke, Uwe; van Herk, Marcel; Li, X Allen; Mittauer, Kathryn; Ritter, Mark; Choudhury, Ananya; Tree, Alison C

    2018-02-01

    Radiation therapy to the prostate involves increasingly sophisticated delivery techniques and changing fractionation schedules. With a low estimated α/β ratio, a larger dose per fraction would be beneficial, with moderate fractionation schedules rapidly becoming a standard of care. The integration of a magnetic resonance imaging (MRI) scanner and linear accelerator allows for accurate soft tissue tracking with the capacity to replan for the anatomy of the day. Extreme hypofractionation schedules become a possibility using the potentially automated steps of autosegmentation, MRI-only workflow, and real-time adaptive planning. The present report reviews the steps involved in hypofractionated adaptive MRI-guided prostate radiation therapy and addresses the challenges for implementation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Temporal differentiation and the optimization of system output

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel

    2008-01-01

    We develop two simplified dynamical models with which to explore the conditions under which temporal differentiation leads to increased system output. By temporal differentiation, we mean a division of labor whereby different subtasks associated with performing a given task are done at different times. The idea is that, by focusing on one particular set of subtasks at a time, it is possible to increase the efficiency with which each subtask is performed, thereby allowing for faster completion of the overall task. In the first model, we consider the filling and emptying of a tank in the presence of a time-varying resource profile. If a given resource is available, the tank may be filled at some rate rf. As long as the tank contains a resource, it may be emptied at a rate re, corresponding to processing into some product, which is either the final product of a process or an intermediate that is transported for further processing. Given a resource-availability profile over some time interval T, we develop an algorithm for determining the fill-empty profile that produces the maximum quantity of processed resource at the end of the time interval. We rigorously prove that the basic algorithm is one where the tank is filled when a resource is available and emptied when a resource is not available. In the second model, we consider a process whereby some resource is converted into some final product in a series of three agent-mediated steps. Temporal differentiation is incorporated by allowing the agents to oscillate between performing the first two steps and performing the last step. We find that temporal differentiation is favored when the number of agents is at intermediate values and when there are process intermediates that have long lifetimes compared to other characteristic time scales in the system. Based on these results, we speculate that temporal differentiation may provide an evolutionary basis for the emergence of phenomena such as sleep, distinct REM and non-REM sleep states, and circadian rhythms in general. The essential argument is that in sufficiently complex biological systems, a maximal amount of information and tasks can be processed and completed if the system follows a temporally differentiated “work plan,” whereby the system focuses on one or a few tasks at a time.
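
    The proven fill-empty rule for the first model translates directly into code: fill whenever the resource is available, process whenever it is not. The profile, rates, and capacity below are illustrative, and resource left in the tank at time T counts as unprocessed:

        def processed_output(availability, rf=1.0, re=0.8, capacity=10.0):
            level, output = 0.0, 0.0
            for available in availability:        # one entry per time step
                if available:
                    level = min(capacity, level + rf)   # fill at rate rf
                elif level > 0:
                    drained = min(level, re)            # empty at rate re
                    level -= drained
                    output += drained
            return output

        profile = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0]   # resource on/off
        print(f"processed by T={len(profile)}: {processed_output(profile):.1f}")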

  14. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps, leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. Adequate agreement with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
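
    The subiteration structure is easiest to see on a scalar model problem du/dt = f(u): each physical step is converged by iterating in pseudo-time on the residual f(v) - (v - u_n)/dt, a bare-bones stand-in for the artificial-compressibility and line-relaxation machinery described above:

        import numpy as np

        f = lambda u: -u                  # model "flow residual"
        u, dt, dtau = 1.0, 0.1, 0.05
        for n in range(10):               # physical time steps
            v = u
            for _ in range(20):           # pseudo-time subiterations
                v = v + dtau * (f(v) - (v - u) / dt)
            u = v                         # converged backward-Euler step
        print(f"u(1.0) = {u:.4f}, exact = {np.exp(-1.0):.4f}")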

  15. Detonation Diffraction in a Multi-Step Channel

    DTIC Science & Technology

    2010-12-01

    openings. This allowed the detonation wave diffraction transmission limits to be determined for hydrogen/air mixtures and to better understand... imaging systems to provide shock wave detail and velocity information. The images were observed through a newly designed explosion-proof optical section... stepped openings.

  16. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    PubMed

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
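
    The distributions themselves are easy to sample by direct simulation of a one-step process; the sketch below runs a neutral Moran process to absorption (a brute-force check, not the spectral method of the paper; parameters are illustrative):

        import numpy as np

        def fixation_time(N=50, i0=1, rng=None):
            rng = rng or np.random.default_rng()
            i, t = i0, 0.0
            while 0 < i < N:
                rate = i * (N - i) / N**2                # neutral birth = death rate
                t += rng.exponential(1.0 / (2 * rate))   # Gillespie waiting time
                i += 1 if rng.random() < 0.5 else -1
            return t, i == N

        rng = np.random.default_rng(0)
        runs = [fixation_time(rng=rng) for _ in range(2000)]
        fixed = np.array([t for t, hit in runs if hit])
        print(f"fixation prob ~ {len(fixed)/len(runs):.3f} (theory 1/N = 0.020)")
        print(f"mean {fixed.mean():.0f} vs median {np.median(fixed):.0f}: skewed")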

  17. Brownian dynamics simulations with stiff finitely extensible nonlinear elastic-Fraenkel springs as approximations to rods in bead-rod models.

    PubMed

    Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G

    2006-01-28

    A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.
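
    The spring law itself, written from the description above (parameter values are illustrative, not the authors'), makes clear why the connector behaves like a nearly rigid rod: the tension grows without bound as the length Q leaves a narrow window around the natural length sigma:

        import numpy as np

        def fene_fraenkel_force(Q, H=1.0, sigma=1.0, delta=0.01):
            """Tension of a FENE-Fraenkel spring of length Q (|Q - sigma| < delta)."""
            ext = (Q - sigma) / delta
            return H * (Q - sigma) / (1.0 - ext ** 2)  # diverges as |ext| -> 1

        Q = np.array([0.991, 0.996, 1.0, 1.004, 1.009])
        print(fene_fraenkel_force(Q))   # steep restoring tension near the window edges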

  18. MO-D-213-01: Workflow Monitoring for a High Volume Radiation Oncology Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laub, S; Dunn, M; Galbreath, G

    2015-06-15

    Purpose: Implement a center-wide communication system that increases interdepartmental transparency and accountability while decreasing redundant work and treatment delays by actively monitoring treatment planning workflow. Methods: Intake Management System (IMS), a program developed by ProCure Treatment Centers Inc., is a multi-function database that stores treatment planning process information. It was devised to work with the oncology information system (Mosaiq) to streamline interdepartmental workflow. Each step in the treatment planning process is visually represented, and timelines for completion of individual tasks are established within the software. The currently active step of each patient’s planning process is highlighted either red or green according to whether the initially allocated amount of time has passed for the given process. This information is displayed as a Treatment Planning Process Monitor (TPPM), which is shown on screens in the relevant departments throughout the center. This display also includes the individuals who are responsible for each task. IMS is driven by Mosaiq’s quality checklist (QCL) functionality. Each step in the workflow is initiated by a Mosaiq user sending the responsible party a QCL assignment. IMS is connected to Mosaiq, and the sending or completing of a QCL updates the associated field in the TPPM to the appropriate status. Results: Approximately one patient a week is identified during the workflow process as needing to have his/her treatment start date modified or resources re-allocated to address the most urgent cases. Being able to identify a realistic timeline for planning each patient, and having multiple departments communicate their limitations and time constraints, allows quality plans to be developed and implemented without overburdening any one department. Conclusion: Monitoring the progression of the treatment planning process has increased transparency between departments, which enables efficient communication. Built-in timelines allow easy prioritization of tasks and resources and facilitate effective time management.

  19. Brownian dynamics simulations with stiff finitely extensible nonlinear elastic-Fraenkel springs as approximations to rods in bead-rod models

    NASA Astrophysics Data System (ADS)

    Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G.

    2006-01-01

    A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.

  20. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background: While an update rate of 30 Hz is considered adequate for real-time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects at these very high rates, especially when large nonlinear deformations and complex nonlinear material properties are involved, is one of the most challenging tasks in the development of real-time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods: In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data are condensed into a set of coefficients describing the neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results: We present realistic simulation examples from interactive surgical simulation with real-time force feedback, including a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training toolbox. Conclusions: A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computation step allows training of neural networks which may be used in real-time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
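
    The offline-train/online-evaluate split at the heart of the method fits in a few lines; a 1-D toy response stands in for the FEM displacement fields, and the centers and widths are illustrative choices:

        import numpy as np

        # offline: "precomputed FEM responses" at prescribed displacements
        x_train = np.linspace(0, 1, 40)[:, None]
        y_train = 0.1 * np.sin(2 * np.pi * x_train[:, 0])

        centers, width = np.linspace(0, 1, 10)[:, None], 0.15
        def phi(x):                                  # Gaussian RBF features
            return np.exp(-((x - centers.T) ** 2) / (2 * width ** 2))

        w = np.linalg.lstsq(phi(x_train), y_train, rcond=None)[0]   # training

        # online: evaluating the network is one small matrix product,
        # cheap enough for the ~1 kHz update rates haptics requires
        x_q = np.array([[0.33]])
        print(phi(x_q) @ w, 0.1 * np.sin(2 * np.pi * 0.33))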

  1. Simulation methods with extended stability for stiff biochemical Kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
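
    The plain Poisson tau-leap that these RK variants extend can be stated in a few lines, here for the irreversible dimerization 2A -> B (rate constant and step size are illustrative): all firings in [t, t + tau) are drawn at once from a Poisson distribution, instead of simulating one SSA event at a time:

        import numpy as np

        rng = np.random.default_rng(0)
        A, B, c, tau, t = 1000, 0, 0.001, 0.01, 0.0
        while t < 5.0 and A > 1:
            a = c * A * (A - 1) / 2                 # propensity of 2A -> B
            k = min(rng.poisson(a * tau), A // 2)   # fire k reactions, keep A >= 0
            A -= 2 * k
            B += k
            t += tau
        print(f"t = {t:.2f}: A = {A}, B = {B}")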

  2. Determination of pyrophosphate and sulfate using polyhexamethylene guanidine hydrochloride-stabilized silver nanoparticles.

    PubMed

    Terenteva, E A; Apyari, V V; Dmitrienko, S G; Garshev, A V; Volkov, P A; Zolotov, Yu A

    2018-04-01

    Positively charged polyhexamethylene guanidine hydrochloride-stabilized silver nanoparticles (PHMG-AgNPs) were prepared and applied as a colorimetric probe for single-step determination of pyrophosphate and sulfate. The approach is based on nanoparticle aggregation, which changes the absorption spectrum and the color of the solution. Due to both electrostatic and steric stabilization, these nanoparticles show decreased sensitivity relative to many common anions, which allows simple and rapid direct single-step determination of pyrophosphate and sulfate. The effects of different factors (interaction time, pH, concentrations of the anions and the nanoparticles) on the aggregation of PHMG-AgNPs and on the analytical performance of the procedure were investigated. The method allows the determination of pyrophosphate and sulfate in the ranges of 0.16–2 μg mL⁻¹ and 20–80 μg mL⁻¹, respectively, with RSD of 2–5%. The analysis can be performed using either spectrophotometry or naked-eye detection. Practical application of the method was shown by the example of pyrophosphate determination in a baking powder sample. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Assessing the performance of a motion tracking system based on optical joint transform correlation

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.

    2015-08-01

    We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking performance of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
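
    A digital sketch of the joint transform correlation principle (without the nonlinear NZ-NL enhancements of the paper, and with random patches standing in for face images) shows how target position is read off from correlation peaks:

        import numpy as np

        rng = np.random.default_rng(0)
        ref = rng.random((16, 16))                   # reference "face" patch
        scene = 0.1 * rng.random((16, 64))
        scene[:, 30:46] += ref                       # target hidden in the scene

        joint = np.hstack([np.pad(ref, ((0, 0), (0, 48))), scene])  # joint input plane
        jps = np.abs(np.fft.fft2(joint)) ** 2        # joint power spectrum
        corr = np.abs(np.fft.fft2(jps))              # correlation plane
        corr[:, :20] = 0                             # mask the on-axis
        corr[:, -20:] = 0                            # autocorrelation terms
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        print(f"cross-correlation peak at {peak}")   # column ~ +/- ref-target separation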

  4. Operateurs et engins de calcul en virgule flottante et leur application a la simulation en temps reel sur FPGA [Floating-point operators and computation engines and their application to real-time simulation on FPGAs]

    NASA Astrophysics Data System (ADS)

    Ould Bachir, Tarek

    The real-time simulation of electrical networks has gained strong industrial interest in recent years, motivated by the substantial development cost reduction that such a prototyping approach can offer. Real-time simulation allows the progressive inclusion of real hardware during its development, allowing its testing under realistic conditions. However, CPU-based simulations suffer from certain limitations, such as the difficulty of reaching time steps of a few microseconds, an important challenge brought by modern power converters. Hence, industrial practitioners adopted the FPGA as a platform of choice for implementing calculation engines dedicated to the rapid real-time simulation of electrical networks. The reconfigurable technology broke the 5 kHz switching-frequency barrier that is characteristic of CPU-based simulations. Moreover, FPGA-based real-time simulation offers many advantages, including the reduced latency of the simulation loop obtained thanks to direct access to sensors and actuators. The fixed-point format is paradigmatic in FPGA-based digital signal processing. However, the format imposes a time penalty in the development process, since the designer has to assess the required precision for all model variables. This fact brought an important research effort on the use of the floating-point format for the simulation of electrical networks. One of the main challenges in the use of the floating-point format is the long latency of the elementary arithmetic operators, particularly when an adder is used as an accumulator, an important building block for the implementation of integration rules such as the trapezoidal method. Hence, single-cycle floating-point accumulation forms the core of this research work. Our results help build such operators as accumulators, multiply-accumulators (MACs), and dot-product (DP) operators, which play a key role in the implementation of the proposed calculation engines. The thesis therefore contributes to the realm of FPGA-based real-time simulation in several ways. It proposes a new summation algorithm, a generalization of the so-called self-alignment technique; the new formulation is broader and simpler in both its expression and its hardware implementation. Our research also formulates criteria to guarantee good accuracy, established on a theoretical as well as an empirical basis. Moreover, the thesis offers a comprehensive analysis of the use of the redundant high-radix carry-save (HRCS) format, used to perform rapid additions of large mantissas, and proposes two new HRCS operators, namely an endomorphic adder and an HRCS-to-conventional converter. Once the means to single-cycle accumulation is defined as a combination of the self-alignment technique and the HRCS format, the research focuses on the FPGA implementation of SIMD calculation engines using parallel floating-point MACs or DPs. The proposed operators are characterized by low latencies, allowing the engines to reach very small time steps. The document finally discusses the modelling of power electronic circuits and concludes with the presentation of a versatile calculation engine capable of simulating power converters with arbitrary topologies and up to 24 switches, while achieving time steps below 1 μs and allowing switching frequencies in the range of tens of kilohertz. The latter realization has led to the commercialization of a product by our industrial partner.

  5. Molecular strategy for identification in Aspergillus section Flavi.

    PubMed

    Godet, Marie; Munaut, Françoise

    2010-03-01

    Aspergillus flavus is one of the most common contaminants that produces aflatoxins in foodstuffs. It is also a human allergen and a pathogen of animals and plants. Aspergillus flavus is included in the Aspergillus section Flavi, which comprises 11 closely related species producing different profiles of secondary metabolites. A six-step strategy has been developed that allows identification of nine of the 11 species. First, three real-time PCR reactions allowed us to discriminate four groups within the section: (1) A. flavus/Aspergillus oryzae/Aspergillus minisclerotigenes/Aspergillus parvisclerotigenus; (2) Aspergillus parasiticus/Aspergillus sojae/Aspergillus arachidicola; (3) Aspergillus tamarii/Aspergillus bombycis/Aspergillus pseudotamarii; and (4) Aspergillus nomius. Second, random amplification of polymorphic DNA (RAPD) amplifications or SmaI digestion allowed us to differentiate (1) A. flavus, A. oryzae and A. minisclerotigenes; (2) A. parasiticus, A. sojae and A. arachidicola; (3) A. tamarii, A. bombycis and A. pseudotamarii. Among the 11 species, only A. parvisclerotigenus cannot be differentiated from A. flavus. Using the results of real-time PCR, RAPD and SmaI digestion, a decision-making tree was drawn up to identify nine of the 11 species of section Flavi. In contrast to conventional morphological methods, which are often time-consuming, the molecular strategy proposed here is based mainly on real-time PCR, which is rapid and requires minimal handling.
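
    The decision-making tree described above can be rendered as a small lookup routine. The branch labels below are hypothetical read-out names invented for illustration; the real assays (three real-time PCR reactions, then RAPD or SmaI digestion) and the published tree define the actual outcomes.

```python
# Hypothetical encoding of the two-stage identification tree: a real-time PCR
# result places an isolate in one of four groups, then a RAPD/SmaI pattern
# resolves the group members. Labels are illustrative assumptions only.

def identify(pcr_group: int, secondary: str) -> str:
    if pcr_group == 1:   # flavus / oryzae / minisclerotigenes / parvisclerotigenus
        return {"flavus-type": "A. flavus or A. parvisclerotigenus",  # not separable
                "oryzae-type": "A. oryzae",
                "minisclerotigenes-type": "A. minisclerotigenes"}[secondary]
    if pcr_group == 2:   # parasiticus / sojae / arachidicola
        return {"parasiticus-type": "A. parasiticus",
                "sojae-type": "A. sojae",
                "arachidicola-type": "A. arachidicola"}[secondary]
    if pcr_group == 3:   # tamarii / bombycis / pseudotamarii
        return {"tamarii-type": "A. tamarii",
                "bombycis-type": "A. bombycis",
                "pseudotamarii-type": "A. pseudotamarii"}[secondary]
    return "A. nomius"   # group 4 needs no second step

print(identify(2, "sojae-type"))   # -> A. sojae
```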

  6. Enantioselective Total Synthesis of (−)-Minovincine in Nine Chemical Steps: An Approach to Ketone Activation in Cascade Catalysis

    PubMed Central

    Laforteza, Brian N.; Pickworth, Mark

    2014-01-01

    More cycling, fewer steps. The first enantioselective total synthesis of (−)-minovincine has been accomplished in nine chemical steps and 13% overall yield. A novel, one-step Diels-Alder/β-elimination/conjugate addition organocascade sequence allowed rapid access to the central tetracyclic core in an asymmetric manner. PMID:24000234

  7. 40 CFR 35.2109 - Step 2+3.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Step 2+3. 35.2109 Section 35.2109... ASSISTANCE Grants for Construction of Treatment Works § 35.2109 Step 2+3. The Regional Administrator may award a Step 2+3 grant which will provide the Federal share of an allowance under appendix B and the...

  8. Postseismic rebound in fault step-overs caused by pore fluid flow

    USGS Publications Warehouse

    Peltzer, G.; Rosen, P.; Rogez, F.; Hudnut, K.

    1996-01-01

    Near-field strain induced by large crustal earthquakes results in changes in pore fluid pressure that dissipate with time and produce surface deformation. Synthetic aperture radar (SAR) interferometry revealed several centimeters of postseismic uplift in pull-apart structures and subsidence in a compressive jog along the Landers, California, 1992 earthquake surface rupture, with a relaxation time of 270 ± 45 days. Such a postseismic rebound may be explained by the transition of the Poisson's ratio of the deformed volumes of rock from undrained to drained conditions as pore fluid flow allows pore pressure to return to hydrostatic equilibrium.

  9. A pseudospectral Legendre method for hyperbolic equations with an improved stability condition

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1986-01-01

    A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N² (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.

  10. A pseudospectral Legendre method for hyperbolic equations with an improved stability condition

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, H.

    1984-01-01

    A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N² (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
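
    The 1/N² restriction quoted for standard pseudospectral methods reflects the growth of the differentiation operator's spectral radius. The NumPy sketch below builds the usual Chebyshev-Gauss-Lobatto differentiation matrix (the construction popularized by Trefethen), imposes a boundary condition for the model problem u_t = u_x with u(1) = 0, and shows the spectral radius growing roughly like N², which forces an explicit time step proportional to 1/N². It illustrates the problem the Legendre-zero grid is designed to avoid, not the paper's method itself.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # negative-sum trick for the diagonal
    return D, x

# Model problem u_t = u_x with u(1) = 0: strip the row/column of the
# inflow node x = 1 (index 0) and inspect what remains.
for N in (8, 16, 32, 64):
    D, x = cheb(N)
    A = D[1:, 1:]                    # boundary condition imposed at x = 1
    rho = np.abs(np.linalg.eigvals(A)).max()
    print(f"N = {N:3d}   spectral radius = {rho:9.1f}   rho/N^2 = {rho/N**2:.3f}")
```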

  11. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms. [for junction diodes simulation

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Osher, Stanley; Jerome, Joseph

    1991-01-01

    A micron n+-n-n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially nonoscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakharov, Leonid E.; Li, Xujing

    This paper formulates the Tokamak Magneto-Hydrodynamics (TMHD) model, initially outlined by X. Li and L.E. Zakharov [Plasma Science and Technology, accepted, ID:2013-257 (2013)], for proper simulations of macroscopic plasma dynamics. The simplest set of magneto-hydrodynamics equations, sufficient for disruption modeling and extendable to more refined physics, is explained in detail. First, TMHD introduces to 3-D simulations the Reference Magnetic Coordinates (RMC), which are aligned with the magnetic field in the best possible way; the numerical implementation of RMC is adaptive grids. Being consistent with the high anisotropy of the tokamak plasma, RMC allow simulations at realistic, very high plasma electric conductivity. Second, TMHD splits the equation of motion into an equilibrium equation and a plasma-advancing equation. This resolves the four-decade-old problem of Courant limitations on the time step in existing, plasma-inertia-driven numerical codes. The splitting allows disruption simulations on a relatively slow time scale in comparison with the fast time of ideal MHD instabilities. A new, efficient numerical scheme is proposed for TMHD.

  13. Evolution of an experiential learning partnership in emergency management higher education.

    PubMed

    Knox, Claire Connolly; Harris, Alan S

    2016-01-01

    Experiential learning allows students to step outside the classroom and into a community setting to integrate theory with practice, while allowing the community partner to reach goals or address needs within their organization. Emergency Management and Homeland Security scholars recognize the importance, and support the increased implementation, of this pedagogical method in the higher education curriculum. Yet challenges to successful implementation exist, including limited resources and time. This longitudinal study extends the literature by detailing the evolution of a partnership between a university and an office of emergency management in which a functional exercise is strategically integrated into an undergraduate course. The manuscript concludes with a discussion of lessons learned throughout the multiyear process.

  14. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    PubMed

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows DICOM-RT structure contour information, as delivered by a treatment planning system, to be imported. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Gas diffusion as a new fluidic unit operation for centrifugal microfluidic platforms.

    PubMed

    Ymbern, Oriol; Sández, Natàlia; Calvo-López, Antonio; Puyol, Mar; Alonso-Chamarro, Julian

    2014-03-07

    A centrifugal microfluidic platform prototype with an integrated membrane for gas diffusion is presented for the first time. The centrifugal platform allows multiple, parallel analyses on a single disk and integrates at least ten independent microfluidic subunits, which allow both calibration and sample determination. It is constructed from a polymeric substrate material and is designed to perform colorimetric determinations using a simple miniaturized optical detection system. The determination of three different analytes, sulfur dioxide, nitrite and carbon dioxide, is carried out as a proof of concept of a versatile microfluidic system for the determination of analytes whose analytical procedure involves a gas diffusion separation step.

  16. The validity of the ActiPed for physical activity monitoring.

    PubMed

    Brown, D K; Grimwade, D; Martinez-Bussion, D; Taylor, M J D; Gladwell, V F

    2013-05-01

    The ActiPed (FitLinxx) is a uniaxial accelerometer which objectively measures physical activity and uploads the data wirelessly to a website, allowing participants and researchers to view activity levels remotely. The aim was to validate the ActiPed's step count, distance travelled and activity time against direct observation, and further to compare it against a pedometer (YAMAX), an accelerometer (ActiGraph) and the manufacturer's guidelines. 22 participants, aged 28±7 years, undertook four protocols, including walking on different surfaces and an incremental running protocol (from 2 mph to 8 mph). Bland-Altman plots allowed comparison of direct observation against ActiPed estimates. For step count, the ActiPed showed a low % bias in all protocols: walking on a treadmill (-1.30%), incremental treadmill protocol (-1.98%), walking over grass (-1.67%), and walking over concrete (-0.93%). When differentiating between walking and running step count, the ActiPed showed a % bias of 4.10% and -6.30%, respectively. The ActiPed showed >95% accuracy for distance and duration estimations overall, although it underestimated distance (p<0.01) for walking over grass and concrete. Overall, the ActiPed showed acceptable levels of accuracy comparable to previously validated pedometers and accelerometers. The accuracy, combined with the simple and informative remote gathering of data, suggests that the ActiPed could be a useful tool in objective physical activity monitoring. © Georg Thieme Verlag KG Stuttgart · New York.
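
    The Bland-Altman analysis used above reduces to a few lines of arithmetic: the bias is the mean paired difference and the limits of agreement sit 1.96 standard deviations either side of it. A minimal sketch with made-up step counts (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Bland-Altman agreement between a device and direct observation.
observed = np.array([1000, 1500, 2000, 2500, 3000], dtype=float)  # illustrative
device   = np.array([ 985, 1490, 1975, 2480, 2960], dtype=float)  # illustrative

diff = device - observed
bias = diff.mean()                           # mean difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
pct_bias = 100 * bias / observed.mean()      # bias as % of the criterion

print(f"bias = {bias:.1f} steps ({pct_bias:+.2f}%)")
print(f"95% limits of agreement: {loa[0]:.1f} to {loa[1]:.1f} steps")
```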

  17. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    PubMed Central

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (e_g). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (e_l) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency, with additional filtering for local efficiency. PMID:29497372

  18. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI.

    PubMed

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (e_g). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (e_l) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency, with additional filtering for local efficiency.

  19. Inorganic and Protein Crystal Assembly in Solutions

    NASA Technical Reports Server (NTRS)

    Chernov, A. A.

    2005-01-01

    The basic kinetic and thermodynamic concepts of crystal growth will be revisited in view of recent AFM and interferometric findings. These concepts are as follows: 1) the Kossel crystal model, which allows only one kink type on the crystal surface; the modern theory is developed overwhelmingly for the Kossel model; 2) the presumption that intensive step fluctuations maintain kink density sufficiently high to allow applicability of the Gibbs-Thomson law; 3) the common experience that unlimited step bunching (morphological instability) during layer growth from solutions and supercooled melts always takes place if the step flow direction coincides with that of the fluid.

  20. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    DOE PAGES

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...

    2017-06-09

    Lattice-based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time steps during KMC simulations, with the simulation spending an inordinate number of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow-process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time steps may be distorted during the transient period.

  1. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya

    2017-10-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
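
    The stiffness problem and the throttling idea are easy to demonstrate with a toy rejection-free KMC loop. The sketch below is a simplification rather than the SQERTSS algorithm itself: it runs a two-process system whose rates differ by six orders of magnitude and shows how down-scaling the fast process's rate (the essence of throttling) lets the slow, rate-limiting event appear within a fixed budget of KMC steps.

```python
import random

def kmc_events(rates, n_steps, seed=42):
    """Count event firings in a rejection-free KMC run of n_steps."""
    rng = random.Random(seed)
    counts = [0] * len(rates)
    total = sum(rates)
    for _ in range(n_steps):
        r = rng.uniform(0.0, total)      # select an event with prob ~ its rate
        acc = 0.0
        for i, k in enumerate(rates):
            acc += k
            if r < acc:
                break
        counts[i] += 1                   # falls through to last event if r == total
    return counts

fast, slow = 1e6, 1.0                    # rates spanning six orders of magnitude
print("unthrottled:", kmc_events([fast, slow], 10_000))        # slow event ~never fires
print("throttled:  ", kmc_events([fast / 1e4, slow], 10_000))  # now it is observed
```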

  2. Improvement in Parenteral Nutrition-Associated Cholestasis With the Use of Omegaven in an Infant With Short Bowel Syndrome.

    PubMed

    Strang, Brian J; Reddix, Bruce A; Wolk, Robert A

    2016-10-01

    Parenteral nutrition-associated cholestasis (PNAC) and liver disease have been associated with soybean oil-based intravenous fat emulsions (IVFEs). The benefit of fish oil-based IVFEs in the reversal of parenteral nutrition (PN)-associated liver damage includes allowing for a longer PN duration without an immediate need for bowel or liver transplantation. The present case involves an infant born with short bowel syndrome (SBS) requiring long-term PN, with development of PNAC and subsequent administration of a fish oil-based IVFE. An infant born with SBS was initiated on PN and enteral feeds. After failed enteral progression, bowel lengthening by serial transverse enteroplasty (STEP) resulted in postoperative ileus with delayed enteral feeding for 4 weeks. The administration of long-term PN led to the development of PNAC, resulting in the initiation of a fish oil-based IVFE. After 4 months, the cholestasis had resolved. Despite the STEP, at 16 months the child required bowel tapering due to an inability to advance enteral feeding. Fish oil-based IVFE was effectively used to reverse PNAC in a child with SBS. Despite early STEP, the patient was not able to tolerate enteral feedings and required bowel tapering. This case illustrates that early surgical intervention did not allow for improved feed tolerance. This resulted in a significant period without enteral nutrition, leading to the development of cholestasis. The use of fish oil-based IVFE may permit a longer duration of PN administration without the development of cholestasis or liver disease, allowing more time for bowel adaptation prior to the need for surgical intervention. © 2016 American Society for Parenteral and Enteral Nutrition.

  3. From MIMO-OFDM Algorithms to a Real-Time Wireless Prototype: A Systematic Matlab-to-Hardware Design Flow

    NASA Astrophysics Data System (ADS)

    Weijers, Jan-Willem; Derudder, Veerle; Janssens, Sven; Petré, Frederik; Bourdoux, André

    2006-12-01

    To assess the performance of forthcoming 4th-generation wireless local area networks, the algorithmic functionality is usually modelled using a high-level mathematical software package, for instance, Matlab. In order to validate the modelling assumptions against the real physical world, the high-level functional model needs to be translated into a prototype. A systematic system design methodology proves very valuable, since it avoids, or at least reduces, numerous design iterations. In this paper, we propose a novel Matlab-to-hardware design flow, which allows the algorithmic functionality to be mapped onto the target prototyping platform in a systematic and reproducible way. The proposed design flow is partly manual and partly tool assisted. It is shown that the proposed design flow allows the same testbench to be used throughout the whole design flow and avoids time-consuming and error-prone intermediate translation steps.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kristine Barrett; Shannon Bragg-Sitton

    The Advanced Light Water Reactor (LWR) Nuclear Fuel Development Research and Development (R&D) Pathway encompasses strategic research focused on improving reactor core economics and safety margins through the development of an advanced fuel cladding system. To achieve significant operating improvements while remaining within safety boundaries, significant steps beyond incremental improvements in the current generation of nuclear fuel are required. Fundamental improvements are required in the areas of nuclear fuel composition, cladding integrity, and the fuel/cladding interaction to allow power uprates and increased fuel burn-up allowance while potentially improving safety margin through the adoption of an “accident tolerant” fuel system that would offer improved coping time under accident scenarios. With a development time of about 20-25 years, advanced fuel designs must be started today and proven in current reactors if future reactor designs are to be able to use them with confidence.

  5. General Methods for Analysis of Sequential “n-step” Kinetic Mechanisms: Application to Single Turnover Kinetics of Helicase-Catalyzed DNA Unwinding

    PubMed Central

    Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.

    2003-01-01

    Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
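
    For the special case of n identical sequential steps with rate constant k per step, the "all or none" signal is the probability that all n steps have completed by time t, which is the regularized lower incomplete gamma function P(n, kt); writing it this way is exactly what allows n to float as a continuous fitting parameter. A minimal sketch of this special case (an assumption of the sketch, not the paper's full treatment):

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

def fss(t, n, k):
    """Fraction fully unwound by time t for n sequential steps of rate k each."""
    return gammainc(n, k * t)        # P(n, kt); n may be non-integer

t = np.linspace(0.0, 10.0, 6)
for n in (2.0, 3.5, 5.0):            # non-integer n is allowed
    print(f"n = {n}:", np.round(fss(t, n, k=1.0), 3))
```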

  6. DECISION-MAKING ALIGNED WITH RAPID-CYCLE EVALUATION IN HEALTH CARE.

    PubMed

    Schneeweiss, Sebastian; Shrank, William H; Ruhl, Michael; Maclure, Malcolm

    2015-01-01

    Availability of real-time electronic healthcare data provides new opportunities for rapid-cycle evaluation (RCE) of health technologies, including healthcare delivery and payment programs. We aim to align decision-making processes with the stages of RCE to optimize the usefulness and impact of rapid results. Rational decisions about program adoption depend on program effect size in relation to externalities, including implementation cost, sustainability, and likelihood of broad adoption. Drawing on case studies and experience from drug safety monitoring, we examine how decision makers have used scientific evidence on complex interventions in the past. We clarify how RCE alters the nature of policy decisions; develop the RAPID framework for synchronizing decision-maker activities with the stages of RCE; and provide guidelines on evidence thresholds for incremental decision-making. In contrast to traditional evaluations, RCE provides early evidence on effectiveness and facilitates a stepped approach to decision making in expectation of future, regularly updated evidence. RCE allows for the identification of trends in adjusted effect size. It supports adapting a program in midstream in response to interim findings, or adapting the evaluation strategy to identify true improvements earlier. The 5-step RAPID approach, which utilizes the accumulating evidence of program effectiveness over time, could increase policy-makers' confidence in expediting decisions. RCE enables a step-wise approach to HTA decision-making, based on gradually emerging evidence, reducing delays in decision-making processes after traditional one-time evaluations.

  7. A Wireless Sensor System for Real-Time Measurement of Pressure Profiles at Lower Limb Protheses to Ensure Proper Fitting

    DTIC Science & Technology

    2011-10-01

    been developed. The next step is to develop the base technology into a grid-like mapping sensor, construct the excitation and detection circuits...the project involves advancing the base technology into a grid-like mapping sensor, constructing the excitation and detection circuits, modifying and...further. In conclusion, the screen printing and etching process allows for precise, repeatable production of sensing elements for grid fabrication

  8. User intent prediction with a scaled conjugate gradient trained artificial neural network for lower limb amputees using a powered prosthesis.

    PubMed

    Woodward, Richard B; Spanias, John A; Hargrove, Levi J

    2016-08-01

    Powered lower limb prostheses have the ability to provide greater mobility for amputee patients. Such prostheses often have pre-programmed modes which can allow activities such as climbing stairs and descending ramps, something which many amputees struggle with when using non-powered limbs. Previous literature has shown how pattern classification can allow seamless transitions between modes with a high accuracy and without any user interaction. Although accurate, training and testing each subject with their own dependent data is time consuming. By using subject-independent datasets, whereby a unique subject is tested against a pooled dataset of other subjects, we believe subject training time can be reduced while still achieving an accurate classification. We present here an intent recognition system using an artificial neural network (ANN) with a scaled conjugate gradient learning algorithm to classify gait intention with user-dependent and independent datasets for six unilateral lower limb amputees. We compare these results against a linear discriminant analysis (LDA) classifier. The ANN was found to have significantly lower classification error (p < 0.05) than LDA with all user-dependent step-types, as well as transitional steps for user-independent datasets. Both types of classifiers are capable of making fast decisions: 1.29 and 2.83 ms for the LDA and ANN, respectively. These results suggest that ANNs can provide suitable and accurate offline classification in prosthesis gait prediction.
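
    The LDA-versus-ANN comparison is straightforward to reproduce in outline with scikit-learn. Note that scikit-learn's MLPClassifier does not offer scaled conjugate gradient, so lbfgs stands in for it here, and the synthetic features below are placeholders for the prosthesis sensor data; both are assumptions of this sketch.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder "sensor features" for three locomotion modes (not real gait data).
X = np.vstack([rng.normal(mu, 1.0, size=(200, 8)) for mu in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
ann = MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs",
                    max_iter=1000, random_state=0).fit(Xtr, ytr)

print("LDA error:", 1 - lda.score(Xte, yte))
print("ANN error:", 1 - ann.score(Xte, yte))
```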

  9. Integration of Tidal Prism Model and HSPF for simulating indicator bacteria in coastal watersheds

    NASA Astrophysics Data System (ADS)

    Sobel, Rose S.; Rifai, Hanadi S.; Petersen, Christina M.

    2017-09-01

    Coastal water quality is strongly influenced by tidal fluctuations and water chemistry. There is a need for rigorous models that are not computationally or economically prohibitive, but still allow simulation of the hydrodynamics and bacteria sources for coastal, tidally influenced streams and bayous. This paper presents a modeling approach that links a Tidal Prism Model (TPM) implemented in an Excel-based modeling environment with a watershed runoff model (Hydrologic Simulation Program FORTRAN, HSPF) for such watersheds. The TPM is a one-dimensional mass balance approach that accounts for loading from tidal exchange, runoff, point sources and bacteria die-off at an hourly time step resolution. The novel use of equal high-resolution time steps in this study allowed seamless integration of the TPM and HSPF. The linked model was calibrated to flow and E. coli data (for HSPF), and salinity and enterococci data (for the TPM) for a coastal stream in Texas. Sensitivity analyses showed the TPM to be most influenced by changes in net decay rates, followed by tidal and runoff loads, respectively. Management scenarios were evaluated with the developed linked models to assess the impact of runoff load reductions and improved wastewater treatment plant quality, and to determine the areas of critical need for such reductions. Achieving water quality standards for bacteria required load reductions that ranged from zero to 90% for the modeled coastal stream.
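
    An hourly tidal-prism mass balance has the generic form C[t+1] = C[t] + Δt·(loading + exchange − decay), and a toy one-box version fits in a few lines. All coefficients and loads below are invented for illustration, not the study's calibrated values.

```python
import numpy as np

# Toy one-box tidal prism mass balance for bacteria, hourly time step.
hours = 240
dt = 1.0                      # h
V = 1e5                       # segment volume, m^3 (illustrative)
Q_runoff = 500.0              # runoff inflow, m^3/h (illustrative)
C_runoff = 2000.0             # runoff bacteria conc. (illustrative)
E_tide = 0.05                 # tidal exchange fraction per hour (illustrative)
C_sea = 10.0                  # boundary (seaward) concentration
k = np.log(2) / 24.0          # first-order die-off, 24 h half-life (illustrative)

C = np.empty(hours)
C[0] = 100.0
for t in range(hours - 1):
    loading = Q_runoff * C_runoff / V        # runoff load term
    exchange = E_tide * (C_sea - C[t])       # tidal dilution toward C_sea
    C[t + 1] = C[t] + dt * (loading + exchange - k * C[t])

print(f"near-steady concentration after {hours} h: {C[-1]:.1f}")
```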

  10. Methodology For Reduction Of Sampling On The Visual Inspection Of Developed And Etched Wafers

    NASA Astrophysics Data System (ADS)

    van de Ven, Jamie S.; Khorasani, Fred

    1989-07-01

    There is a lot of inspection in the manufacturing of semiconductor devices. Generally, the more important a manufacturing step, the higher the level of inspection. In some cases 100% of the wafers are inspected after certain steps. Inspection is a non-value-added and expensive activity. It requires an army of "inspectors," oftentimes expensive equipment, and becomes a "bottleneck" when the level of inspection is high. Although inspection helps identify quality problems, it hurts productivity. The new management, quality, and productivity philosophies recommend against over-inspection [Point #3 in Dr. Deming's 14 Points for Management (1)]; 100% inspection is quite unnecessary. Often the nature of a process allows us to reduce inspection drastically and still maintain a high level of confidence in quality. In section 2, we discuss such situations and show that some elementary probability theory allows us to determine sample sizes and measure the chances of catching a bad "lot" and accepting a good lot. In section 3, we provide an example and application of the theory, and make a few comments on the money and time saved because of this work. Finally, in section 4, we draw some conclusions about the new quality and productivity philosophies and how applied statisticians and engineers should study every situation individually and avoid blindly using the methods and tables given in books.
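
    The "elementary probability theory" referred to in section 2 is essentially acceptance sampling: if a lot of N wafers contains D defective ones and a random sample of n is inspected, the chance the sample shows zero defectives is hypergeometric. A small sketch (lot size and defect count are made up):

```python
from scipy.stats import hypergeom

N, D = 500, 10        # lot size and number of defective wafers (illustrative)
for n in (5, 10, 25, 50):
    p_miss = hypergeom.pmf(0, N, D, n)   # P(sample of n contains 0 defectives)
    print(f"n = {n:2d}: P(accept bad lot) = {p_miss:.3f}, "
          f"P(catch it) = {1 - p_miss:.3f}")
```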

  11. Analogue step-by-step DC component eliminator for 24-hour PPG signal monitoring.

    PubMed

    Pilt, Kristjan; Meigas, Kalju; Lass, Jaanus; Rosmann, Mart; Kaik, Jüri

    2007-01-01

    For applications where the AC component of the PPG signal needs to be measured without disturbances to its shape and recorded digitally with high digitization accuracy, a step-by-step DC component eliminator has been developed. This paper describes the step-by-step DC component eliminator, which is realized with an analogue comparator and an operational amplifier. It allows the PPG signal to be recorded without disturbances to its shape in a 24-hour PPG signal monitoring system. Experiments with the PPG signal have been carried out.
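
    The control logic of such an eliminator can be prototyped in software before committing to hardware: whenever the residual output drifts past a comparator threshold, the DC estimate is stepped by a fixed increment and subtracted. The thresholds, step size and signal model below are arbitrary illustration values, not the paper's circuit parameters.

```python
import numpy as np

# Software model of a step-by-step DC eliminator: subtract a staircase DC
# estimate that is nudged whenever the residual leaves the comparator window.
t = np.arange(0.0, 30.0, 0.01)
ppg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * t      # AC pulse + DC drift

dc_est, step, window = 0.0, 0.05, 1.0                  # illustrative values
out = np.empty_like(ppg)
for i, sample in enumerate(ppg):
    residual = sample - dc_est
    if residual > window:       # comparator high: raise the DC estimate
        dc_est += step
    elif residual < -window:    # comparator low: lower it
        dc_est -= step
    out[i] = sample - dc_est

print(f"raw range: {np.ptp(ppg):.2f}, corrected range: {np.ptp(out):.2f}")
```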

  12. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  13. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  14. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) allows developing organisms to be imaged in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive interactive processing via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high-performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  15. Experimental study of the flow over a backward-facing rounded ramp

    NASA Astrophysics Data System (ADS)

    Duriez, Thomas; Aider, Jean-Luc; Wesfreid, Jose Eduardo

    2010-11-01

    The backward-facing rounded ramp (BFR) is a very simple geometry leading to boundary layer separation, close to the backward-facing step (BFS) flow. The main difference from the BFS flow is that the separation location depends on the incoming flow, while it is fixed at the step edge in the BFS flow. Despite the simplicity of the geometry, the flow is complex and the transition process still has to be investigated. In this study we investigate the BFR flow using time-resolved PIV. For Reynolds numbers ranging between 300 and 12 000 we first study the time-averaged properties such as the positions of separation and reattachment, the recirculation length and the shear layer thickness. The time resolution also gives access to the characteristic frequencies of the time-dependent flow. An appropriate Fourier filtering of the flow field, around each frequency peak in the global spectrum, allows an investigation of each mode in order to extract its wavelength, phase velocity, and spatial distribution. We then sort the spectral content and relate the main frequencies to the most amplified Kelvin-Helmholtz instability mode and its harmonics, the vortex pairing, the low-frequency recirculation bubble oscillation and the interactions between all these phenomena.

  16. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) allows developing organisms to be imaged in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive interactive processing via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high-performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license (http://opensource.org/licenses/MIT). The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  17. A three operator split-step method covering a larger set of non-linear partial differential equations

    NASA Astrophysics Data System (ADS)

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential Fourier-based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to accurate modelling with low computational resources. The new method maintains a 3rd-order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations to which this method applies is shown. The method is fully derived, and its implementation in the split-step architecture is shown. This paper lays the mathematical groundwork for an upcoming paper employing this method in white-light generation simulations in bulk material.
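
    For readers unfamiliar with the split-step architecture the paper builds on, the classic two-operator version for the 1D cubic Schrödinger equation i u_t + ½ u_xx + |u|² u = 0 is a useful reference point: the linear half-steps are exact in Fourier space and the nonlinear step is exact pointwise. This is the standard Strang-split scheme, not the paper's three-operator method.

```python
import numpy as np

# Strang split-step Fourier for i u_t + 0.5 u_xx + |u|^2 u = 0.
# The soliton u = sech(x) e^{it/2} should keep |u| constant in time.
N, L = 512, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, nsteps = 1e-3, 5000

u = 1.0 / np.cosh(x)                        # fundamental soliton
half_lin = np.exp(-0.5j * k**2 * dt / 2)    # exact linear half-step in k-space

for _ in range(nsteps):
    u = np.fft.ifft(half_lin * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u)**2 * dt)     # exact pointwise nonlinear step
    u = np.fft.ifft(half_lin * np.fft.fft(u))

# Should be tiny: only the O(dt^2) splitting error perturbs the soliton.
print("max |u| drift:", np.abs(np.abs(u).max() - 1.0))
```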

  18. A time to search: finding the meaning of variable activation energy.

    PubMed

    Vyazovkin, Sergey

    2016-07-28

    This review deals with the phenomenon of variable activation energy frequently observed when studying kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has the meaning of a collective parameter linked to the activation energies of the individual steps. It is demonstrated that, by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.
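
    The basic computation behind a variable activation energy is a linear fit: ln of the observed rate is regressed against 1/T, and the slope gives −E/R. A minimal sketch with a synthetic two-step (parallel) process whose rate constants are invented shows how the effective E drifts between the individual steps' values as temperature changes:

```python
import numpy as np

R = 8.314  # J/(mol K)

def activation_energy(T, rates):
    """E from the Arrhenius slope of ln(rate) vs 1/T."""
    slope, _ = np.polyfit(1.0 / T, np.log(rates), 1)
    return -slope * R

T = np.array([300.0, 320.0, 340.0, 360.0])
# Two parallel steps with E = 50 and 120 kJ/mol (illustrative prefactors):
# the observed rate is their sum, so the effective E varies with temperature.
k = 1e6 * np.exp(-50e3 / (R * T)) + 1e18 * np.exp(-120e3 / (R * T))

print(f"E over the whole range: {activation_energy(T, k)/1e3:.1f} kJ/mol")
for i in range(len(T) - 1):   # pairwise slopes expose the variation
    E = activation_energy(T[i:i+2], k[i:i+2])
    print(f"{T[i]:.0f}-{T[i+1]:.0f} K: E = {E/1e3:.1f} kJ/mol")
```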

  19. Helicase Stepping Investigated with One-Nucleotide Resolution Fluorescence Resonance Energy Transfer

    NASA Astrophysics Data System (ADS)

    Lin, Wenxia; Ma, Jianbing; Nong, Daguan; Xu, Chunhua; Zhang, Bo; Li, Jinghua; Jia, Qi; Dou, Shuoxing; Ye, Fangfu; Xi, Xuguang; Lu, Ying; Li, Ming

    2017-09-01

    Single-molecule Förster resonance energy transfer (FRET) is widely applied to study helicases by detecting distance changes between a pair of dyes anchored to the overhangs of a forked DNA. However, it has lacked the single-base-pair (1-bp) resolution required to reveal the stepping kinetics of helicases. We designed a nanotensioner in which a short DNA is bent to exert force on the overhangs, just as in optical or magnetic tweezers. The strategy improved the resolution of FRET to 0.5 bp, high enough to uncover differences in DNA unwinding by yeast Pif1 and E. coli RecQ, whose unwinding behaviors cannot be differentiated by currently practiced methods. We found that Pif1 exhibits 1-bp stepping kinetics, while RecQ breaks 1 bp at a time but sequesters the nascent nucleotides and releases them randomly. The high-resolution data allowed us to propose a three-parameter model to quantitatively interpret the apparently different unwinding behaviors of the two helicases, which belong to two superfamilies.

  20. Motofit - integrating neutron reflectometry acquisition, reduction and analysis into one, easy to use, package

    NASA Astrophysics Data System (ADS)

    Nelson, Andrew

    2010-11-01

    The efficient use of complex neutron scattering instruments is often hindered by the complex nature of their operating software. This complexity exists at each experimental step: data acquisition, reduction and analysis, with each step being as important as the previous. For example, whilst command-line interfaces are powerful for automated acquisition, they often reduce accessibility for novice users and sometimes reduce efficiency for advanced users. One solution to this is the development of a graphical user interface which allows the user to operate the instrument by a simple and intuitive "push button" approach. This approach was taken by the Motofit software package for the analysis of multiple-contrast reflectometry data. Here we describe the extension of this package to cover the data acquisition and reduction steps for the Platypus time-of-flight neutron reflectometer. Consequently, the complete operation of an instrument is integrated into a single, easy-to-use program, leading to efficient instrument usage.

  1. Controlling superconductivity in La 2-xSr xCuO 4+δ by ozone and vacuum annealing

    DOE PAGES

    Leng, Xiang; Bozovic, Ivan

    2014-11-21

    In this study we performed a series of ozone and vacuum annealing experiments on epitaxial La 2-xSr xCuO 4+δ thin films. The transition temperature after each annealing step has been measured by the mutual inductance technique. The relationship between the effective doping and the vacuum annealing time has been studied. Short-time ozone annealing at 470 °C oxidizes an underdoped film all the way to the overdoped regime. The subsequent vacuum annealing at 350 °C to 380 °C slowly brings the sample across the optimal doping point back to the undoped, non-superconducting state. Several ozone and vacuum annealing cycles have been done on the same sample and the effects were found to be repeatable and reversible. Vacuum annealing of ozone-loaded LSCO films is a very controllable process, allowing one to tune the doping level of LSCO in small steps across the superconducting dome, which can be used for fundamental physics studies.

  2. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, in which the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple-imputation procedures, considering a uniform or the nonparametric maximum likelihood estimate (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR. ©2016 American Association for Cancer Research.
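
    The midpoint/upper/lower sensitivity analysis described above is easy to mirror in Python (the paper itself works in SAS and R): each progression interval (L, U] is collapsed to a single time three different ways and a Kaplan-Meier curve is fit to each. The intervals below are invented, and the lifelines package supplies the Kaplan-Meier fitter.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Invented interval-censored progression data: progression occurred in (L, U].
L = np.array([2.0, 4.0, 3.0, 6.0, 5.0])
U = np.array([4.0, 7.0, 5.0, 9.0, 8.0])
event = np.ones_like(L)                  # all progressions observed here

for label, times in (("midpoint", (L + U) / 2),
                     ("upper (standard)", U),
                     ("lower", L)):
    kmf = KaplanMeierFitter()
    kmf.fit(times, event_observed=event, label=label)
    print(label, "median PFS:", kmf.median_survival_time_)
```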

  3. New subtraction algorithms for evaluation of lesions on dynamic contrast-enhanced MR mammography.

    PubMed

    Choi, Byung Gil; Kim, Hak Hee; Kim, Euy Neyng; Kim, Bum-soo; Han, Ji-Youn; Yoo, Seung-Schik; Park, Seog Hee

    2002-12-01

    We report new subtraction algorithms for the detection of lesions in dynamic contrast-enhanced MR mammography (CE MRM). Twenty-five patients with suspicious breast lesions underwent dynamic CE MRM using a 3D fast low-angle shot sequence. After the acquisition of the T1-weighted scout images, dynamic images were acquired six times after the bolus injection of contrast media. Serial subtractions, step-by-step subtractions and reverse subtractions were performed. Two radiologists attempted to differentiate benign from malignant lesions in consensus. The sensitivity, specificity, and accuracy of the method for differentiating malignant tumors from benign lesions were 85.7%, 100%, and 96%, respectively. Subtraction images allowed for better visualization of the enhancement, as well as of its temporal pattern, than visual inspection of the dynamic images alone. Our findings suggest that the new subtraction algorithms are adequate for screening malignant breast lesions and can potentially replace time-intensity profile analysis on user-selected regions of interest.
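
    How the three subtraction schemes might differ is clearer in array form. The definitions below are assumptions inferred from the names (the paper defines them precisely): serial subtracts the pre-contrast baseline from every dynamic frame, step-by-step subtracts consecutive frames, and reverse subtracts each frame from the final one.

```python
import numpy as np

# Six dynamic frames of a 4x4 "image" with one steadily enhancing pixel.
frames = np.zeros((6, 4, 4))
frames[:, 2, 2] = np.linspace(0.0, 1.0, 6)   # enhancing lesion at (2, 2)

serial = frames[1:] - frames[0]       # each frame minus pre-contrast baseline
stepwise = np.diff(frames, axis=0)    # frame-to-frame enhancement increments
reverse = frames[-1] - frames[:-1]    # late frame minus each earlier frame

print("serial:  ", serial[:, 2, 2])   # cumulative enhancement
print("stepwise:", stepwise[:, 2, 2]) # enhancement rate per interval
print("reverse: ", reverse[:, 2, 2])  # remaining enhancement (washout view)
```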

  4. Controlling superconductivity in La 2-xSr xCuO 4+δ by ozone and vacuum annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Xiang; Bozovic, Ivan

    In this study we performed a series of ozone and vacuum annealing experiments on epitaxial La 2-xSr xCuO 4+δ thin films. The transition temperature after each annealing step has been measured by the mutual inductance technique. The relationship between the effective doping and the vacuum annealing time has been studied. Short-time ozone annealing at 470 °C oxidizes an underdoped film all the way to the overdoped regime. The subsequent vacuum annealing at 350 °C to 380 °C slowly brings the sample across the optimal doping point back to the undoped, non-superconducting state. Several ozone and vacuum annealing cycles have been done on the same sample and the effects were found to be repeatable and reversible. Vacuum annealing of ozone-loaded LSCO films is a very controllable process, allowing one to tune the doping level of LSCO in small steps across the superconducting dome, which can be used for fundamental physics studies.

  5. A LEAN approach toward automated analysis and data processing of polymers using proton NMR spectroscopy.

    PubMed

    de Brouwer, Hans; Stegeman, Gerrit

    2011-02-01

    To maximize utilization of expensive laboratory instruments and to make the most effective use of skilled human resources, the entire chain of data processing, calculation, and reporting needed to transform raw NMR data into meaningful results was automated. LEAN process-improvement tools were used to identify non-value-added steps in the existing process. These steps were eliminated using an in-house-developed software package, which allowed us to meet the key requirement of improving quality and reliability compared with the existing process while freeing up valuable human resources and increasing productivity. Reliability and quality were improved by the consistent data treatment performed by the software and the uniform administration of results. Automating a single NMR spectrometer led to a reduction in operator time of 35%, a doubling of the annual sample throughput from 1400 to 2800, and a reduction of the turnaround time from 6 days to less than 2. Copyright © 2011 Society for Laboratory Automation and Screening. Published by Elsevier Inc. All rights reserved.

  6. Cenozoic climate changes: A review based on time series analysis of marine benthic δ18O records

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bickert, Torsten; Lear, Caroline H.; Lohmann, Gerrit

    2014-09-01

    The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up, predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: Paleocene-Eocene Thermal Maximum, Eocene-Oligocene Transition, Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: the Paleocene-Eocene warming trend, the two-step, long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture, besides global ice volume, bottom water temperatures at low (less than 30°) and high latitudes. This review concludes with an identification of future directions for data collection, statistical method development, and climate modeling.

  7. The influence of temperature on brittle creep in sandstones

    NASA Astrophysics Data System (ADS)

    Heap, M. J.; Baud, P.; Meredith, P. G.; Vinciguerra, S.

    2009-04-01

    The characterization of time-dependent brittle rock deformation is fundamental to understanding the long-term evolution and dynamics of the Earth's upper crust. The presence of water promotes time-dependent deformation through environment-dependent stress corrosion cracking that allows rocks to deform at stresses far below their short-term failure stress. Here we report results from an experimental study of the influence of elevated temperature on time-dependent brittle creep in water-saturated samples of Darley Dale (initial porosity of 13%), Bentheim (23%) and Crab Orchard (4%) sandstones. We present results from both conventional creep experiments (or 'static fatigue' tests) and stress-stepping creep experiments performed at 20°C and 75°C under an effective confining pressure of 30 MPa (50 MPa confining pressure and 20 MPa pore fluid pressure). The evolution of crack damage was monitored throughout each experiment by measuring three proxies for damage: (1) axial strain, (2) pore volume change, and (3) the output of acoustic emission (AE) energy. Conventional creep experiments demonstrated that, for any given applied differential stress, the time-to-failure is dramatically reduced and the creep strain rate is significantly increased by the application of an elevated temperature. Stress-stepping creep experiments allowed us to investigate the influence of temperature in detail. Results from these experiments show that the creep strain rate for Darley Dale and Bentheim sandstones increases by approximately 3 orders of magnitude, and for Crab Orchard sandstone by approximately 2 orders of magnitude, as temperature is increased from 20°C to 75°C at a fixed effective differential stress. We discuss these results in the context of the different mineralogical and microstructural properties of the three rock types and the micro-mechanical and chemical processes operating on them.

  8. Multilevel resistive information storage and retrieval

    DOEpatents

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.

  9. Data analysis of the benefits of an electronic registry of information in a neonatal intensive care unit in Greece.

    PubMed

    Skouroliakou, Maria; Soloupis, George; Gounaris, Antonis; Charitou, Antonia; Papasarantopoulos, Petros; Markantonis, Sophia L; Golna, Christina; Souliotis, Kyriakos

    2008-07-28

    This study assesses the results of implementation of a software program that allows for input of admission/discharge summary data (including cost) in a neonatal intensive care unit (NICU) in Greece, based on the establishment of a baseline statistical database for infants treated in a NICU and the statistical analysis of epidemiological and resource utilization data thus collected. A software tool was designed, developed, and implemented between April 2004 and March 2005 in the NICU of the LITO private maternity hospital in Athens, Greece, to allow for the first time for step-by-step collection and management of summary treatment data. Data collected over this period were subsequently analyzed using defined indicators as a basis to extract results related to treatment options, treatment duration, and relative resource utilization. Data for 499 babies were entered in the tool and processed. Information on medical costs (e.g., mean total cost +/- SD of treatment was euro310.44 +/- 249.17 and euro6704.27 +/- 4079.53 for babies weighing more than 2500 g and 1000-1500 g respectively), incidence of complications or disease (e.g., 4.3 percent and 14.3 percent of study babies weighing 1,000 to 1,500 g suffered from cerebral bleeding [grade I] and bronchopulmonary dysplasia, respectively, while overall 6.0 percent had microbial infections), and medical statistics (e.g., perinatal mortality was 6.8 percent) was obtained in a quick and robust manner. The software tool allowed for collection and analysis of data traditionally maintained in paper medical records in the NICU with greater ease and accuracy. Data codification and analysis led to significant findings at the epidemiological, medical resource utilization, and respective hospital cost levels that allowed comparisons with literature findings for the first time in Greece. The tool thus contributed to a clearer understanding of treatment practices in the NICU and set the baseline for the assessment of the impact of future interventions at the policy or hospital level.

  10. Persistent-random-walk approach to anomalous transport of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Sadjadi, Zeinab; Shaebani, M. Reza; Rieger, Heiko; Santen, Ludger

    2015-06-01

    The motion of self-propelled particles is modeled as a persistent random walk. An analytical framework is developed that allows the derivation of exact expressions for the time evolution of arbitrary moments of the persistent walk's displacement. It is shown that the interplay of step length and turning angle distributions and self-propulsion produces various signs of anomalous diffusion at short time scales and asymptotically a normal diffusion behavior with a broad range of diffusion coefficients. The crossover from the anomalous short-time behavior to the asymptotic diffusion regime is studied and the parameter dependencies of the crossover time are discussed. Higher moments of the displacement distribution are calculated and analytical expressions for the time evolution of the skewness and the kurtosis of the distribution are presented.
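    A minimal simulation sketch of this setting, with assumed example distributions for the step lengths (exponential) and turning angles (von Mises), not the paper's analytical framework, illustrates the crossover from an anomalous short-time regime to asymptotically normal diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

def persistent_walk_msd(n_walkers=2000, n_steps=500, kappa=4.0):
    """Mean squared displacement of 2D persistent walkers; kappa sets the
    persistence of the von Mises turning angles (kappa=0: simple walk)."""
    theta = rng.uniform(-np.pi, np.pi, n_walkers)      # initial headings
    pos = np.zeros((n_walkers, 2))
    msd = np.empty(n_steps)
    for t in range(n_steps):
        theta += rng.vonmises(0.0, kappa, n_walkers)   # correlated turning
        step = rng.exponential(1.0, n_walkers)         # step-length draw
        pos[:, 0] += step * np.cos(theta)
        pos[:, 1] += step * np.sin(theta)
        msd[t] = np.mean(np.sum(pos ** 2, axis=1))
    return msd

msd = persistent_walk_msd()
t = np.arange(1, msd.size + 1)
alpha = np.gradient(np.log(msd), np.log(t))   # effective exponent in MSD ~ t^alpha
print(f"early exponent ~ {alpha[2]:.2f} (superdiffusive), "
      f"late exponent ~ {alpha[-10]:.2f} (normal)")
```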

  11. Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.

    PubMed

    Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B

    2017-01-01

    Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
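    The decaying correlation structure described above can be written down directly. In the sketch below (an illustrative parameterisation, not the authors' exact model), two individuals in the same cluster correlate at rho within a period and at rho * r**|j - k| across periods j and k; r = 1 recovers the constant between-period assumption.

```python
import numpy as np

def cluster_corr(n_periods, m, rho=0.05, r=0.8):
    """Within-cluster correlation matrix for a repeated cross-sectional
    design with m individuals per period and geometric between-period decay:
    1 on the diagonal, rho * r**|j - k| between distinct individuals."""
    period = np.repeat(np.arange(n_periods), m)
    R = rho * r ** np.abs(period[:, None] - period[None, :])
    np.fill_diagonal(R, 1.0)
    return R

R = cluster_corr(n_periods=4, m=10, rho=0.05, r=0.8)
# Design effect for the overall cluster mean (variance inflation relative
# to independent observations), one ingredient in sample-size planning.
print(f"design effect: {R.sum() / R.shape[0]:.2f}")
```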

  12. New type of transformerless high efficiency inverter

    NASA Astrophysics Data System (ADS)

    Naaijer, G. J.

    Inverter architectures are presented which allow economical dc/ac conversion for solar cell array and battery power use in domestic and industrial applications. The efficiencies of currently available inverters are examined and compared with a new 2.2 kW transformerless stepped-wave inverter. The inverter has low no-load losses, amounting to 200 Wh/24 hr, and features voltage steps occurring 15-30 times per sine-wave period. An example is provided for an array/battery/inverter assembly in which the inverter control electronics activate or disconnect the battery subassemblies based on the total number of activated subassemblies in relation to a reference sine wave, and on the need to average the battery subassembly discharge rates. A total harmonic distortion of 6 percent was observed, and the system is noted to be usable as a battery charger.
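    A small numerical sketch (assumed waveform, not the article's circuit) shows why a few tens of voltage steps per period yield distortion of this order: hold a sine at discrete levels and read the THD off the FFT.

```python
import numpy as np

def stepped_sine(n_steps=24, samples=4800):
    """One period of a sine approximated by n_steps held voltage levels."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    level_index = (t * n_steps).astype(int)
    level_times = (level_index + 0.5) / n_steps     # sample at step midpoints
    return np.sin(2 * np.pi * level_times)

def thd(signal):
    """Total harmonic distortion: harmonic amplitude relative to fundamental."""
    spec = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return np.sqrt(np.sum(spec[2:] ** 2)) / spec[1]   # bin 1 = fundamental

print(f"THD ~ {100 * thd(stepped_sine(24)):.1f}%")   # ~7% at 24 steps/period,
                                                     # the order of the reported 6%
```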

  13. Towards potential nanoparticle contrast agents: Synthesis of new functionalized PEG bisphosphonates.

    PubMed

    Kachbi-Khelfallah, Souad; Monteil, Maelle; Cortes-Clerget, Margery; Migianu-Griffoni, Evelyne; Pirat, Jean-Luc; Gager, Olivier; Deschamp, Julia; Lecouvey, Marc

    2016-01-01

    The use of nanotechnologies for biomedical applications has developed considerably in recent years. To allow effective targeting for biomedical imaging applications, the adsorption of plasma proteins on the surface of nanoparticles must be prevented, to reduce hepatic capture and increase the plasma circulation time. In biological media, metal oxide nanoparticles are not stable and must be coated with biocompatible organic ligands. The use of phosphonate ligands to modify the nanoparticle surface has drawn considerable attention in recent years for the design of highly functional hybrid materials. Here, we report a methodology to synthesize bisphosphonates bearing functionalized PEG side chains of different lengths. The key step is a procedure developed in our laboratory to introduce the bisphosphonate from an acyl chloride and tris(trimethylsilyl)phosphite in one step.

  14. Rubbing time and bonding performance of one-step adhesives to primary enamel and dentin

    PubMed Central

    Botelho, Maria Paula Jacobucci; Isolan, Cristina Pereira; Schwantz, Júlia Kaster; Lopes, Murilo Baena; de Moraes, Rafael Ratto

    2017-01-01

    Abstract Objectives: This study investigated whether increasing the concentration of acidic monomers in one-step adhesives would allow reducing their application time without interfering with the bonding ability to primary enamel and dentin. Material and methods: Experimental one-step self-etch adhesives were formulated with 5 wt% (AD5), 20 wt% (AD20), or 35 wt% (AD35) acidic monomer. The adhesives were applied using a rubbing motion for 5, 10, or 20 s. Bond strengths to primary enamel and dentin were tested under shear stress. A commercial etch-and-rinse adhesive (Single Bond 2; 3M ESPE) served as reference. Scanning electron microscopy was used to observe the morphology of the bonded interfaces. Data were analysed with the significance level set at p<0.05. Results: In enamel, AD35 had higher bond strength when rubbed for at least 10 s, while application for 5 s generated lower bond strength. In dentin, increased acidic monomer improved bonding only for the 20 s rubbing time. The etch-and-rinse adhesive yielded higher bond strength to enamel and similar bonding to dentin as compared with the self-etch adhesives. The adhesive layer was thicker and more irregular for the etch-and-rinse material, with no appreciable differences among the self-etch systems. Conclusion: Overall, increasing the acidic monomer concentration only led to an increase in bond strength to enamel when the rubbing time was at least 10 s. In dentin, despite the increase in bond strength with longer rubbing times, the results favoured the experimental adhesives compared to the conventional adhesive. Reduced rubbing time of self-etch adhesives should be avoided in the clinical setting. PMID:29069150

  15. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method; however, when a large number of Lanczos vectors is computed, input/output time grows and increases the overall computer time. Several liftoff release mechanisms that can be adapted to the proposed method are discussed.

  16. Discrete-time Quantum Walks via Interchange Framework and Memory in Quantum Evolution

    NASA Astrophysics Data System (ADS)

    Dimcovic, Zlatko

    One of the newer and rapidly developing approaches in quantum computing is based on "quantum walks," which are quantum processes on discrete space that evolve in either discrete or continuous time and are characterized by mixing of components at each step. The idea emerged in analogy with classical random walks and stochastic techniques, but these unitary processes are very different even as they have intriguing similarities. This thesis is concerned with the study of discrete-time quantum walks. The original motivation from classical Markov chains required that discrete-time quantum walks add an auxiliary Hilbert space, unrelated to the one in which the system evolves, so that components can be mixed in that space and the evolution steps taken accordingly (based on the state in that space). This additional, "coin," space is very often an internal degree of freedom like spin. We have introduced a general framework for the construction of discrete-time quantum walks in close analogy with classical random walks with memory, which is rather different from the standard "coin" approach. In this method there is no need to bring in a different degree of freedom, while the full state of the system is still described in a direct product of spaces (of states). The state can be thought of as an arrow pointing from the previous to the current site in the evolution, representing one-step memory. The next step is then controlled by a single local operator assigned to each site in the space, acting much like a scattering operator. This allows us to probe and solve some problems of interest that have not had successful approaches with "coined" walks. We construct and solve a walk on the binary tree, a structure of great interest that, until our result, lacked an explicit discrete-time quantum walk owing to the difficulty of managing the coin spaces required by the standard approach. Beyond algorithmic interests, the memory-based model allows one to explore the effects of history on quantum evolution and the subtle emergence of classical features as "memory" is explicitly kept for additional steps. We construct and solve a walk with an additional correlation step, finding interesting new features. On the other hand, the fact that the evolution is driven entirely by a local operator, not involving additional spaces, enables us to choose the Fourier transform as an operator completely controlling the evolution. This in turn allows us to combine the quantum walk approach with Fourier-transform-based techniques, something decidedly not possible in classical computational physics. We are developing a formalism for building networks manageable by walks constructed with this framework, based on the surprising efficiency of our framework in discovering the internals of a simple network that we have solved so far. Finally, in line with our expectation that the field of quantum walks can take cues from the rich history of development of classical stochastic techniques, we establish starting points for work on non-Abelian quantum walks, with a particular quantum-walk analog of classical "card shuffling," the walk on the permutation group. In summary, this thesis presents a new framework for the construction of discrete-time quantum walks, employing and exploring the memory-bearing nature of unitary evolution. It is applied to fully solving two problems: a walk on the binary tree, and exploration of the quantum-to-classical transition with increased correlation length (history). It is then used for simple network discovery, and to lay the groundwork for the analysis of complex networks, based on the combined power of efficient exploration of the Hilbert space (as a walk mixing components) and the Fourier transform (since we can choose this as the evolution operator). We hope to establish this as a general technique, as its power would be unmatched by any approach available in classical computing. We also looked at the promising and challenging prospect of walks on non-Abelian structures by setting up the problem of "quantum card shuffling," a quantum walk on the permutation group. Relations to other work are discussed throughout, along with an examination of the context of our work and overviews of our current and future work.
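    For contrast with the memory-based framework summarized above, the sketch below implements the standard coined discrete-time quantum walk on a line with a Hadamard coin, the construction whose auxiliary coin space the thesis seeks to avoid (illustrative code, not the thesis's framework):

```python
import numpy as np

def hadamard_walk(n_steps=100):
    """Position distribution of a coined quantum walk on the integer line.
    psi[x, c]: amplitude at position x with coin state c (0: left, 1: right)."""
    n_pos = 2 * n_steps + 1
    psi = np.zeros((n_pos, 2), dtype=complex)
    psi[n_steps, :] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric start
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard coin
    for _ in range(n_steps):
        psi = psi @ H.T                    # mix the coin (internal) space
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]       # coin 0 amplitudes move left
        shifted[1:, 1] = psi[:-1, 1]       # coin 1 amplitudes move right
        psi = shifted
    return np.sum(np.abs(psi) ** 2, axis=1)

p = hadamard_walk(100)
x = np.arange(-100, 101)
sd = np.sqrt(np.sum(p * x ** 2))
print(f"spread after 100 steps: {sd:.1f} (grows ~ n, not ~ sqrt(n))")
```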

  17. Introduction of blended learning in a master program: Developing an integrative mixed method evaluation framework.

    PubMed

    Chmiel, Aviva S; Shaha, Maya; Schneider, Daniel K

    2017-01-01

    The aim of this research is to develop a comprehensive evaluation framework involving all actors in a higher education blended learning (BL) program. BL evaluation usually focuses either on students, faculty, or technological or institutional aspects. Currently, no validated comprehensive monitoring tool exists that can support the introduction and further implementation of BL in a higher education context. Starting from established evaluation principles and standards, the concepts to be evaluated were first identified and grouped. In a second step, related BL evaluation tools referring to the student, faculty and institutional levels were selected. This allowed setting up and implementing an evaluation framework to monitor the introduction of BL during two successive runs of the program. The results of the evaluation allowed documenting the strengths and weaknesses of the BL format in a comprehensive way, involving all actors. It has led to improvements at the program, faculty and course levels. The evaluation process and the reporting of the results proved to be demanding in time and personnel resources. The evaluation framework allows measuring the most significant dimensions influencing the success of a BL implementation at the program level. However, this comprehensive evaluation is resource intensive. Further steps will be to refine the framework towards a sustainable and transferable BL monitoring tool that finds a balance between comprehensiveness and efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Treatment of severe radial club hand by distraction using an articulated mini-rail fixator and transfixing pins.

    PubMed

    Romana, C; Ciais, G; Fitoussi, F

    2015-06-01

    Treatment of severe radial club hand is difficult. Several authors have emphasized the importance of preliminary soft-tissue distraction before centralization. Treatment of severe radial club hand with an articulated mini-rail allowing prior soft-tissue distraction improves results. Thirteen patients were treated sequentially, with an initial step of distraction and a second step of centralization. The first step consisted of fitting two mini-fixators, one in the concavity and the other in the convexity of the deformity. Four transfixing wires through the ulna and metacarpal bone connected the two fixators. After this preliminary distraction, the fixator was removed and a centralization wire was introduced percutaneously, with ulnar osteotomy if necessary. Sagittal and coronal correction was measured as the angle between forearm and hand. Mean age at treatment was 37.5 months (range, 9-120 months). Mean distraction time was 53.2 days (26-90 days). Ulnar osteotomy was required in 8 cases (61%). There were no major complications requiring interruption of distraction. Sagittal and coronal correction after centralization reduced mean residual forearm/hand angulation to <12°. Soft-tissue distraction in the concavity ahead of centralization is essential to good correction, avoiding extensive soft-tissue release and hyperpressure on the distal ulnar growth plate. There have been several studies of distraction; the present technique, associating two mini-fixators connected by threaded K-wires, provided sufficient distraction in the concavity of the deformity to allow satisfactory correction in all cases. Subsequent complications (breakage or displacement of the centralization wires) testify to the complexity of long-term management. The present study confirms the value of a preliminary soft-tissue distraction step in treating severe radial club hand. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  19. GeoMapApp Learning Activities: Enabling the democratisation of geoscience learning

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Kluge, S.

    2011-12-01

    GeoMapApp Learning Activities (http://serc.carleton.edu/geomapapp) are step-by-step guided inquiry geoscience education activities that enable students to dictate the pace of learning. They can be used in the classroom or out of class, and their guided nature means that the requirement for teacher intervention is minimised, which allows students to spend increased time analysing and understanding a broad range of geoscience data, content and concepts. Based upon GeoMapApp (http://www.geomapapp.org), a free, easy-to-use map-based data exploration and visualisation tool, each activity furnishes the educator with an efficient package of downloadable documents. This includes step-by-step student instructions and an answer sheet; a teacher's edition annotated worksheet containing teaching tips, additional content and suggestions for further work; quizzes for use before and after the activity to assess learning; and a multimedia tutorial. The activities can be used by anyone at any time in any place with an internet connection. In essence, GeoMapApp Learning Activities provide students with cutting-edge technology, research-quality geoscience data sets, and inquiry-based learning in a virtual lab-like environment. Examples of activities created so far are student calculation and analysis of the rate of seafloor spreading, and present-day evidence on the seafloor for huge ancient landslides around the Hawaiian islands. The activities are designed primarily for students at the community college, high school and introductory undergraduate levels, exposing students to content and concepts typically found in those settings.

  20. jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems

    NASA Astrophysics Data System (ADS)

    Belliveau, P. T.; Haber, E.

    2016-12-01

    Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. These examples both show that it is possible to invert large scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy to use programming language.

  1. Relating interesting quantitative time series patterns with text events and text features

    NASA Astrophysics Data System (ADS)

    Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.

    2013-12-01

    In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine analysis steps for frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics, we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features such as the degree of sentence nesting, noun phrase complexity, vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence-visualization and analysis functionality. We provide two case studies showing the effectiveness of our combined quantitative and textual analysis workflow. The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber-physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
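    As a concrete sketch of the first step, one simple interval-extraction heuristic, a rolling z-score on one-step changes with merging of adjacent hits, is shown below; this threshold rule is an illustrative assumption, not the authors' method.

```python
import numpy as np

def interesting_intervals(series, window=20, z_thresh=2.5):
    """Return (start, end) index pairs where the one-step change of the
    series is anomalous relative to the preceding rolling window."""
    changes = np.diff(series)
    flagged = []
    for i in range(window, len(changes)):
        w = changes[i - window:i]
        sd = w.std()
        if sd > 0 and abs(changes[i] - w.mean()) / sd > z_thresh:
            flagged.append(i)
    intervals, start = [], None     # merge consecutive hits into intervals
    for a, b in zip(flagged, flagged[1:] + [None]):
        start = a if start is None else start
        if b is None or b > a + 1:
            intervals.append((start, a + 1))
            start = None
    return intervals
```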

  2. Software Comparison for Renewable Energy Deployment in a Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values. These packages allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with the software packages.

  3. Programmable bioelectronics in a stimuli-encoded 3D graphene interface.

    PubMed

    Parlak, Onur; Beyazit, Selim; Tse-Sum-Bui, Bernadette; Haupt, Karsten; Turner, Anthony P F; Tiwari, Ashutosh

    2016-05-21

    The ability to program and mimic the dynamic microenvironment of living organisms is a crucial step towards the engineering of advanced bioelectronics. Here, we report for the first time a design for programmable bioelectronics, with 'built-in' switchable and tunable bio-catalytic performance that responds simultaneously to appropriate stimuli. The designed bio-electrodes comprise light- and temperature-responsive compartments, which allow the building of Boolean logic gates (i.e., "OR" and "AND") based on enzymatic communications to deliver logic operations.

  4. Safe Maneuvering Envelope Estimation Based on a Physical Approach

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.

    2013-01-01

    This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others in that it is physically inspired. This more transparent approach allows the data to be interpreted at each step, and it is assumed that these physical models based upon flight dynamics theory will therefore facilitate certification for future real-life applications.

  5. Experimental Validation of a Time-Accurate Finite Element Model for Coupled Multibody Dynamics and Liquid Sloshing

    DTIC Science & Technology

    2007-04-16

    ...velocity of the fluid mesh, P is the relative pressure, x_r is the position vector, τ is the deviatoric stress tensor, D is the rate of deformation... corresponds to a slip factor of zero. The slip factor determines how much of the fluid and structure forces are mutually exchanged. Equations 22 and 23... updated from last to first. viii. Average the fluid pressure (this step eliminates the pressure checker-boarding effect and allows use of equal...

  6. Analysis of Partitioned Methods for the Biot System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bukac, Martina; Layton, William; Moraiti, Marina

    2015-02-18

    In this work, we present a comprehensive study of several partitioned methods for the coupling of flow and mechanics. We derive energy estimates for each method for the fully-discrete problem. We write the obtained stability conditions in terms of a key control parameter defined as the ratio of the coupling strength and the speed of propagation. Depending on the parameters in the problem, we indicate the choice of partitioned method that allows the largest time step. (C) 2015 Wiley Periodicals, Inc.

  7. Dynamic Flight Maneuvering Using Virtual Control Surfaces Generated by Trapped Vorticity

    DTIC Science & Technology

    2010-12-01

    ...of a modified Dragon Eye UAV. These tests illustrated the possibility of controlled flight using open-loop flow control actuators. Future research... [Figure II-1: step command tracking in plunge, showing the ideal reference model response against experimental results; axes: z (cm) and deflection (deg) versus time (s).] The experimental results were obtained with the ball screws locked in position so that the wing model was only allowed to pitch...

  8. Noninferiority, randomized, controlled trial comparing embryo development using media developed for sequential or undisturbed culture in a time-lapse setup.

    PubMed

    Hardarson, Thorir; Bungum, Mona; Conaghan, Joe; Meintjes, Marius; Chantilis, Samuel J; Molnar, Laszlo; Gunnarsson, Kristina; Wikland, Matts

    2015-12-01

    To study whether a culture medium that allows undisturbed culture supports human embryo development to the blastocyst stage equivalently to well-established sequential media. Randomized, double-blinded sibling trial. Independent in vitro fertilization (IVF) clinics. One hundred twenty-eight patients, with 1,356 zygotes randomized into two study arms. Embryos were randomly allocated into two study arms to compare embryo development on a time-lapse system using a single-step medium or sequential media. Percentage of good-quality blastocysts on day 5. The percentage of day 5 good-quality blastocysts was 21.1% (standard deviation [SD] ± 21.6%) and 22.2% (SD ± 22.1%) in the single-step time-lapse medium (G-TL) and the sequential media (G-1/G-2) groups, respectively. The mean difference (-1.2; 95% CI, -6.0 to 3.6) between the two media systems for the primary end point was less than the noninferiority margin of -8%. There was a statistically significantly lower percentage of good-quality embryos on day 3 in the G-TL group [50.7% (SD ± 30.6%) vs. 60.8% (SD ± 30.7%)]. Four of the 11 measured morphokinetic parameters were statistically significantly different for the two media used. The mean ammonium concentration in the media at the end of the culture period was statistically significantly lower in the G-TL group as compared with the G-2 group. We have shown that a single-step culture medium supports blastocyst development equivalently to established sequential media. The ammonium concentrations were lower in the single-step medium, and the measured morphokinetic parameters were modified somewhat. NCT01939626. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Reliability and validity of a smartphone-based assessment of gait parameters across walking speed and smartphone locations: Body, bag, belt, hand, and pocket.

    PubMed

    Silsupadol, Patima; Teja, Kunlanan; Lugade, Vipul

    2017-10-01

    The assessment of spatiotemporal gait parameters is a useful clinical indicator of health status. Unfortunately, most assessment tools require controlled laboratory environments, which can be expensive and time consuming. As smartphones with embedded sensors are becoming ubiquitous, this technology can provide a cost-effective, easily deployable method for assessing gait. Therefore, the purpose of this study was to assess the reliability and validity of a smartphone-based accelerometer in quantifying spatiotemporal gait parameters when attached to the body or carried in a bag, on a belt, in a hand, or in a pocket. Thirty-four healthy adults were asked to walk at self-selected comfortable, slow, and fast speeds over a 10-m walkway while carrying a smartphone. Step length, step time, gait velocity, and cadence were computed from the smartphone-based accelerometers and validated against GAITRite. Across all walking speeds, smartphone data had excellent reliability (ICC(2,1) ≥ 0.90) for the body and belt locations, with the bag, hand, and pocket locations having good to excellent reliability (ICC(2,1) ≥ 0.69). Correlations between the smartphone-based and GAITRite-based systems were very high for the body (r = 0.89, 0.98, 0.96, and 0.87 for step length, step time, gait velocity, and cadence, respectively). Similarly, Bland-Altman analysis demonstrated that the bias approached zero, particularly in the body, bag, and belt conditions under comfortable and fast speeds. Thus, smartphone-based assessments of gait are most valid when the phone is placed on the body, in a bag, or on a belt. The use of a smartphone to assess gait can provide relevant data to clinicians without encumbering the user and allows for data collection in the free-living environment. Copyright © 2017 Elsevier B.V. All rights reserved.
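    An illustrative sketch of how such parameters can be extracted (an assumed peak-detection approach, not the authors' algorithm): detect heel strikes as prominent peaks in the vertical acceleration and derive step time and cadence.

```python
import numpy as np
from scipy.signal import find_peaks

def gait_parameters(acc_vertical, fs=100.0):
    """acc_vertical: vertical acceleration (m/s^2) sampled at fs (Hz).
    Returns (mean step time in s, cadence in steps/min)."""
    sig = acc_vertical - np.mean(acc_vertical)         # remove gravity offset
    # heel strikes as prominent peaks, with a 0.3 s refractory period
    peaks, _ = find_peaks(sig, prominence=1.0, distance=int(0.3 * fs))
    step_times = np.diff(peaks) / fs
    return np.mean(step_times), 60.0 / np.mean(step_times)

# synthetic demo: ~2 steps/s walking signal plus sensor noise
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
acc = 9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)
print(gait_parameters(acc, fs))    # ~0.5 s step time, ~120 steps/min
```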

  10. Modeling myosin VI stepping dynamics

    NASA Astrophysics Data System (ADS)

    Tehver, Riina

    Myosin VI is a molecular motor that transports intracellular cargo as well as acting as an anchor. The motor has been measured to have an unusually large step size variation, and it has been reported to make both long forward steps and short inchworm-like forward steps, as well as to step backwards. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and to investigate the evolutionary advantages of the large step size variation.

  11. Synthetic range profiling in ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Pawel; Lapiński, Marian; Silko, Dariusz

    2009-06-01

    The paper describes a stepped frequency continuous wave (SFCW) ground penetrating radar (GPR), in which the signal frequency is discretely increased in N linear steps, each separated by a fixed Δf increment from the previous one. SFCW radar determines distance from the phase shift in the reflected signal by constructing a synthetic range profile in the spatial (time) domain using the IFFT. Each quadrature sample is termed a range bin, as it represents the signal from a range window of length cτ/2, where τ is the duration of a single frequency segment. The IFFT of those data samples resolves the range bin into fine range bins of width c/(2NΔf), thus creating the synthetic range profile in a GPR - a time-domain approximation of the frequency response of the combination of the medium through which the electromagnetic waves propagate (soil) and any targets or dielectric interfaces (water, air, other types of soil) present in the beam width of the radar. In the paper, practical measurements made with a monostatic SFCW GPR are presented. Due to the complex nature of the required signal source, an Agilent E5062A VNA was used as the signal generator, allowing the number of frequency steps N to go as high as 1601, with the generated frequency ranging from 300 kHz to 3 GHz.
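    A minimal sketch of the synthetic range profile construction described above (illustrative parameters, free-space propagation, a single point target): each frequency step acquires a two-way phase shift, and the IFFT of the N samples peaks in the fine range bin containing the target.

```python
import numpy as np

c = 3e8                                       # propagation speed (free space)
N = 1601
f0, f1 = 300e3, 3e9
df = (f1 - f0) / (N - 1)                      # fixed frequency increment
f = f0 + df * np.arange(N)

R_target = 4.0                                # one-way target range (m)
samples = np.exp(-1j * 4 * np.pi * f * R_target / c)   # two-way phase per step

profile = np.abs(np.fft.ifft(samples))        # synthetic range profile
bin_width = c / (2 * N * df)                  # fine range-bin width
print(f"fine bin width = {100 * bin_width:.1f} cm, "
      f"peak at {np.argmax(profile) * bin_width:.2f} m")
```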

  12. A robot and control algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait training following neurologic injury.

    PubMed

    Aoyagi, Daisuke; Ichinose, Wade E; Harkema, Susan J; Reinkensmeyer, David J; Bobrow, James E

    2007-09-01

    Locomotor training using body weight support on a treadmill and manual assistance is a promising rehabilitation technique following neurological injuries, such as spinal cord injury (SCI) and stroke. Previous robots that automate this technique impose constraints on naturalistic walking due to their kinematic structure, and are typically operated in a stiff mode, limiting the ability of the patient or human trainer to influence the stepping pattern. We developed a pneumatic gait training robot that allows for a full range of natural motion of the legs and pelvis during treadmill walking, and provides compliant assistance. However, we observed an unexpected consequence of the device's compliance: unimpaired and SCI individuals invariably began walking out-of-phase with the device. Thus, the robot perturbed rather than assisted stepping. To address this problem, we developed a novel algorithm that synchronizes the device in real-time to the actual motion of the individual by sensing the state error and adjusting the replay timing to reduce this error. This paper describes data from experiments with individuals with SCI that demonstrate the effectiveness of the synchronization algorithm, and the potential of the device for relieving the trainers of strenuous work while maintaining naturalistic stepping.
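    The synchronization idea, continuously nudging the replay speed in proportion to the wrapped phase error between subject and device, can be sketched as follows (a simple proportional law assumed for illustration; the published controller is more involved):

```python
import numpy as np

def synchronized_replay(subject_phase, base_rate=2 * np.pi, gain=2.0, dt=0.01):
    """subject_phase: measured gait phase (rad) sampled every dt seconds.
    Returns the replay phase of an assistive trajectory whose nominal
    rate is base_rate (rad/s), sped up or slowed to track the subject."""
    replay = np.zeros_like(subject_phase)
    for i in range(1, len(subject_phase)):
        # phase error wrapped to (-pi, pi]
        err = np.angle(np.exp(1j * (subject_phase[i - 1] - replay[i - 1])))
        replay[i] = replay[i - 1] + (base_rate + gain * err) * dt
    return replay

t = np.arange(0.0, 10.0, 0.01)
subject = 2 * np.pi * 1.05 * t                 # subject 5% faster than nominal
replay = synchronized_replay(subject)
err = np.angle(np.exp(1j * (subject[-1] - replay[-1])))
print(f"steady-state phase error: {err:.2f} rad")   # ~rate offset / gain
```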

  13. Automated sample preparation using membrane microtiter extraction for bioanalytical mass spectrometry.

    PubMed

    Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H

    1997-01-01

    The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision, a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than manual solvent extraction or single-cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography/mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.

  14. Microwave pyrolysis using self-generated pyrolysis gas as activating agent: An innovative single-step approach to convert waste palm shell into activated carbon

    NASA Astrophysics Data System (ADS)

    Yek, Peter Nai Yuh; Keey Liew, Rock; Shahril Osman, Mohammad; Chung Wong, Chee; Lam, Su Shiung

    2017-11-01

    Waste palm shell (WPS) is a biomass residue largely available from palm oil industries. An innovative microwave pyrolysis method was developed to produce biochar from WPS while the pyrolysis gas generated as another product is simultaneously used as the activating agent to transform the biochar into waste palm shell activated carbon (WPSAC), thus allowing carbonization and activation to be performed simultaneously in a single-step approach. The pyrolysis method was investigated over a range of process temperatures and feedstock amounts, with emphasis on the yield and composition of the WPSAC obtained. The WPSAC was tested as a dye adsorbent in removing methylene blue. This pyrolysis approach provided a fast heating rate (37.5 °C/min) and short process time (20 min) in transforming WPS into WPSAC, recording a product yield of 40 wt%. The WPSAC showed a high BET surface area (≥1200 m²/g), low ash content (<5 wt%), and high pore volume (≥0.54 cm³/g), thus recording a high adsorption efficiency of 440 mg of dye per gram. The desirable process features (fast heating rate, short process time) and the recovery of WPSAC suggest the exceptional promise of the single-step microwave pyrolysis approach for producing high-grade WPSAC from WPS.

  15. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard

    A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8-group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code, which completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.
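    The flavour of a power-convergence-based adaptive scheme can be sketched generically (tolerances and growth factors below are illustrative; the actual PHISICS/RELAP5-3D methods are documented in the report):

```python
def adapt_dt(dt, p_old, p_new, tol=1e-3, grow=1.5, shrink=0.5,
             dt_min=1e-4, dt_max=10.0):
    """Return the next neutronics time step based on the relative change
    in fission power over the step just completed."""
    rel_change = abs(p_new - p_old) / max(abs(p_old), 1e-30)
    if rel_change < 0.1 * tol:     # solution barely changing: take bigger steps
        dt *= grow
    elif rel_change > tol:         # changing too fast: back off
        dt *= shrink
    return min(max(dt, dt_min), dt_max)
```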

  16. 2D-DIGE in Proteomics.

    PubMed

    Pasquali, Matias; Serchi, Tommaso; Planchon, Sebastien; Renaut, Jenny

    2017-01-01

    The two-dimensional difference gel electrophoresis method is a valuable approach for proteomics. The method, using cyanine fluorescent dyes, allows the co-migration of multiple protein samples in the same gel and their simultaneous detection, thus reducing experimental and analytical time. 2D-DIGE, compared to traditional post-staining 2D-PAGE protocols (e.g., colloidal Coomassie or silver nitrate), provides faster and more reliable gel matching, limiting the impact of gel-to-gel variation, and also allows a good dynamic range for quantitative comparisons. By the use of internal standards, it is possible to normalize for experimental variations in spot intensities and gel patterns. Here we describe the experimental steps we follow in our routine 2D-DIGE procedure, which we then apply to multiple biological questions.

  17. Method of Simulating Flow-Through Area of a Pressure Regulator

    NASA Technical Reports Server (NTRS)

    Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)

    2011-01-01

    The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
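    A sketch of one update of this rule is given below. The secant-style extrapolation standing in for the patent's "non-linear function" is an assumption for illustration; in use, a network flow solver would supply the downstream pressure at each time step.

```python
def regulator_area_update(A_curr, A_prev, P_curr, P_prev, P_target, k=0.5):
    """One time step of the simulated flow-through area; k is the
    user-defined rate control parameter (0 < k <= 1)."""
    dP = P_curr - P_prev
    if abs(dP) < 1e-12:
        A_proj = A_curr                    # no pressure change: hold the area
    else:
        # assumed secant projection toward the area expected to produce
        # the target downstream pressure
        A_proj = A_prev + (A_curr - A_prev) * (P_target - P_prev) / dP
    return A_curr + k * (A_proj - A_curr)  # rate-controlled relaxation
```

    Iterating this update inside the network solver until the downstream pressure settles at the target mirrors the loop described above.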

  18. Burst switching without guard interval in all-optical software-defined star intra-data center network

    NASA Astrophysics Data System (ADS)

    Ji, Philip N.; Wang, Ting

    2014-02-01

    Optical switching has been introduced in intra-data center networks (DCNs) to increase capacity and to reduce power consumption. Recently we proposed a star MIMO OFDM-based all-optical DCN with burst switching and software-defined networking. Here, we introduce the control procedure for the star DCN in detail for the first time. The timing, signaling, and operation are described for each step to achieve efficient bandwidth resource utilization. Furthermore, the guidelines for the burst assembling period selection that allows burst switching without guard interval are discussed. The star all-optical DCN offers flexible and efficient control for next-generation data center application.

  19. On the efficient and reliable numerical solution of rate-and-state friction problems

    NASA Astrophysics Data System (ADS)

    Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno

    2016-03-01

    We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.
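    A minimal spring-slider sketch of rate-and-state friction with the Dieterich aging law, integrated by an off-the-shelf adaptive-step ODE solver, echoes the adaptive time stepping discussed above (parameter values are illustrative; the paper couples the friction law to a continuum model with nonlinear multigrid):

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate-and-state parameters (velocity weakening: b > a)
a, b, L = 0.010, 0.015, 1e-5       # direct effect, evolution effect, d_c (m)
mu0, v0 = 0.6, 1e-6                # reference friction and slip rate (m/s)
sigma, k = 50e6, 1e9               # normal stress (Pa), spring stiffness (Pa/m)
eta, v_load = 5e6, 1e-6            # radiation damping (Pa s/m), loading rate

def rhs(t, y):
    """State y = [ln(v), theta]; quasi-dynamic balance
    k*(v_load - v) = sigma*(a/v*dv/dt + (b/theta)*dtheta/dt) + eta*dv/dt."""
    v, theta = np.exp(y[0]), y[1]
    dtheta = 1.0 - v * theta / L                       # aging law
    dv = (k * (v_load - v) - sigma * (b / theta) * dtheta) / (eta + sigma * a / v)
    return [dv / v, dtheta]

# start slightly off steady state so the stick-slip instability develops
sol = solve_ivp(rhs, (0.0, 1e5), [np.log(0.9 * v0), L / v0],
                method="LSODA", rtol=1e-8, atol=1e-10)
v = np.exp(sol.y[0])
print(f"peak slip rate {v.max():.1e} m/s over {sol.t.size} adaptive steps")
```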

  20. A cellular automata approach for modeling surface water runoff

    NASA Astrophysics Data System (ADS)

    Jozefik, Zoltan; Nanu Frechen, Tobias; Hinz, Christoph; Schmidt, Heiko

    2015-04-01

    This abstract reports the development and application of a two-dimensional cellular-automata-based model, which couples the dynamics of overland flow, infiltration processes and surface evolution through sediment transport. The natural hill slopes are represented by their topographic elevation and spatially varying soil properties: infiltration rates and surface roughness coefficients. This model allows the simulation of Hortonian overland flow and infiltration during complex rainfall events. An advantage of the cellular automata approach over the kinematic wave equations is that the wet/dry interfaces that often appear in rainfall overland flows can be accurately captured and are not a source of numerical instabilities. An adaptive explicit time-stepping scheme allows rainfall events to be adequately resolved in time, while large time steps are taken during dry periods to provide for simulation run-time efficiency. The time step is constrained by the CFL condition and mass conservation considerations. The spatial discretization is shown to be first-order accurate. For validation purposes, hydrographs for non-infiltrating and infiltrating plates are compared to the kinematic wave analytic solutions and data taken from the literature [1,2]. Results show that our cellular automata model quantitatively reproduces hydrograph patterns. However, recent work has shown that even though the hydrograph is satisfactorily reproduced, the flow field within the plot might be inaccurate [3]. For a more stringent validation, we compare steady-state velocity, water flux, and water depth fields to rainfall simulation experiments conducted in Thies, Senegal [3]. Comparisons show that our model is able to accurately capture these flow properties. Currently, a sediment transport and deposition module is being implemented and tested. [1] M. Rousseau, O. Cerdan, O. Delestre, F. Dupros, F. James, S. Cordier. Overland flow modeling with the Shallow Water Equation using a well balanced numerical scheme: Adding efficiency or just more complexity? 2012. [2] F. R. Fiedler, J. A. Ramirez. A numerical method for simulating discontinuous shallow flow over an infiltrating surface. Int. J. Numer. Meth. Fluids 2000; 32: 219-240. [3] C. Mügler, O. Planchon, J. Patin, S. Weill, N. Silvera, P. Richard, E. Mouche. Comparison of roughness models to simulate overland flow and tracer transport experiments under simulated rainfall at plot scale. Journal of Hydrology 402 (2011) 25-40.
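    The CFL-constrained adaptive stepping and neighbour-routing flavour of such a model can be sketched as follows (a deliberately simplified rule with periodic boundaries; the paper's automaton additionally handles rainfall, infiltration, roughness and sediment transport):

```python
import numpy as np

g = 9.81

def ca_step(h, z, dx=1.0, cfl=0.3):
    """One cellular-automata routing update. h: water depth (m),
    z: bed elevation (m). Returns the new depths and the adaptive dt,
    which would scale rainfall/infiltration source terms (omitted here)."""
    eta = z + h                                        # free-surface elevation
    dt = cfl * dx / np.sqrt(g * max(h.max(), 1e-6))    # CFL-limited time step
    drops, total = [], np.zeros_like(h)
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        d = np.clip(eta - np.roll(eta, shift, axis=axis), 0.0, None)
        drops.append((axis, shift, d))
        total += d
    # move at most half the stored water, split in proportion to the drops;
    # the 0.25 cap damps sloshing between neighbouring cells
    scale = np.where(total > 0,
                     np.minimum(0.5 * h / np.maximum(total, 1e-12), 0.25), 0.0)
    dh = np.zeros_like(h)
    for axis, shift, d in drops:
        q = d * scale                                  # outflow per direction
        dh += np.roll(q, -shift, axis=axis) - q        # receive minus send
    return h + dh, dt

rng = np.random.default_rng(1)
z = np.add.outer(np.linspace(1.0, 0.0, 50), np.zeros(50))  # planar hillslope
h = 0.01 * rng.random((50, 50))                            # initial water film
for _ in range(200):
    h, dt = ca_step(h, z)
print(f"volume conserved: {h.sum():.3f}, last dt = {dt:.2f} s")
```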

  1. The continuous assembly and transfer of nanoelements

    NASA Astrophysics Data System (ADS)

    Kumar, Arun

    Patterning nanoelements on flexible polymeric substrates at the micro/nano scale via a high-rate, low-cost, commercially viable route offers an opportunity for manufacturing devices with micro/nano scale features. Such features, made with various nanoelements, can enhance device functionality in sensing and switching due to improved conductivity and better mechanical properties. In this research, a fundamental understanding of the high-rate assembly and transfer of nanoelements has been developed. To achieve this objective, the work was divided into three parts. In the first step, the use of electrophoresis for the controlled assembly of CNTs on interdigitated templates is demonstrated. The assembly time scale reported is shorter than the previously reported assembly time (60 seconds). The mass deposited was also predicted using Hamaker's law. It is also shown that pre-patterned CNTs can be transferred from the rigid templates onto flexible polymeric substrates using a thermoforming process. The time scale of transfer is less than one minute (50 seconds) and was found to depend on polymer chemistry. It was found that CNTs preferentially transfer from the Au electrode to non-polar polymeric substrates (polyurethane and polyethylene terephthalate glycol) in the thermoforming process. In the second step, a novel process (pulsed electrophoresis) is shown for the first time to assist the assembly of conducting polyaniline on gold nanowire interdigitated templates. This technique offers dynamic control over heat build-up, which has been a main drawback of DC electrophoresis and AC dielectrophoresis as well as the main cause of nanowire template damage. The use of this technique allowed higher voltages to be applied, resulting in shorter assembly times (e.g., 17.4 seconds, with an assembly resolution of 100 nm). The pre-patterned templates with PANi deposition were subsequently used to transfer the nanoscale assembled PANi from the rigid templates to thermoplastic polyurethane using the thermoforming process. In the third step, a novel integration of high-rate pulsed electrophoretic assembly with thermally assisted transfer in a roll-to-roll process is shown. This technique allowed the whole assembly and transfer process to take place in only 30 seconds. Further, a processing window was developed to control the percent area coverage of PANi with the aid of the belt speed. The effect of different types of polymer on the quality of transfer is also shown, and it was concluded that the transfer is affected by the polymer chemistry.

  2. Remote sensing of desert dust aerosols over the Sahel : potential use for health impact studies

    NASA Astrophysics Data System (ADS)

    Deroubaix, A. D.; Martiny, N. M.; Chiapello, I. C.; Marticorena, B. M.

    2012-04-01

    Since the end of the 1970s, remote sensing has monitored desert dust aerosols through their absorption and scattering properties, allowing the construction of the long time series that are necessary for air quality or health impact studies. In the Sahel, a major health problem is the Meningococcal Meningitis (MM) epidemics that occur during the dry season: dust has been suspected to be crucial to understanding their onset and dynamics. The Aerosol absorption Index (AI) is a semi-quantitative index derived from TOMS and OMI observations in the UV, available at spatial resolutions of 1° (1979-2005) and 0.25° (2005-today), respectively. The comparison of the OMI-AI and AERONET Aerosol Optical Thickness (AOT) shows a good agreement at a daily time step (correlation ~0.7). The correlation of the OMI-AI with the particulate matter (PM) measurements of the Sahelian Dust Transect is lower (~0.4) at a daily time step but increases at a weekly time step (~0.6). The OMI-AI reproduces the dust seasonal cycle over the Sahel, and we conclude that the OMI-AI product at 0.25° spatial resolution is suitable for health impact studies, especially at the weekly epidemiological time step. Although the AI is sensitive to aerosol altitude, it provides daily spatial information on dust. A preliminary analysis of the link between weekly OMI AI and weekly WHO epidemiological data sets is presented for Mali and Niger, showing a good agreement between the AI and the onset of the MM epidemics with a constant lag (between 1 and 2 weeks). The next step of this study is to analyse a longer AI time series built from the TOMS and OMI data sets. Based on the weekly PM/AI ratios at two stations of the Sahelian Dust Transect, a spatialized proxy for PM derived from the AI has been developed. The AI as a proxy for PM, together with other climate variables such as temperature (T°), relative humidity (RH%) and wind (intensity and direction), could then be used to analyze the link between those variables and the MM epidemics in the most affected countries of Western Africa, which would be an important step towards a forecasting tool for epidemic risk in Western Africa.

  3. Magnetic resonance imaging-transrectal ultrasound image-fusion biopsies accurately characterize the index tumor: correlation with step-sectioned radical prostatectomy specimens in 135 patients.

    PubMed

    Baco, Eduard; Ukimura, Osamu; Rud, Erik; Vlatkovic, Ljiljana; Svindland, Aud; Aron, Manju; Palmer, Suzanne; Matsugasumi, Toru; Marien, Arnaud; Bernhard, Jean-Christophe; Rewcastle, John C; Eggesbø, Heidi B; Gill, Inderbir S

    2015-04-01

    Prostate biopsies targeted by elastic fusion of magnetic resonance (MR) and three-dimensional (3D) transrectal ultrasound (TRUS) images may allow accurate identification of the index tumor (IT), defined as the lesion with the highest Gleason score or the largest volume or extraprostatic extension. To determine the accuracy of MR-TRUS image-fusion biopsy in characterizing ITs, as confirmed by correlation with step-sectioned radical prostatectomy (RP) specimens. Retrospective analysis of 135 consecutive patients who sequentially underwent pre-biopsy MR, MR-TRUS image-fusion biopsy, and robotic RP at two centers between January 2010 and September 2013. Image-guided biopsies of MR-suspected IT lesions were performed with tracking via real-time 3D TRUS. The largest geographically distinct cancer focus (IT lesion) was independently registered on step-sectioned RP specimens. A validated schema comprising 27 regions of interest was used to identify the IT center location on MR images and in RP specimens, as well as the location of the midpoint of the biopsy trajectory, and variables were correlated. The concordance between IT location on biopsy and RP specimens was 95% (128/135). The coefficient for correlation between IT volume on MRI and histology was r=0.663 (p<0.001). The maximum cancer core length on biopsy was weakly correlated with RP tumor volume (r=0.466, p<0.001). The concordance of primary Gleason pattern between targeted biopsy and RP specimens was 90% (115/128; κ=0.76). The study limitations include retrospective evaluation of a selected patient population, which limits the generalizability of the results. Use of MR-TRUS image fusion to guide prostate biopsies reliably identified the location and primary Gleason pattern of the IT lesion in >90% of patients, but showed limited ability to predict cancer volume, as confirmed by step-sectioned RP specimens. Biopsies targeted using magnetic resonance images combined with real-time three-dimensional transrectal ultrasound allowed us to reliably identify the spatial location of the most important tumor in prostate cancer and characterize its aggressiveness. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  4. Integration Of Space Weather Into Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Reeves, G.

    2010-09-01

    Rapid assessment of space weather effects on satellites is a critical step in anomaly resolution and satellite threat assessment. That step, however, is often hindered by a number of factors, including timely collection and delivery of space weather data and the inherent complexity of space weather information. As part of a larger, integrated space situational awareness program, Los Alamos National Laboratory has developed prototype operational space weather tools that run in real time and present operators with customized, user-specific information. The Dynamic Radiation Environment Assimilation Model (DREAM) focuses on the penetrating radiation environment from natural or nuclear-produced radiation belts. The penetrating radiation environment is highly dynamic and highly orbit-dependent. Operators often must rely only on line plots of 2 MeV electron flux from the NOAA geosynchronous GOES satellites, which is then assumed to be representative of the environment at the satellite of interest. DREAM uses data assimilation to produce a global, real-time, energy-dependent specification. User tools are built around a distributed service oriented architecture (SOA), which allows operators to select any satellite from the space catalog and examine the environment for that specific satellite and time of interest. Depending on the application, operators may need to examine instantaneous dose rates and/or dose accumulated over various lengths of time. Further, different energy thresholds can be selected depending on the shielding of the satellite or instrument of interest. In order to rapidly assess the probability that space weather effects are responsible, the current conditions can be compared against the historical distribution of radiation levels for that orbit. In the simplest operation, a user would select a satellite and time of interest and immediately see whether the environmental conditions were typical, elevated, or extreme based on how often those conditions occur in that orbit. This allows users to rapidly rule environmental causes of anomalies in or out. The same user interface also allows users to drill down for more detailed quantitative information. DREAM can be run either from a distributed web-based user interface or as a stand-alone application for secure operations. We will discuss the underlying structure of the DREAM model and demonstrate the user interface that we have developed. We will also discuss future development plans for DREAM and how the same paradigm can be applied to integrating other space environment information into operational SSA systems.
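
    A minimal sketch of the typical/elevated/extreme comparison described above: rank the current flux against the historical distribution for the same orbit. The percentile thresholds and the synthetic history are illustrative assumptions, not DREAM's actual criteria.

    import numpy as np

    def classify_flux(current, history, elevated_pct=75, extreme_pct=95):
        """Percentile rank of `current` within `history` (same orbit, same energy)."""
        pct = 100.0 * np.mean(np.asarray(history) < current)
        if pct >= extreme_pct:
            return pct, "extreme"
        if pct >= elevated_pct:
            return pct, "elevated"
        return pct, "typical"

    # Synthetic stand-in for an orbit's historical dose-rate climatology.
    history = np.random.default_rng(1).lognormal(mean=10, sigma=1.5, size=5000)
    pct, label = classify_flux(current=np.percentile(history, 97), history=history)
    print(f"{pct:.0f}th percentile -> {label}")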

  5. Working through. A process of restitution.

    PubMed

    Gottesman, D M

    A number of authors, including Freud, have written about the process of working through but have left unsettled what is actually involved. I have attempted to outline the step-by-step process of working through, starting with recollection and repetition and ending with restitution and resolution. I have introduced the term restitution in order to give more importance to an already existing step in the working-through process; it should not be looked upon as an artificial device. Restitution allows the patient to find appropriate gratification in present reality, and this helps him to relinquish the past. Rather than allowing the patient to "wallow in the muck of guilt," as Eveoleen Rexford suggests society "wallows" in its inability to help its children, restitution gives appropriate direction for change. It is a natural step in the successful resolution of treatment.

  6. Creating the Infrastructure for Rapid Application Development and Processing Response to the HIRDLS Radiance Anomaly

    NASA Astrophysics Data System (ADS)

    Cavanaugh, C.; Gille, J.; Francis, G.; Nardi, B.; Hannigan, J.; McInerney, J.; Krinsky, C.; Barnett, J.; Dean, V.; Craig, C.

    2005-12-01

    The High Resolution Dynamics Limb Sounder (HIRDLS) instrument onboard the NASA Aura spacecraft experienced a rupture of the thermal blanketing material (Kapton) during the rapid depressurization of launch. The Kapton draped over the HIRDLS scan mirror, severely limiting the aperture through which HIRDLS views space and Earth's atmospheric limb. In order for HIRDLS to achieve its intended measurement goals, rapid characterization of the anomaly, and rapid recovery from it were required. The recovery centered around a new processing module inserted into the standard HIRDLS processing scheme, with a goal of minimizing the effect of the anomaly on the already existing processing modules. We describe the software infrastructure on which the new processing module was built, and how that infrastructure allows for rapid application development and processing response. The scope of the infrastructure spans three distinct anomaly recovery steps and the means for their intercommunication. Each of the three recovery steps (removing the Kapton-induced oscillation in the radiometric signal, removing the Kapton signal contamination upon the radiometric signal, and correcting for the partially-obscured atmospheric view) is completely modularized and insulated from the other steps, allowing focused and rapid application development towards a specific step, and neutralizing unintended inter-step influences, thus greatly shortening the design-development-test lifecycle. The intercommunication is also completely modularized and has a simple interface to which the three recovery steps adhere, allowing easy modification and replacement of specific recovery scenarios, thereby heightening the processing response.

  7. Method and apparatus for automated assembly

    DOEpatents

    Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.

    1999-01-01

    A process and apparatus generates a sequence of steps for assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step in the sequence is also constraint feasible, i.e., the step satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.
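
    A minimal sketch of the idea, assuming simple stand-in predicates: each candidate step must pass both a geometric-feasibility test and a user-defined constraint test before it is appended to the sequence. The greedy search and the toy parts below are illustrative, not the patented algorithm.

    # Sketch: disassembly sequencing where every step must be geometrically
    # feasible AND constraint feasible. Predicates are hypothetical stand-ins.

    def plan_sequence(parts, geometrically_feasible, constraint_feasible):
        """Greedy order: repeatedly remove any part whose removal is physically
        possible and allowed by the user's process constraints."""
        remaining, sequence = set(parts), []
        while remaining:
            step = next((p for p in sorted(remaining)
                         if geometrically_feasible(p, remaining)
                         and constraint_feasible(p, sequence)), None)
            if step is None:
                raise RuntimeError("no feasible step from state %s" % sorted(remaining))
            remaining.remove(step)
            sequence.append(step)
        return sequence

    # Toy example: 'cover' blocks 'gear'; process constraint: remove 'bolt' first.
    geo = lambda p, rem: not (p == "gear" and "cover" in rem)
    con = lambda p, seq: p == "bolt" or "bolt" in seq
    print(plan_sequence(["bolt", "cover", "gear"], geo, con))   # ['bolt', 'cover', 'gear']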

  8. MNE Scan: Software for real-time processing of electrophysiological data.

    PubMed

    Esch, Lorenz; Sun, Limin; Klüber, Viktor; Lew, Seok; Baumgarten, Daniel; Grant, P Ellen; Okada, Yoshio; Haueisen, Jens; Hämäläinen, Matti S; Dinh, Christoph

    2018-06-01

    Magnetoencephalography (MEG) and Electroencephalography (EEG) are noninvasive techniques to study the electrophysiological activity of the human brain. Thus, they are well suited for real-time monitoring and analysis of neuronal activity. Real-time MEG/EEG data processing allows adjustment of the stimuli to the subject's responses, optimizing the acquired information, especially by providing dynamically changing displays to enable neurofeedback. We introduce MNE Scan, an acquisition and real-time analysis software package based on the multipurpose software library MNE-CPP. MNE Scan allows the development and application of acquisition and novel real-time processing methods in both research and clinical studies. The MNE Scan development follows a strict software engineering process to enable the approvals required for clinical software. We tested the performance of MNE Scan in several device-independent use cases, including a clinical epilepsy study, real-time source estimation, and a Brain Computer Interface (BCI) application. Compared to existing tools, we propose modular software that takes into account the clinical software requirements expected by certification authorities, while remaining extendable and freely accessible. We conclude that MNE Scan is the first step in creating a device-independent open-source software platform to facilitate the transition from basic neuroscience research to both applied sciences and clinical applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald Martin

    The research studied one-step and two-step Isotope Separation On Line (ISOL) targets for future radioactive beam facilities with high driver-beam power through advanced computer simulations. Uranium carbide in the form of foils was used as the target material because of the increasing demand for actinide targets in rare-isotope beam facilities and because such material was under development in ISAC at TRIUMF when this project started. Simulations of effusion were performed for one-step and two-step targets, and the effects of target dimensions and foil matrix were studied. Diffusion simulations were limited by the availability of diffusion parameters for UCx material at reduced density; however, the viability of the combined diffusion-effusion simulation methodology was demonstrated, and it could be used to extract physical parameters such as diffusion coefficients and effusion delay times from experimental isotope release curves. Dissipation of the heat from the isotope-producing targets is the limiting factor for high-power beam operation for both the direct and two-step targets. Detailed target models were used to simulate proton beam interactions with the targets to obtain the fission rates and power deposition distributions, which were then applied in heat transfer calculations to study the performance of the targets. Results indicate that a direct target with specifications matching the ISAC TRIUMF target could operate in a 500-MeV proton beam at beam powers up to ~40 kW, producing ~8×10^13 fissions/s with a maximum temperature in the UCx below 2200 °C. Targets with a larger radius allow higher beam powers and fission rates. For target radii in the range of 9 mm to 30 mm, the achievable fission rate increases almost linearly with target radius; however, the effusion delay time also increases linearly with target radius.

  10. Ethnic and gender differences in physical activity levels among 9-10-year-old children of white European, South Asian and African-Caribbean origin: the Child Heart Health Study in England (CHASE Study).

    PubMed

    Owen, Christopher G; Nightingale, Claire M; Rudnicka, Alicja R; Cook, Derek G; Ekelund, Ulf; Whincup, Peter H

    2009-08-01

    Ethnic differences in physical activity in children in the UK have not been accurately assessed. We made objective measurements of physical activity in 9-10-year-old British children of South Asian, black African-Caribbean and white European origin. Cross-sectional study of urban primary school children (2006-07). Actigraph-GT1M activity monitors were worn by 2071 children during waking hours on at least 1 full day. Ethnic differences in mean daily activity [counts, counts per minute of registered time (CPM) and steps] were adjusted for age, gender, day of week and month. Multilevel modelling allowed for repeated days within individual and clustering within school. In white Europeans, mean daily counts, CPM and mean daily steps were 394 785, 498 and 10 220, respectively. South Asian and black African-Caribbean children recorded more registered time per day than white Europeans (34 and 36 min, respectively). Compared with white Europeans, South Asians recorded 18 789 fewer counts [95% confidence interval (CI) 6390-31 187], 41 fewer CPM (95% CI 26-57) and 905 fewer steps (95% CI 624-1187). Black African-Caribbeans recorded 25 359 more counts (95% CI 14 273-36 445), and similar CPM, but fewer steps than white Europeans. Girls recorded less activity than boys in all ethnic groups, with 74 782 fewer counts (95% CI 66 665-82 899), 84 fewer CPM (95% CI 74-95) and 1484 fewer steps (95% CI 1301-1668). British South Asian children have lower objectively measured physical activity levels than white Europeans and black African-Caribbeans.

  11. Part-time careers in academic internal medicine: a report from the association of specialty professors part-time careers task force on behalf of the alliance for academic internal medicine.

    PubMed

    Linzer, Mark; Warde, Carole; Alexander, R Wayne; Demarco, Deborah M; Haupt, Allison; Hicks, Leroi; Kutner, Jean; Mangione, Carol M; Mechaber, Hilit; Rentz, Meridith; Riley, Joanne; Schuster, Barbara; Solomon, Glen D; Volberding, Paul; Ibrahim, Tod

    2009-10-01

    To establish guidelines for more effectively incorporating part-time faculty into departments of internal medicine, a task force was convened in early 2007 by the Association of Specialty Professors. The task force used informal surveys, current literature, and consensus building among members of the Alliance for Academic Internal Medicine to produce a consensus statement and a series of recommendations. The task force agreed that part-time faculty could enrich a department of medicine, enhance workforce flexibility, and provide high-quality research, patient care, and education in a cost-effective manner. The task force provided a series of detailed steps for operationalizing part-time practice; to do so, key issues were addressed, such as fixed costs, malpractice insurance, space, cross-coverage, mentoring, career development, productivity targets, and flexible scheduling. Recommendations included (1) increasing respect for work-family balance, (2) allowing flexible time as well as part-time employment, (3) directly addressing negative perceptions about part-time faculty, (4) developing policies to allow flexibility in academic advancement, (5) considering part-time faculty as candidates for leadership positions, (6) encouraging granting agencies, including the National Institutes of Health and Veterans Administration, to consider part-time faculty as eligible for research career development awards, and (7) supporting future research in "best practices" for incorporating part-time faculty into academic departments of medicine.

  12. Adaptive macro finite elements for the numerical solution of monodomain equations in cardiac electrophysiology.

    PubMed

    Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F

    2010-07-01

    Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
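
    A minimal sketch of the splitting pattern described above on a 1-D toy cable: the stiff reaction term is sub-stepped adaptively and uncoupled from the diffusion term, which costs one linear solve per time step. The grid, reaction model, and tolerance are illustrative assumptions, not the paper's monodomain setup (and there is no macroelement condensation here).

    import numpy as np

    n, dx, dt, D = 200, 0.1, 0.1, 1.0
    u = np.zeros(n); u[:10] = 1.0                    # potential, stimulated at one end

    # implicit diffusion operator (I - dt*D*Laplacian) with no-flux boundaries
    r = dt * D / dx**2
    A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, 1] = A[-1, -2] = -2 * r

    def react(u):                                    # stiff cubic (Nagumo-type) term
        return 50.0 * u * (1 - u) * (u - 0.1)

    for step in range(30):
        t = 0.0                                      # reaction: adaptive sub-stepping
        while t < dt:
            dt_r = dt - t
            while np.max(np.abs(react(u))) * dt_r > 0.05:
                dt_r /= 2                            # halve until the update is small
            u += dt_r * react(u)
            t += dt_r
        u = np.linalg.solve(A, u)                    # diffusion: one linear solve per step
    print("wave front near node:", int(np.argmax(u < 0.5)))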

  13. Intervention for Spanish Overweight Teenagers in Physical Education Lessons

    PubMed Central

    Martínez-López, Emilio J.; Grao-Cruces, Alberto; Moral-García, José E.; Pantoja-Vallejo, Antonio

    2012-01-01

    Physical education is a favourable educational framework for the development of programmes aimed at increasing physical activity in children and thus reducing sedentary behaviour. The progressive increase in overweight students demands global control and follow-up measurement of these behaviours both in and out of school. The pedometer can be a useful tool in this field: it is easy to use and allows Physical Education (PE) departments to quantify their students' number of steps/day. The aim of this study was to determine the effect of a pedometer intervention on body fat and BMI levels in overweight teenagers. In addition, the effects of the programme are analysed according to two other variables, pedometer ownership and gender, distinguishing between out-of-school and school hours, and between weekdays and weekends. The sample comprises 112 overweight students (49 boys and 63 girls) from 5 secondary schools. Participants were asked to follow a physical activity programme consisting of a minimum of 12000 and 10000 steps/day for boys and girls, respectively. It also allowed them to earn up to 2 extra points in their PE marks. Results were measured after 6 weeks of programme application as well as after 6 weeks of retention. Results revealed significantly reduced BMI in the teenagers with their own pedometer (p < 0.05). The difference observed in the number of steps/day between boys (12050) and girls (9566) was significant in all measured time periods (p < 0.05). Both overweight boys and girls were observed to take 1000 fewer steps/day at weekends than on weekdays. It is therefore concluded that the proposal of 12000 and 10000 steps for overweight boys and girls, respectively, accompanied by a reinforcement programme in their final PE marks, seems sufficient to obtain significant BMI reductions. PE is thus shown to be a favourable framework for pedometer-driven weight-loss programmes in overweight youth. Key points: a programme of 12000 and 10000 steps for overweight boys and girls, respectively, with reinforcement in PE marks, improves body mass index; BMI was most reduced in Spanish overweight adolescents who used their own pedometer; the steps/day of overweight boys (12050) and girls (9566) differed significantly (p < 0.05); overweight boys and girls took 1000 fewer steps/day at weekends than on weekdays; and a step programme can be applied in PE to obese youth in secondary schools. PMID:24149205

  14. Methods, systems and devices for detecting and locating ferromagnetic objects

    DOEpatents

    Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID

    2010-01-26

    Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
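
    A minimal sketch of the claimed processing steps, with synthetic data: represent the sensed field gradient over a time window, convert it to a frequency-domain representation, then locate the peak gradient and express its time of occurrence as a ratio of the window. Sample rate and signal shape are illustrative assumptions.

    import numpy as np

    fs = 200.0                                       # sample rate (Hz), illustrative
    t = np.arange(0, 2.0, 1 / fs)                    # 2 s acquisition window
    field = 0.2 * np.sin(2 * np.pi * 1.5 * t)        # background sway (arbitrary units)
    field[250:270] += 1.0                            # ferromagnetic object passing by

    gradient = np.gradient(field, 1 / fs)            # field gradient over the period
    spectrum = np.abs(np.fft.rfft(gradient))         # magnetic data vs frequency
    freqs = np.fft.rfftfreq(len(gradient), 1 / fs)

    peak = np.argmax(np.abs(gradient))               # peak gradient value ...
    ratio = peak / len(gradient)                     # ... and its time-of-occurrence ratio
    print(f"peak |dB/dt| at {ratio:.2f} of the window; "
          f"spectrum argmax (excluding DC): {freqs[np.argmax(spectrum[1:]) + 1]:.1f} Hz")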

  15. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows a larger time step size in numerical integrations.
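
    A minimal sketch of the underlying splitting idea on a nonrelativistic particle (the paper treats the relativistic case and adds high-order processing): split the Lorentz-force system into sub-flows that are each solved exactly and are each volume-preserving, then compose them symmetrically for second order. Field values and step size are illustrative assumptions.

    import numpy as np

    q_m = 1.0                                        # charge-to-mass ratio, illustrative
    E = np.array([0.0, 0.0, 0.1])
    B = np.array([0.0, 0.0, 1.0])

    def push_x(x, v, h):  return x + h * v, v        # free streaming: exact, volume-preserving

    def push_vE(x, v, h): return x, v + h * q_m * E  # electric kick: exact shear

    def push_vB(x, v, h):                            # magnetic rotation: exact, norm-preserving
        b = np.linalg.norm(B); n = B / b; th = -q_m * b * h
        v_rot = (v * np.cos(th) + np.cross(n, v) * np.sin(th)
                 + n * np.dot(n, v) * (1 - np.cos(th)))
        return x, v_rot

    def strang_step(x, v, h):                        # symmetric composition of sub-flows
        for flow, hh in [(push_x, h / 2), (push_vE, h / 2), (push_vB, h),
                         (push_vE, h / 2), (push_x, h / 2)]:
            x, v = flow(x, v, hh)
        return x, v

    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        x, v = strang_step(x, v, 0.1)
    print("in-plane speed conserved:", np.hypot(v[0], v[1]))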

  16. Graphene-polymer hybrid nanostructure-based bioenergy storage device for real-time control of biological motor activity.

    PubMed

    Byun, Kyung-Eun; Choi, Dong Shin; Kim, Eunji; Seo, David H; Yang, Heejun; Seo, Sunae; Hong, Seunghun

    2011-11-22

    We report a graphene-polymer hybrid nanostructure-based bioenergy storage device to turn biomotor activity on and off in real time. In this strategy, graphene was functionalized with amine groups and utilized as a transparent electrode supporting the motility of biomotors. Conducting polymer patterns doped with adenosine triphosphate (ATP) were fabricated on the graphene and utilized for the fast release of ATP by electrical stimuli through the graphene. The controlled release of the biomotor fuel, ATP, allowed us to control actin filament transport propelled by the biomotor in real time. This strategy should enable integrated nanodevices for the real-time control of biological motors, which can be a significant stepping stone toward hybrid nanomechanical systems based on motor proteins. © 2011 American Chemical Society

  17. High-performance liquid chromatography with fast-scanning fluorescence detection and multivariate curve resolution for the efficient determination of galantamine and its main metabolites in serum.

    PubMed

    Culzoni, María J; Aucelio, Ricardo Q; Escandar, Graciela M

    2012-08-31

    Based on green analytical chemistry principles, an efficient approach was applied for the simultaneous determination of galantamine, a widely used cholinesterase inhibitor for the treatment of Alzheimer's disease, and its major metabolites in serum samples. After a simple serum deproteinization step, second-order data were rapidly obtained (less than 6 min) with a chromatographic system operating in the isocratic regime using ammonium acetate/acetonitrile (94:6) as mobile phase. Detection was made with a fast-scanning spectrofluorimeter, which allowed the efficient collection of data to obtain matrices of fluorescence intensity as a function of retention time and emission wavelength. Successful resolution was achieved in the presence of matrix interferences in serum samples using multivariate curve resolution-alternating least-squares (MCR-ALS). The developed approach allows the quantification of the analytes at levels found in treated patients, without the need of applying either preconcentration or extraction steps. Limits of detection in the range between 8 and 11 ng mL(-1), relative prediction errors from 7 to 12% and coefficients of variation from 4 to 7% were achieved. Copyright © 2012 Elsevier B.V. All rights reserved.
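
    A minimal sketch of the MCR-ALS decomposition on a synthetic retention-time x emission-wavelength matrix: alternate least-squares updates of concentration profiles C and spectra S under non-negativity so that D ~ C·S^T. The clipped-ALS update below is a simple stand-in for the constrained solvers used in practice; all data are synthetic.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 6, 120)[:, None]              # retention time axis (min)
    w = np.linspace(300, 450, 80)[None, :]           # emission wavelength axis (nm)
    C_true = np.hstack([np.exp(-0.5 * ((t - m) / 0.3) ** 2) for m in (2.0, 3.1)])
    S_true = np.vstack([np.exp(-0.5 * ((w - m) / 20) ** 2) for m in (340, 390)])
    D = C_true @ S_true + rng.normal(0, 0.01, (120, 80))   # second-order data matrix

    C = rng.random((120, 2))                         # random init, 2 components assumed
    for _ in range(200):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)        # update spectra
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)  # update profiles
    print("relative residual:", np.linalg.norm(D - C @ S) / np.linalg.norm(D))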

  18. One-step fabrication of submicrostructures by low one-photon absorption direct laser writing technique with local thermal effect

    NASA Astrophysics Data System (ADS)

    Nguyen, Dam Thuy Trang; Tong, Quang Cong; Ledoux-Rak, Isabelle; Lai, Ngoc Diep

    2016-01-01

    In this work, the local thermal effect induced by a continuous-wave laser has been investigated and exploited to optimize the low one-photon absorption (LOPA) direct laser writing (DLW) technique for the fabrication of polymer-based microstructures. It was demonstrated that the temperature of the excited SU8 photoresist at the focusing area increases to above 100 °C due to the high excitation intensity and stabilizes at that temperature thanks to the use of a continuous-wave laser at 532 nm wavelength. This optically induced thermal effect immediately completes the crosslinking process at the photopolymerized region, allowing the desired structures to be obtained without the conventional post-exposure bake (PEB) step usually performed after exposure. Theoretical calculation of the temperature distribution induced by local optical excitation using the finite element method confirmed the experimental results. The LOPA-based DLW technique combined with the optically induced thermal effect (local PEB) shows great advantages over the traditional PEB, such as simplicity, short fabrication time, and high resolution. In particular, it overcomes the accumulation effect inherent in one-photon optical lithography, resulting in small and uniform structures with very short lattice constants.

  19. Evaluating physical function and activity in the elderly patient using wearable motion sensors.

    PubMed

    Grimm, Bernd; Bolink, Stijn

    2016-05-01

    Wearable sensors, in particular inertial measurement units (IMUs), allow the objective, valid, discriminative and responsive assessment of physical function during functional tests such as gait, stair climbing or sit-to-stand.

    Applied to various body segments, precise capture of time-to-task achievement, spatiotemporal gait and kinematic parameters of demanding tests or specific to an affected limb are the most used measures.

    In activity monitoring (AM), accelerometry has mainly been used to derive energy expenditure or general health related parameters such as total step counts.

    In orthopaedics and the elderly, counting specific events such as stairs or high intensity activities were clinimetrically most powerful; as were qualitative parameters at the 'micro-level' of activity such as step frequency or sit-stand duration.

    Low cost and ease of use allow routine clinical application, but with many options for sensors, algorithms, test and parameter definitions, choice and comparability remain difficult, calling for consensus or standardisation. Cite this article: Grimm B, Bolink S. Evaluating physical function and activity in the elderly patient using wearable motion sensors. EFORT Open Rev 2016;1:112-120. DOI: 10.1302/2058-5241.1.160022.

  1. Frictional families in 2D experimental disks under periodic gravitational compaction

    NASA Astrophysics Data System (ADS)

    Hubard, Aline; Shattuck, Mark; O'Hern, Corey

    2014-03-01

    We studied a bidisperse system with diameter ratio 1.2 consisting of four 1.26 cm and three 1.57 cm stainless steel cylinders confined between two glass plates separated by 1.05 times the cylinder thickness, with the cylinder axes perpendicular to gravity. The particles, initially resting on a movable piston, are thrown upward and allowed to come to rest. In general this frictional state is stabilized by both normal and tangential (frictional) forces. We then apply short (10 ms) small-amplitude bursts of 440 Hz vibration, temporarily breaking tangential forces, and then allow the system to re-stabilize. After N of these compaction steps the number of contacts increases to that of an isostatic frictionless state, and additional steps do not change the system. Many frictional states reach the same final frictionless state. We find that this evolution is determined by the projection of the gravity vector on the null space of the dynamical matrix of a normal spring network formed from the contacts of the frictional state. Thus each frictional contact network follows a one-dimensional path (or family) through phase space under gravitational compaction. PREM-DMR0934206.
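
    A minimal linear-algebra sketch of the stated criterion: compute the null space (floppy modes) of the contact network's dynamical matrix and project gravity onto it; a nonzero projection means gravity can still do work through the floppy modes. The two-particle spring network below is illustrative only.

    import numpy as np
    from scipy.linalg import null_space

    K = np.array([[ 1., 0., -1., 0.],                # dynamical matrix of a horizontal
                  [ 0., 0.,  0., 0.],                # spring between two unit masses
                  [-1., 0.,  1., 0.],                # (dofs: x1, y1, x2, y2)
                  [ 0., 0.,  0., 0.]])
    g = np.array([0., -1., 0., -1.])                 # gravity on both particles

    Z = null_space(K)                                # floppy modes of the spring network
    g_floppy = Z @ (Z.T @ g)                         # gravity component driving them
    print("floppy-mode drive:", np.round(g_floppy, 3))   # nonzero -> compaction continues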

  2. Using a two-step matrix solution to reduce the run time in KULL's magnetic diffusion package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunner, T A; Kolev, T V

    2010-12-17

    Recently a Resistive Magnetohydrodynamics (MHD) package has been added to the KULL code. In order to be compatible with the underlying hydrodynamics algorithm, a new sub-zonal magnetics discretization was developed that supports arbitrary polygonal and polyhedral zones. This flexibility comes at the cost of many more unknowns per zone - approximately ten times more for a hexahedral mesh. We can eliminate some (or all, depending on the dimensionality) of the extra unknowns from the global matrix during assembly by using a Schur complement approach. This trades expensive global work for cache-friendly local work, while still allowing solution for the full system. Significant improvements in the solution time are observed for several test problems.
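
    A minimal dense-matrix sketch of the Schur complement elimination described above: condense a zone's interior unknowns locally, solve the smaller boundary-only system "globally", and recover the interior by back-substitution. Block sizes and matrices are random stand-ins.

    import numpy as np

    rng = np.random.default_rng(3)
    ni, nb = 8, 4                                    # interior / boundary unknowns per zone
    A = rng.random((ni + nb, ni + nb))
    A += A.T + (ni + nb) * np.eye(ni + nb)           # symmetric, well-conditioned stand-in
    Aii, Aib = A[:ni, :ni], A[:ni, ni:]
    Abi, Abb = A[ni:, :ni], A[ni:, ni:]
    fi, fb = rng.random(ni), rng.random(nb)

    S = Abb - Abi @ np.linalg.solve(Aii, Aib)        # Schur complement: boundary-only system
    gb = fb - Abi @ np.linalg.solve(Aii, fi)
    xb = np.linalg.solve(S, gb)                      # "global" solve on the smaller system
    xi = np.linalg.solve(Aii, fi - Aib @ xb)         # local, cache-friendly back-substitution

    x_full = np.linalg.solve(A, np.concatenate([fi, fb]))
    print("matches full solve:", np.allclose(np.concatenate([xi, xb]), x_full))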

  3. Subsystem real-time time dependent density functional theory.

    PubMed

    Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele

    2015-04-21

    We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows a larger Kohn-Sham system to be partitioned into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: the explicit excitation energy transfer in real time between subsystems is demonstrated for the case of the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores is examined. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.
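
    A minimal sketch of the propagation pattern on two toy two-level "subsystems": each is advanced one time step with a Crank-Nicolson propagator while the coupling (embedding) potential felt by each is rebuilt from the other's instantaneous density every step. The Hamiltonians and coupling form are arbitrary assumptions, not DFT quantities.

    import numpy as np

    def cn_step(H, psi, dt):                         # Crank-Nicolson: unitary, O(dt^2)
        I = np.eye(len(psi))
        return np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

    H1 = np.array([[0.0, 0.1], [0.1, 1.0]])
    H2 = np.array([[0.2, 0.1], [0.1, 0.9]])
    psi1 = np.array([1.0 + 0j, 0.0]); psi2 = np.array([0.0 + 0j, 1.0])

    for step in range(500):
        # embedding potential of each subsystem depends on the other's density
        v12 = 0.05 * abs(psi2[1]) ** 2 * np.eye(2)
        v21 = 0.05 * abs(psi1[1]) ** 2 * np.eye(2)
        psi1, psi2 = cn_step(H1 + v12, psi1, 0.02), cn_step(H2 + v21, psi2, 0.02)
    print("norms conserved:", abs(np.vdot(psi1, psi1)), abs(np.vdot(psi2, psi2)))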

  4. Evaluation of synthase and hemisynthase activities of glucosamine-6-phosphate synthase by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Gaucher-Wieczorek, Florence; Guérineau, Vincent; Touboul, David; Thétiot-Laurent, Sophie; Pelissier, Franck; Badet-Denisot, Marie-Ange; Badet, Bernard; Durand, Philippe

    2014-08-01

    Glucosamine-6-phosphate synthase (GlmS, EC 2.6.1.16) catalyzes the first and rate-limiting step in the hexosamine biosynthetic pathway, leading to the synthesis of uridine-5'-diphospho-N-acetyl-D-glucosamine, the major building block for the edification of peptidoglycan in bacteria, chitin in fungi, and glycoproteins in mammals. This bisubstrate enzyme converts D-fructose-6-phosphate (Fru-6P) and L-glutamine (Gln) into D-glucosamine-6-phosphate (GlcN-6P) and L-glutamate (Glu), respectively. We previously demonstrated that matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) allows determination of the kinetic parameters of the synthase activity. We propose here a refined experimental protocol to quantify Glu and GlcN-6P, allowing determination of both hemisynthase and synthase parameters from a single kinetic assay while avoiding the interferences encountered in other assays. This is the first time that MALDI-MS has been used to monitor the activity of a bisubstrate enzyme. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Tokamak magneto-hydrodynamics and reference magnetic coordinates for simulations of plasma disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakharov, Leonid E.; Li, Xujing

    This paper formulates the Tokamak Magneto-Hydrodynamics (TMHD), initially outlined by X. Li and L. E. Zakharov [Plasma Science and Technology 17(2), 97–104 (2015)] for proper simulations of macroscopic plasma dynamics. The simplest set of magneto-hydrodynamics equations, sufficient for disruption modeling and extendable to more refined physics, is explained in detail. First, the TMHD introduces to 3-D simulations the Reference Magnetic Coordinates (RMC), which are aligned with the magnetic field in the best possible way. The numerical implementation of RMC is adaptive grids. Being consistent with the high anisotropy of the tokamak plasma, RMC allow simulations at realistic, very high plasma electric conductivity. Second, the TMHD splits the equation of motion into an equilibrium equation and a plasma advancing equation. This resolves the four-decade-old problem of Courant limitations of the time step in existing, plasma-inertia-driven numerical codes. The splitting allows disruption simulations on a relatively slow time scale in comparison with the fast time of ideal MHD instabilities. A new, efficient numerical scheme is proposed for TMHD.

  6. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.
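
    A minimal sketch of the second step, assuming hypothetical anchor angles: order the selected (gantry, couch) anchor points into a short isocentric trajectory, with a motion-time cost that lets the axes move simultaneously and lets the gantry wrap around. The nearest-neighbour tour is a simple stand-in for the paper's combinatorial optimization model.

    anchors = [(0, 0), (40, 20), (180, -30), (220, 10), (300, -10)]  # (gantry, couch) deg

    def motion_time(a, b, v_gantry=6.0, v_couch=3.0):
        """Seconds to move between two beams; both axes move simultaneously."""
        dg = min(abs(a[0] - b[0]), 360 - abs(a[0] - b[0]))           # gantry wraps around
        return max(dg / v_gantry, abs(a[1] - b[1]) / v_couch)

    def order_anchors(points):
        tour, rest = [points[0]], list(points[1:])
        while rest:                                  # greedy nearest-neighbour tour
            nxt = min(rest, key=lambda p: motion_time(tour[-1], p))
            rest.remove(nxt)
            tour.append(nxt)
        return tour

    tour = order_anchors(anchors)
    total = sum(motion_time(a, b) for a, b in zip(tour, tour[1:]))
    print(tour, f"traversal ~{total:.0f} s")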

  7. Quick-EXAFS setup at the SuperXAS beamline for in situ X-ray absorption spectroscopy with 10 ms time resolution

    PubMed Central

    Müller, Oliver; Nachtegaal, Maarten; Just, Justus; Lützenkirchen-Hecht, Dirk; Frahm, Ronald

    2016-01-01

    The quick-EXAFS (QEXAFS) method adds time resolution to X-ray absorption spectroscopy (XAS) and allows dynamic structural changes to be followed. A completely new QEXAFS setup consisting of monochromator, detectors and data acquisition system is presented, as installed at the SuperXAS bending-magnet beamline at the Swiss Light Source (Paul Scherrer Institute, Switzerland). The monochromator uses Si(111) and Si(311) channel-cut crystals mounted on one crystal stage, and remote exchange allows an energy range from 4.0 keV to 32 keV to be covered. The spectral scan range can be electronically adjusted up to several keV to cover multiple absorption edges in one scan. The determination of the Bragg angle close to the position of the crystals allows high-accuracy measurements. Absorption spectra can be acquired with fast gridded ionization chambers at oscillation frequencies of up to 50 Hz resulting in a time resolution of 10 ms, using both scan directions of each oscillation period. The carefully developed low-noise detector system yields high-quality absorption data. The unique setup allows both state-of-the-art QEXAFS and stable step-scan operation without the need to exchange whole monochromators. The long-term stability of the Bragg angle was investigated and absorption spectra of reference materials as well as of a fast chemical reaction demonstrate the overall capabilities of the new setup. PMID:26698072

  8. Learning and study strategies correlate with medical students' performance in anatomical sciences.

    PubMed

    Khalil, Mohammed K; Williams, Shanna E; Gregory Hawkins, H

    2018-05-06

    Much of the content delivered during medical students' preclinical years is assessed nationally by examinations such as the United States Medical Licensing Examination® (USMLE®) Step 1 and the Comprehensive Osteopathic Medical Licensing Examination® (COMLEX-USA®) Step 1. Improvement in students' study/learning strategy skills is associated with academic success in internal and external (USMLE Step 1) examinations. This research explores the strength of association between Learning and Study Strategies Inventory (LASSI) scores and student performance in the anatomical sciences and USMLE Step 1 examinations. The LASSI inventory assesses learning and study strategies based on ten subscale measures. These subscales include three components of strategic learning: skill (Information processing, Selecting main ideas, and Test strategies), will (Anxiety, Attitude, and Motivation) and self-regulation (Concentration, Time management, Self-testing, and Study aids). During second year (M2) orientation, 180 students (Classes of 2016, 2017, and 2018) were administered the LASSI survey instrument. Pearson product-moment correlation analyses identified significant associations between five of the ten LASSI subscales (Anxiety, Information processing, Motivation, Selecting main ideas, and Test strategies) and students' performance in the anatomical sciences and USMLE Step 1 examinations. Identification of students lacking these skills within the anatomical sciences curriculum allows targeted interventions, which maximize academic achievement not only in an institution's internal examinations but also in the external measure of success represented by USMLE Step 1 scores. Anat Sci Educ 11: 236-242. © 2017 American Association of Anatomists.

  9. Accurate spectral solutions for the parabolic and elliptic partial differential equations by the ultraspherical tau method

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.

    2005-09-01

    We present a double ultraspherical spectral method that allows the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by the step-by-step method. Numerical applications of how to use these methods are described. The numerical results obtained compare favorably with the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than approximations based on other ultraspherical polynomials.
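
    A sketch in the same spirit (spectral differentiation in space, step-by-step integration in time), but using Chebyshev collocation rather than the ultraspherical tau method of the paper: the heat equation u_t = u_xx on [-1, 1] with homogeneous Dirichlet conditions, advanced by implicit Euler.

    import numpy as np

    def cheb(N):
        """Trefethen-style Chebyshev differentiation matrix and nodes."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
        return D - np.diag(D.sum(axis=1)), x

    N, dt = 32, 1e-3
    D, x = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]                         # second derivative, interior nodes only
    u = np.sin(np.pi * x)[1:-1]                      # initial condition satisfying the BCs

    A = np.eye(N - 1) - dt * D2                      # implicit Euler system matrix
    for _ in range(200):                             # step-by-step advance to t = 0.2
        u = np.linalg.solve(A, u)
    print("max u:", u.max(), "~ exp(-pi^2 * 0.2) =", np.exp(-np.pi**2 * 0.2))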

  10. Towards potential nanoparticle contrast agents: Synthesis of new functionalized PEG bisphosphonates

    PubMed Central

    Kachbi-Khelfallah, Souad; Monteil, Maelle; Cortes-Clerget, Margery; Migianu-Griffoni, Evelyne; Pirat, Jean-Luc; Gager, Olivier; Deschamp, Julia

    2016-01-01

    The use of nanotechnologies for biomedical applications has developed considerably in recent years. To allow effective targeting for biomedical imaging applications, the adsorption of plasma proteins on the surface of nanoparticles must be prevented, in order to reduce hepatic capture and increase the plasma lifetime. In biological media, metal oxide nanoparticles are not stable and must be coated with biocompatible organic ligands. The use of phosphonate ligands to modify the nanoparticle surface has drawn much attention in recent years for the design of highly functional hybrid materials. Here, we report a methodology to synthesize bisphosphonates bearing functionalized PEG side chains of different lengths. The key step is a procedure developed in our laboratory to introduce the bisphosphonate from an acyl chloride and tris(trimethylsilyl)phosphite in one step. PMID:27559386

  11. Method to increase the toughness of aluminum-lithium alloys at cryogenic temperatures

    NASA Technical Reports Server (NTRS)

    Sankaran, Krishnan K. (Inventor); Sova, Brian J. (Inventor); Babel, Henry W. (Inventor)

    2006-01-01

    A method to increase the toughness of the aluminum-lithium alloy C458 and similar alloys at cryogenic temperatures above their room temperature toughness is provided. Increasing the cryogenic toughness of the aluminum-lithium alloy C458 allows the use of alloy C458 for cryogenic tanks, for example for launch vehicles in the aerospace industry. A two-step aging treatment for alloy C458 is provided. A specific set of times and temperatures to age the aluminum-lithium alloy C458 to T8 temper is disclosed that results in a higher toughness at cryogenic temperatures compared to room temperature. The disclosed two-step aging treatment for alloy 458 can be easily practiced in the manufacturing process, does not involve impractical heating rates or durations, and does not degrade other material properties.

  12. Feasibility study of consolidation by direct compaction and friction stir processing of commercially pure titanium powder

    NASA Astrophysics Data System (ADS)

    Nichols, Leannah M.

    Manufacturing a six-inch-diameter ingot of commercially pure titanium can take up to six months before it can be shipped to be melted and shaped into other useful components. The applications of this corrosion-resistant, lightweight, strong metal are endless, yet so is the manufacturing processing time. At a cost of around $80 per pound for certain grades of titanium powder, the everyday consumer cannot afford to use titanium in the many ways it is beneficial, simply because the number of processing steps required for manufacture consumes too much time, energy, and labor. In this research, the steps from raw powder to final part are proposed to be reduced from 4-8 steps to only 2, utilizing a new technology that may even improve the titanium properties while reducing the number of manufacturing steps. The two-step procedure involves selecting a cylindrical or rectangular die and punch to compress a small amount of commercially pure titanium into a compact strong enough for transportation to the friction stir welder, where it is consolidated. Friction stir welding, invented in 1991 in the United Kingdom, uses a tool, similar to a drill bit, that approaches a sample and gradually plunges into the material at a rotation rate between 100 and 2,100 RPM. In the second step, the friction stir welder is used to process the titanium powder, held in a tight holder, to consolidate it into a harder titanium form. The resulting samples are cut to expose the cross section and then ground, polished, and cleaned to be observed and tested using scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and a Vickers microhardness tester. The results showed that the thicker the sample, the harder the resulting consolidated material, peaking at 2 to 3 times the hardness of the original commercially pure titanium in solid form, with a peak value of 435.9 and an overall average of 251.13. The combined SEM and EDS results showed that mixing of the sample holder material, titanium, and tool material was minor, demonstrating the feasibility of the approach. This study should be continued to lessen the labor, energy, and cost of titanium production and thereby allow titanium to be used more efficiently in many applications across many industries.

  13. Use of distributed water level and soil moisture data in the evaluation of the PUMMA periurban distributed hydrological model: application to the Mercier catchment, France

    NASA Astrophysics Data System (ADS)

    Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja

    2016-04-01

    Distributed hydrological models are used to best advantage when their outputs are compared not only to the outlet discharge, but also to internal observed variables, so that they can be used as powerful hypothesis-testing tools. In this paper, the interest of distributed networks of sensors for evaluating a distributed model and the underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA model (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture data (9 locations) are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years from January 1st 2007 to December 31st 2010 with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints for the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparison with other studies found in the literature. In the next steps (steps 3 to 6), focus was placed on more specific hydrological processes. In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied the correlation between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see whether the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model represents the soil water storage dynamics and stream intermittency satisfactorily. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement. Institut National Polytechnique de Grenoble, 269 pp (in French).
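
    A minimal sketch of step 2's "classical performance criteria" against outlet discharge: Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE), computed on synthetic hourly series; neither the data nor the choice of criteria comes from the study itself.

    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def kge(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]              # correlation
        alpha = sim.std() / obs.std()                # variability ratio
        beta = sim.mean() / obs.mean()               # bias ratio
        return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    rng = np.random.default_rng(4)
    obs = np.clip(rng.gamma(2, 0.5, 24 * 365), 0.01, None)   # hourly discharge (m3/s)
    sim = obs * 1.1 + rng.normal(0, 0.2, obs.size)           # a biased, noisy model
    print(f"NSE = {nse(obs, sim):.2f}, KGE = {kge(obs, sim):.2f}")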

  14. Development of a Real-Time PCR for a Sensitive One-Step Coprodiagnosis Allowing both the Identification of Carnivore Feces and the Detection of Toxocara spp. and Echinococcus multilocularis

    PubMed Central

    Knapp, Jenny; Umhang, Gérald; Poulle, Marie-Lazarine; Millon, Laurence

    2016-01-01

    Studying the environmental occurrence of parasites of concern for humans and animals based on coprosamples is an expanding field of work in epidemiology and the ecology of health. Detecting and quantifying Toxocara spp. and Echinococcus multilocularis, two predominant zoonotic helminths circulating in European carnivores, in feces may help to better target measures for prevention. A rapid, sensitive, and one-step quantitative PCR (qPCR) allowing detection of E. multilocularis and Toxocara spp. was developed in the present study, combined with a host fecal test based on the identification of three carnivores (red fox, dog, and cat) involved in the life cycles of these parasites. A total of 68 coprosamples were collected from identified specimens from Vulpes vulpes, Canis lupus familiaris, Canis lupus, Felis silvestris catus, Meles meles, Martes foina, and Martes martes. With DNA coprosamples, real-time PCR was performed in duplex with a qPCR inhibitor control specifically designed for this study. All the coprosample host identifications were confirmed by qPCR combined with sequencing, and parasites were detected and confirmed (E. multilocularis in red foxes and Toxocara cati in cats; 16% of samples presented inhibition). By combining parasite detection and quantification, the host fecal test, and a new qPCR inhibitor control, we created a technique with a high sensitivity that may considerably improve environmental studies of pathogens. PMID:26969697

  15. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.

    PubMed

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data has been missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.) and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is the unique tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.
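
    A minimal sketch of steps iii) and iv) above on synthetic channels: estimate functional connectivity (here plain correlation; EEGNET offers several estimators) and derive graph measures such as node degree and network density. Channel count, threshold, and data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n_ch, n_t = 16, 2000
    source = rng.normal(size=n_t)                    # one shared underlying source
    signals = 0.6 * source + rng.normal(size=(n_ch, n_t))   # noisy channel time courses

    conn = np.abs(np.corrcoef(signals))              # functional connectivity matrix
    np.fill_diagonal(conn, 0.0)
    adj = conn > 0.3                                 # threshold into a binary graph

    degree = adj.sum(axis=1)                         # graph-theory measures
    density = adj.sum() / (n_ch * (n_ch - 1))
    print("mean degree:", degree.mean(), "density:", round(float(density), 2))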

  17. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome

    PubMed Central

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the “connectome”. Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.) and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes: i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is, to our knowledge, the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/. PMID:26379232

  18. Development of a Real-Time PCR for a Sensitive One-Step Coprodiagnosis Allowing both the Identification of Carnivore Feces and the Detection of Toxocara spp. and Echinococcus multilocularis.

    PubMed

    Knapp, Jenny; Umhang, Gérald; Poulle, Marie-Lazarine; Millon, Laurence

    2016-05-15

    Studying the environmental occurrence of parasites of concern for humans and animals based on coprosamples is an expanding field of work in epidemiology and the ecology of health. Detecting and quantifying Toxocara spp. and Echinococcus multilocularis, two predominant zoonotic helminths circulating in European carnivores, in feces may help to better target measures for prevention. A rapid, sensitive, and one-step quantitative PCR (qPCR) allowing detection of E. multilocularis and Toxocara spp. was developed in the present study, combined with a host fecal test based on the identification of three carnivores (red fox, dog, and cat) involved in the life cycles of these parasites. A total of 68 coprosamples were collected from identified specimens from Vulpes vulpes, Canis lupus familiaris, Canis lupus, Felis silvestris catus, Meles meles, Martes foina, and Martes martes. With DNA coprosamples, real-time PCR was performed in duplex with a qPCR inhibitor control specifically designed for this study. All the coprosample host identifications were confirmed by qPCR combined with sequencing, and parasites were detected and confirmed (E. multilocularis in red foxes and Toxocara cati in cats; 16% of samples presented inhibition). By combining parasite detection and quantification, the host fecal test, and a new qPCR inhibitor control, we created a technique with a high sensitivity that may considerably improve environmental studies of pathogens. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  19. Kinesin Steps Do Not Alternate in Size

    PubMed Central

    Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.

    2008-01-01

    Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906

  20. Brownian trail rectified

    NASA Astrophysics Data System (ADS)

    Hurd, Alan J.; Ho, Pauline

    The experiments described here indicate when one of Nature's best fractals -- the Brownian trail -- becomes nonfractal. In most ambient fluids, the trail of a Brownian particle is self-similar over many decades of length. For example, the trail of a submicron particle suspended in an ordinary liquid, recorded at equal time intervals, exhibits apparently discontinuous changes in velocity from macroscopic lengths down to molecular lengths: the trail is a random walk with no velocity memory from one step to the next. In ideal Brownian motion, the kinks in the trail persist to infinitesimal time intervals, i.e., it is a curve without tangents. Even in real Brownian motion in a liquid, the time interval must be shortened to approximately 10⁻⁸ s before the velocity appears continuous. In sufficiently rarefied environments, this time resolution at which a Brownian trail is rectified from a curve without tangents to a smoothly varying trajectory is greatly lengthened, making it possible to study the kinetic regime by dynamic light scattering. Our recent experiments with particles in a plasma have demonstrated this capability. In this regime, the particle velocity persists over a finite step length allowing an analogy to an ideal gas with Maxwell-Boltzmann velocities; the particle mass could be obtained from equipartition. The crossover from ballistic flight to hydrodynamic diffusion was also seen.

  1. High-quality and interactive animations of 3D time-varying vector fields.

    PubMed

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the perceptual issues associated with visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field, such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
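
    A minimal sketch of the advection stage described above: seed particles are stepped along their path lines with a fourth-order Runge-Kutta integrator and would then serve as seed points for field-line generation at each frame. The analytic swirl field is a stand-in for real flow data, and the redistribution logic that keeps field lines evenly spaced is omitted.

      import numpy as np

      def velocity(p, t):                      # toy unsteady swirl; stand-in data
          x, y, z = p
          return np.array([-y, x, 0.2 * np.sin(t)])

      def rk4_step(p, t, dt):
          k1 = velocity(p, t)
          k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
          k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
          k4 = velocity(p + dt * k3, t + dt)
          return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      rng = np.random.default_rng(1)
      seeds = rng.uniform(-1, 1, size=(100, 3))   # particles spread over domain
      dt, t = 0.05, 0.0
      for frame in range(200):                    # advect seeds along path lines
          seeds = np.array([rk4_step(p, t, dt) for p in seeds])
          t += dt
          # at each frame, field lines would be integrated from 'seeds' through
          # the instantaneous field (velocity or vorticity), then rendered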

  2. Three-dimensional planning in craniomaxillofacial surgery

    PubMed Central

    Rubio-Palau, Josep; Prieto-Gundin, Alejandra; Cazalla, Asteria Albert; Serrano, Miguel Bejarano; Fructuoso, Gemma Garcia; Ferrandis, Francisco Parri; Baró, Alejandro Rivera

    2016-01-01

    Introduction: Three-dimensional (3D) planning in oral and maxillofacial surgery has become a standard in the planning of a variety of procedures such as dental implants and orthognathic surgery. By using custom-made cutting and positioning guides, the virtual surgery is exported to the operating room, increasing precision and improving results. Materials and Methods: We present our experience in the treatment of craniofacial deformities with 3D planning. Software to plan the different procedures has been selected for each case, depending on the procedure (Nobel Clinician, Kodak 3DS, Simplant O&O, Dolphin 3D, Timeus, Mimics and 3-Matic). The treatment protocol is presented step by step, from virtual planning, design, and printing of the cutting and positioning guides to patients’ outcomes. Conclusions: 3D planning reduces the surgical time and allows possible difficulties and complications to be predicted. On the other hand, it increases preoperative planning time and requires a learning curve. The only drawback is the cost of the procedure. At present, the additional preoperative work can be justified because of surgical time reduction and more predictable results. In the future, the cost and time investment will be reduced. 3D planning is here to stay. It is already a fact in craniofacial surgery, and the investment is completely justified by the risk reduction and precise results. PMID:28299272

  3. Three-dimensional planning in craniomaxillofacial surgery.

    PubMed

    Rubio-Palau, Josep; Prieto-Gundin, Alejandra; Cazalla, Asteria Albert; Serrano, Miguel Bejarano; Fructuoso, Gemma Garcia; Ferrandis, Francisco Parri; Baró, Alejandro Rivera

    2016-01-01

    Three-dimensional (3D) planning in oral and maxillofacial surgery has become a standard in the planning of a variety of procedures such as dental implants and orthognathic surgery. By using custom-made cutting and positioning guides, the virtual surgery is exported to the operating room, increasing precision and improving results. We present our experience in the treatment of craniofacial deformities with 3D planning. Software to plan the different procedures has been selected for each case, depending on the procedure (Nobel Clinician, Kodak 3DS, Simplant O&O, Dolphin 3D, Timeus, Mimics and 3-Matic). The treatment protocol is presented step by step, from virtual planning, design, and printing of the cutting and positioning guides to patients' outcomes. 3D planning reduces the surgical time and allows possible difficulties and complications to be predicted. On the other hand, it increases preoperative planning time and requires a learning curve. The only drawback is the cost of the procedure. At present, the additional preoperative work can be justified because of surgical time reduction and more predictable results. In the future, the cost and time investment will be reduced. 3D planning is here to stay. It is already a fact in craniofacial surgery, and the investment is completely justified by the risk reduction and precise results.

  4. Personalized Physical Activity Coaching: A Machine Learning Approach

    PubMed Central

    Dijkhuis, Talko B.; van Ittersum, Miriam W.; Velthuijsen, Hugo

    2018-01-01

    Living a sedentary lifestyle is one of the major causes of numerous health problems. To encourage employees to lead a less sedentary life, the Hanze University started a health promotion program. One of the interventions in the program was the use of an activity tracker to record participants' daily step count. The daily step count served as input for a fortnightly coaching session. In this paper, we investigate the possibility of automating part of the coaching procedure on physical activity by providing personalized feedback throughout the day on a participant's progress in achieving a personal step goal. The gathered step count data were used to train eight different machine learning algorithms to make hourly estimations of the probability of achieving a personalized, daily step threshold. In 80% of the individual cases, the Random Forest algorithm was the best performing algorithm (mean accuracy = 0.93, range = 0.88–0.99, and mean F1-score = 0.90, range = 0.87–0.94). To demonstrate the practical usefulness of these models, we developed a proof-of-concept Web application that provides personalized feedback about whether a participant is expected to reach his or her daily threshold. We argue that the use of machine learning could become an invaluable asset in the process of automated personalized coaching. The individualized algorithms allow for predicting physical activity during the day and provide the possibility to intervene in time. PMID:29463052
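
    A minimal sketch of the kind of per-participant model described above, using scikit-learn's RandomForestClassifier: the cumulative step count at a given hour feeds an estimate of the probability of reaching a daily step goal. The synthetic Poisson data, the 9,600-step goal, and the two-feature encoding are illustrative assumptions, not the study's actual pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(42)
      n_days, goal = 200, 9_600            # goal near the synthetic mean so that
                                           # both outcomes occur; purely illustrative
      hourly = rng.poisson(lam=600, size=(n_days, 16))   # steps per waking hour
      cumulative = hourly.cumsum(axis=1)
      reached = (cumulative[:, -1] >= goal).astype(int)  # label: daily goal met

      hour = 8                                           # predict at mid-day
      X = np.column_stack([np.full(n_days, hour), cumulative[:, hour]])
      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X, reached)
      print(clf.predict_proba([[hour, 4600]])[0, 1])     # P(goal | steps so far)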

  5. Properties of true quaternary fission of nuclei with allowance for its multistep and sequential character

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadmensky, S. G., E-mail: kadmensky@phys.vsu.ru; Titova, L. V.; Bulychev, A. O.

    An analysis of the basic mechanisms of binary and ternary fission of nuclei led to the conclusion that true ternary and quaternary fission of nuclei has a sequential two-step (three-step) character, where, at the first step, a fissile nucleus emits a third light particle (third and fourth light particles) under shakeup effects associated with the nonadiabatic character of its collective deformation motion, whereupon the residual nucleus undergoes fission into two fission fragments. Owing to this, the formulas derived earlier for the widths with respect to sequential two- and three-step decays of nuclei, in constructing the theory of two-step two-proton decays and multistep decays in chains of genetically related nuclei, could be used to describe the relative yields and the angular and energy distributions of third and fourth light particles emitted in (α, α), (t, t), and (α, t) pairs upon the true quaternary spontaneous fission of ²⁵²Cf and the thermal-neutron-induced fission of ²³⁵U and ²³³U target nuclei. Mechanisms are proposed that explain the sharp decrease in the yield of the particle appearing second in time within light-particle pairs originating from true quaternary fission of nuclei, relative to the yields of analogous particles in true ternary fission of nuclei.

  6. Forecasting air quality time series using deep learning.

    PubMed

    Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse

    2018-04-13

    This paper presents one of the first applications of deep learning (DL) techniques to the prediction of air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours ahead with low error rates. The LSTM was also able to forecast the duration of continuous O3 exceedances. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps of less than eight time steps, with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 to 5, resulting in improved accuracy as measured by mean absolute error (MAE). Parameter sensitivity analysis showed that the look-back nodes associated with the RNN are a significant source of error if not aligned with the prediction horizon. Overall, MAEs of less than 2 were obtained for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured dataset were replaced using a new imputation method that generated calculated values closer to the expected value, based on the time and season. Decision trees were used to identify the input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long-range air pollution concentrations while monitoring only key parameters and without transforming the dataset in its entirety, thus allowing real-time inputs and continuous prediction.
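
    A hedged sketch of the forecasting setup: an LSTM maps a look-back window of hourly features to the next value of the target series. The five-feature synthetic data, the 24-step look-back, and the small architecture are assumptions made for illustration, not the paper's exact configuration; note that, per the sensitivity analysis above, the look-back should be aligned with the intended prediction horizon.

      import numpy as np
      import tensorflow as tf

      look_back, n_features = 24, 5                  # assumed window and feature count
      series = np.random.rand(2000, n_features).astype("float32")
      target = series[:, 0]                          # stand-in for the O3 channel

      # build (samples, look_back, features) windows and aligned targets
      X = np.stack([series[i:i + look_back] for i in range(len(series) - look_back)])
      y = target[look_back:]

      model = tf.keras.Sequential([
          tf.keras.layers.LSTM(32, input_shape=(look_back, n_features)),
          tf.keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mae")    # MAE, the paper's error metric
      model.fit(X, y, epochs=5, batch_size=64, verbose=0)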

  7. Dual-probe real-time PCR assay for detection of variola or other orthopoxviruses with dried reagents.

    PubMed

    Aitichou, Mohamed; Saleh, Sharron; Kyusung, Park; Huggins, John; O'Guinn, Monica; Jahrling, Peter; Ibrahim, Sofi

    2008-11-01

    A real-time, multiplexed polymerase chain reaction (PCR) assay based on dried PCR reagents was developed. Only variola virus could be specifically detected by a FAM (6-carboxyfluorescein)-labeled probe while camelpox, cowpox, monkeypox and vaccinia viruses could be detected by a TET (6-carboxytetramethylrhodamine)-labeled probe in a single PCR reaction. Approximately 25 copies of cloned variola virus DNA and 50 copies of genomic orthopoxviruses DNA could be detected with high reproducibility. The assay exhibited a dynamic range of seven orders of magnitude with a correlation coefficient value greater than 0.97. The sensitivity and specificity of the assay, as determined from 100 samples that contained nucleic acids from a multitude of bacterial and viral species were 96% and 98%, respectively. The limit of detection, sensitivity and specificity of the assay were comparable to standard real-time PCR assays with wet reagents. Employing a multiplexed format in this assay allows simultaneous discrimination of the variola virus from other closely related orthopoxviruses. Furthermore, the implementation of dried reagents in real-time PCR assays is an important step towards simplifying such assays and allowing their use in areas where cold storage is not easily accessible.

  8. Application of adaptive gridding to magnetohydrodynamic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, D.D.; Lotatti, I.; Satyanarayana, P.

    1996-12-31

    The numerical simulation of the primitive, three-dimensional, time-dependent, resistive MHD equations on an unstructured, adaptive poloidal mesh using the TRIM code has been reported previously. The toroidal coordinate is approximated pseudo-spectrally with finite Fourier series and Fast Fourier Transforms. The finite-volume algorithm preserves the magnetic field as solenoidal to round-off error, and also conserves mass, energy, and magnetic flux exactly. A semi-implicit method is used to allow for large time steps on the unstructured mesh. This is important for tokamak calculations, where the relevant time scale is determined by the poloidal Alfven time. This also allows the viscosity to be treated implicitly. A conjugate-gradient method with pre-conditioning is used for matrix inversion. Application to the growth and saturation of ideal instabilities in several toroidal fusion systems has been demonstrated. Recently we have concentrated on the details of the mesh adaption algorithm used in TRIM. We present several two-dimensional results relating to the use of grid adaptivity to track the evolution of hydrodynamic and MHD structures. Examples of plasma guns, opening switches, and supersonic flow over a magnetized sphere are presented. Issues relating to mesh adaption criteria are discussed.

  9. 40 CFR 49.153 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... this section). (C) Step 3. If any of the emissions units affected by your proposed modification result... not subject to this program. (2) Increase in an emissions unit's annual allowable emissions limit. If... emissions unit's allowable emissions of a regulated NSR pollutant above its existing annual allowable...

  10. 40 CFR 49.153 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... this section). (C) Step 3. If any of the emissions units affected by your proposed modification result... not subject to this program. (2) Increase in an emissions unit's annual allowable emissions limit. If... emissions unit's allowable emissions of a regulated NSR pollutant above its existing annual allowable...

  11. 40 CFR 49.153 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... this section). (C) Step 3. If any of the emissions units affected by your proposed modification result... not subject to this program. (2) Increase in an emissions unit's annual allowable emissions limit. If... emissions unit's allowable emissions of a regulated NSR pollutant above its existing annual allowable...

  12. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including the durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov process is used: small pieces of the time series, of random length, are joined together, rather than individual weather states, each from a single time step, which is the common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
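
    A minimal sketch of the piecewise idea under stated assumptions: rather than chaining single-time-step states, random-length segments of the historical series are concatenated, with each junction matched on a discretized weather state. The synthetic wave-height series, the bin edges, and the segment-length bounds below are all illustrative stand-ins, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      # synthetic 'historical' significant-wave-height series (stand-in data)
      t = np.linspace(0, 60, 5000)
      hs = 1.5 + 0.8 * np.sin(t) + 0.3 * rng.standard_normal(5000)
      edges = [1.0, 1.5, 2.0, 2.5]
      bins = np.digitize(hs, edges)               # discretized weather states

      def generate(n_out, min_len=12, max_len=72):
          out = list(hs[:24])                     # seed with a real segment
          while len(out) < n_out:
              state = np.digitize(out[-1], edges)         # state at the junction
              starts = np.flatnonzero(bins[:len(bins) - max_len] == state)
              if starts.size == 0:                        # fallback for rare states
                  starts = np.arange(len(hs) - max_len)
              s = rng.choice(starts)                      # jump to a matching state
              out.extend(hs[s:s + rng.integers(min_len, max_len)])
          return np.array(out[:n_out])

      synthetic_year = generate(8760)             # one synthetic hourly year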

  13. PC_Eyewitness: evaluating the New Jersey method.

    PubMed

    MacLin, Otto H; Phelan, Colin M

    2007-05-01

    One important variable in eyewitness identification research is lineup administration procedure. Lineups administered sequentially (one at a time) have been shown to reduce the number of false identifications in comparison with those administered simultaneously (all at once). As a result, some policymakers have adopted sequential administration. However, they have made slight changes to the method used in psychology laboratories. Eyewitnesses in the field are allowed to take multiple passes through a lineup, whereas participants in the laboratory are allowed only one pass. PC_Eyewitness (PCE) is a computerized system used to construct and administer simultaneous or sequential lineups in both the laboratory and the field. It is currently being used in laboratories investigating eyewitness identification in the United States, Canada, and abroad. A modified version of PCE is also being developed for a local police department. We developed a new module for PCE, the New Jersey module, to examine the effects of a second pass. We found that the sequential advantage was eliminated when the participants were allowed to view the lineup a second time. The New Jersey module, and steps we are taking to improve on the module, are presented here and are being made available to the research and law enforcement communities.

  14. Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats

    NASA Astrophysics Data System (ADS)

    Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.

    2018-03-01

    Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.

  15. An array of antenna-coupled superconducting microbolometers for passive indoors real-time THz imaging

    NASA Astrophysics Data System (ADS)

    Luukanen, A.; Grönberg, L.; Helistö, P.; Penttilä, J. S.; Seppä, H.; Sipola, H.; Dietlein, C. R.; Grossman, E. N.

    2006-05-01

    The temperature resolving power (NETD) of millimeter wave imagers based on InP HEMT MMIC radiometers is typically about 1 K (30 ms), but the MMIC technology is limited to operating frequencies below ~150 GHz. In this paper we report the first results from a pixel developed for an eight-pixel sub-array of superconducting antenna-coupled microbolometers, a first step towards a real-time imaging system with frequency coverage of 0.2-3.6 THz. These detectors have demonstrated video-rate NETDs in the millikelvin range, close to the fundamental photon noise limit, when operated at a bath temperature of ~4 K. The detectors will be operated within a turn-key cryogen-free pulse tube refrigerator, which allows for continuous operation without the need for liquid cryogens. The outstanding frequency agility of bolometric detectors allows for multi-frequency imaging, which greatly enhances the discrimination of, e.g., explosives against innocuous items concealed underneath clothing.

  16. Development of Loop-Mediated Isothermal Amplification (LAMP) Assay for Rapid and Sensitive Identification of Ostrich Meat

    PubMed Central

    Abdulmawjood, Amir; Grabowski, Nils; Fohler, Svenja; Kittler, Sophie; Nagengast, Helga; Klein, Guenter

    2014-01-01

    Animal species identification is one of the primary duties of official food control. Since ostrich meat is difficult to differentiate macroscopically from beef, new analytical methods are needed. To enforce labeling regulations for the authentication of ostrich meat, it is important to develop and evaluate a rapid and reliable assay. In the present study, a loop-mediated isothermal amplification (LAMP) assay based on the cytochrome b gene of the mitochondrial DNA of the species Struthio camelus was developed. The LAMP assay was used in combination with a real-time fluorometer. The developed system allowed the detection of 0.01% ostrich meat in meat products. In parallel, a direct swab method without nucleic acid extraction using the HYPLEX LPTV buffer was also evaluated. This rapid processing method allowed detection of ostrich meat without major incubation steps. In summary, the LAMP assay had excellent sensitivity and specificity for detecting ostrich meat and could provide a sampling-to-result identification time of 15 to 20 minutes. PMID:24963709

  17. Transfer effects of step training on stepping performance in untrained directions in older adults: A randomized controlled trial.

    PubMed

    Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina

    2017-05-01

    Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into: forward step training (FT); lateral plus forward step training (FLT); or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Visual PEF Reader - VIPER

    NASA Technical Reports Server (NTRS)

    Luo, Victor; Khanampornpan, Teerapat; Boehmer, Rudy A.; Kim, Rachel Y.

    2011-01-01

    This software graphically displays all pertinent information from a Predicted Events File (PEF) using the Java Swing framework, which allows for multi-platform support. The PEF is hard to search through when looking for specific information, and the MRO (Mars Reconnaissance Orbiter) Mission Planning & Sequencing Team (MPST) wanted a different way to visualize the data. This tool provides the team with a visual way of reviewing and error-checking the sequence product. The front end of the tool contains much of the aesthetically appealing material for viewing. The time stamp is displayed in the top left corner, and highlighted details are displayed in the bottom left corner. The time bar stretches along the top of the window, and the rest of the space is allotted for blocks and step functions. A preferences window is used to control the layout of the sections along with the ability to choose the color and size of the blocks. Double-clicking on a block will show the information contained within it. Zooming in to a certain level will graphically display that information as an overlay on the block itself. Other functions include using hotkeys to navigate, an option to jump to a specific time, enabling a vertical line, and double-clicking to zoom in/out. The back end involves a configuration file that allows a more experienced user to pre-define the structure of a block, a single event, or a step function. The user has to determine what information is important within each block and what actually defines the beginning and end of a block. This gives the user much more flexibility in terms of what the tool searches for. In addition to this configurability, all the settings in the preferences window are saved in the configuration file as well.

  19. Coupled Flow and Mechanics in Porous and Fractured Media*

    NASA Astrophysics Data System (ADS)

    Martinez, M. J.; Newell, P.; Bishop, J.

    2012-12-01

    Numerical models describing subsurface flow through deformable porous materials are important for understanding and enabling energy security and climate security. Some applications of current interest come from such diverse areas as geologic sequestration of anthropogenic CO2, hydro-fracturing for stimulation of hydrocarbon reservoirs, and modeling electrochemistry-induced swelling of fluid-filled porous electrodes. Induced stress fields in any of these applications can lead to structural failure and fracture. The ultimate goal of this research is to model evolving faults and fracture networks and flow within the networks while coupling to flow and mechanics within the intact porous structure. We report here on a new computational capability for coupling of multiphase porous flow with geomechanics, including assessment of over-pressure-induced structural damage. The geomechanics is coupled to the flow via the variation in the fluid pore pressures, whereas the flow problem is coupled to mechanics by the concomitant material strains, which alter the pore volume (porosity field) and hence the permeability field. For linear elastic solid mechanics a monolithic coupling strategy is utilized. For nonlinear elastic/plastic and fractured media, a segregated coupling is presented. To facilitate coupling with disparate flow and mechanics time scales, the coupling strategy allows for different time steps in the flow solve compared to the mechanics solve. If time steps are synchronized, the controller allows user-specified intra-time-step iterations. The iterative coupling is dynamically controlled based on a norm measuring the degree of variation in the deformed porosity. The model is applied for evaluation of the integrity of jointed caprock systems during CO2 sequestration operations. Creation or reactivation of joints can lead to enhanced pathways for leakage. Similarly, over-pressures can induce flow along faults. Fluid flow rates in fractures are strongly dependent on the effective hydraulic aperture, which is a non-linear function of effective normal stress. The dynamically evolving aperture field updates the effective, anisotropic permeability tensor, thus resulting in a highly coupled multiphysics problem. Two models of geomechanical damage are discussed: critical shear-slip criteria and a sub-grid joint model. Leakage rates through the caprock resulting from the joint model are compared to those assuming intact material, allowing a correlation between potential for leakage and injection rates/pressures, for various in-situ stratigraphies. *This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
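
    A schematic sketch of the segregated coupling loop described above, with placeholder one-line "solvers": the flow solve is subcycled with a smaller time step inside each mechanics step, and intra-time-step iterations repeat until the change in the deformed porosity field drops below a tolerance, mirroring the norm-based controller. All of the physics here is a toy stand-in, not the actual code.

      import numpy as np

      def solve_flow(p, phi, dt):                  # placeholder flow solve
          return p + dt * 0.01 * (phi - phi.mean())

      def solve_mechanics(p):                      # placeholder mechanics solve
          return 1e-3 * (p - p.mean())             # strain from pore pressure

      phi0 = np.full(100, 0.2)                     # initial porosity field
      phi, p = phi0.copy(), np.zeros(100)
      dt_mech, n_sub, tol = 1.0, 10, 1e-10         # flow step = dt_mech / n_sub

      for step in range(5):
          for it in range(20):                     # intra-time-step iterations
              p_new = p.copy()
              for _ in range(n_sub):               # subcycle the flow solve
                  p_new = solve_flow(p_new, phi, dt_mech / n_sub)
              phi_new = phi0 * (1.0 + solve_mechanics(p_new))  # strain -> porosity
              converged = np.linalg.norm(phi_new - phi) < tol  # porosity-change norm
              phi = phi_new
              if converged:
                  break
          p = p_new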

  20. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images.

    PubMed

    Díaz, Gloria; González, Fabio A; Romero, Eduardo

    2009-04-01

    Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an Inclusion-Tree representation that structures the pixel information into objects, from which erythrocytes are found; and, finally, a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and an average specificity of 91.2%.
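
    A minimal sketch of the color-space part of the segmentation step: pixels are mapped to normalized RGB (chromaticity) coordinates, where a simple threshold separates candidate erythrocyte pixels from background. The threshold value and random image are illustrative assumptions; the paper's actual pipeline uses an Inclusion-Tree representation and a trained classifier bank, which are not reproduced here.

      import numpy as np

      def normalized_rgb(image):                   # image: H x W x 3, uint8
          rgb = image.astype(np.float64) + 1e-9    # avoid division by zero
          s = rgb.sum(axis=2, keepdims=True)
          return rgb / s                           # r + g + b = 1 per pixel

      img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
      chroma = normalized_rgb(img)
      # illustrative rule: stained erythrocytes are assumed redder than background
      erythrocyte_mask = chroma[..., 0] > 0.40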

  1. Optimization of an incubation step to maximize sulforaphane content in pre-processed broccoli.

    PubMed

    Mahn, Andrea; Pérez, Carmen

    2016-11-01

    Sulforaphane is a powerful anticancer compound, found naturally in food, which comes from the hydrolysis of glucoraphanin, the main glucosinolate of broccoli. The aim of this work was to maximize sulforaphane content in broccoli by designing an incubation step after subjecting broccoli pieces to an optimized blanching step. Incubation was optimized through a Box-Behnken design using ascorbic acid concentration, incubation temperature and incubation time as factors. The optimal incubation conditions were 38 °C for 3 h and 0.22 mg ascorbic acid per g fresh broccoli. The maximum sulforaphane concentration predicted by the model was 8.0 µmol g⁻¹, which was confirmed experimentally, yielding a value of 8.1 ± 0.3 µmol g⁻¹. This represents a 585% increase with respect to fresh broccoli and a 119% increase in relation to blanched broccoli, equivalent to a conversion of 94% of glucoraphanin. The process proposed here allows sulforaphane content to be maximized, thus avoiding artificial chemical synthesis. The compound could probably be isolated from broccoli, and may find application as a nutraceutical or functional ingredient.
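
    A hedged sketch of the response-surface machinery behind a Box-Behnken optimization like the one above: a three-factor design is built in coded units, a full quadratic model is fit by least squares, and the predicted optimum is located on a grid. The synthetic yields and the coded-unit ranges are stand-ins, not the paper's data.

      import numpy as np
      from itertools import combinations

      # Box-Behnken design for three coded factors: 12 edge midpoints + a center
      pts = []
      for i, j in combinations(range(3), 2):
          for a in (-1.0, 1.0):
              for b in (-1.0, 1.0):
                  p = [0.0, 0.0, 0.0]
                  p[i], p[j] = a, b
                  pts.append(p)
      pts.append([0.0, 0.0, 0.0])
      X = np.array(pts)

      def features(Z):          # full quadratic model: 1, x_i, x_i^2, x_i*x_j
          cols = [np.ones(len(Z))]
          cols += [Z[:, i] for i in range(3)]
          cols += [Z[:, i] ** 2 for i in range(3)]
          cols += [Z[:, i] * Z[:, j] for i, j in combinations(range(3), 2)]
          return np.column_stack(cols)

      # synthetic 'sulforaphane yield' with an interior optimum (stand-in data)
      y = 8.0 - ((X - 0.2) ** 2).sum(axis=1)
      beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

      grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3), -1).reshape(-1, 3)
      pred = features(grid) @ beta
      print("optimum (coded units):", grid[pred.argmax()])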

  2. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state transitions, and (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm⁻¹ bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected, and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
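
    A minimal sketch of null-collision (Woodcock-style) tracking, the core trick referred to above: tentative collision points are sampled with a constant majorant coefficient k_hat, and the true absorption coefficient k(x) is evaluated only at those points (in the paper, directly from the line database) to decide between a real and a null collision. The sinusoidal k(x) below is an illustrative stand-in for a line-by-line coefficient.

      import numpy as np

      rng = np.random.default_rng(3)

      def k(x):                                    # true absorption coefficient
          return 0.5 + 0.4 * np.sin(x) ** 2        # must stay below k_hat

      k_hat = 1.0                                  # majorant coefficient

      def transmissivity(L, n_paths=100_000):
          survived = 0
          for _ in range(n_paths):
              x = 0.0
              while True:
                  x += -np.log(rng.random()) / k_hat   # tentative free path
                  if x >= L:                           # escaped the column
                      survived += 1
                      break
                  if rng.random() < k(x) / k_hat:      # real collision: absorbed
                      break
                  # otherwise a null collision: keep tracking
          return survived / n_paths

      print(transmissivity(2.0))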

  3. A method for real-time generation of augmented reality work instructions via expert movements

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Bhaskar; Winer, Eliot

    2015-03-01

    Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR. 3D models must be created, textured, oriented and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.

  4. Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip

    PubMed Central

    Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.

    2017-01-01

    Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10⁵ copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028

  5. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    DTIC Science & Technology

    2007-05-01

    ...an important step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system... problems in spoken dialog (as well as other interactive systems) and constitutes an important step towards building autonomously self-improving... implicitly-supervised learning approach is applicable to other problems, and represents an important step towards developing autonomous, self-improving systems.

  6. Multiple-time-stepping generalized hybrid Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.
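
    A sketch of the force-splitting idea underlying the MTS schemes discussed above, written as a plain r-RESPA-style integrator rather than the full (G)SHMC machinery: slow forces kick the momenta at the outer step, while the stiff fast forces are integrated with a smaller inner step. The 1D double-well split and unit mass are illustrative assumptions.

      import numpy as np

      def f_fast(x):                           # stiff part: double-well force
          return -4.0 * x ** 3 + 4.0 * x

      def f_slow(x):                           # soft, 'expensive' part
          return -0.1 * x

      def respa_step(x, p, dt, n_inner):
          p += 0.5 * dt * f_slow(x)            # slow half-kick (outer step)
          h = dt / n_inner
          for _ in range(n_inner):             # velocity Verlet on the fast part
              p += 0.5 * h * f_fast(x)
              x += h * p                       # unit mass assumed
              p += 0.5 * h * f_fast(x)
          p += 0.5 * dt * f_slow(x)            # slow half-kick
          return x, p

      x, p = 1.0, 0.0
      for _ in range(1000):
          x, p = respa_step(x, p, dt=0.05, n_inner=5)
      print(x, p)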

  7. Cost minimization analysis of different growth hormone pen devices based on time-and-motion simulations

    PubMed Central

    2010-01-01

    Background: Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods: Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results: Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin® Pen (GTP, Pfizer, Inc., New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF, 1.35 minutes; NNP, 2.48 minutes; GTP, 4.11 minutes; HTP, 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different. NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes) (p < 0.05). The number of weekly required administration steps was also lowest with NNF and NNP. Opportunity cost savings were greater for devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at the time of the study. Overall supply costs represented <1% of drug costs for all devices. Conclusions: Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has the highest net costs. PMID:20377905

  8. Cost minimization analysis of different growth hormone pen devices based on time-and-motion simulations.

    PubMed

    Nickman, Nancy A; Haak, Sandra W; Kim, Jaewhan

    2010-04-08

    Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Norditropin(R) NordiFlex and Norditropin(R) NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsvaerd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin(R) Pen (GTP, Pfizer, Inc., New York, New York) or HumatroPen(R) (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF, 1.35 minutes; NNP, 2.48 minutes; GTP, 4.11 minutes; HTP, 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different. NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes) (p < 0.05). The number of weekly required administration steps was also lowest with NNF and NNP. Opportunity cost savings were greater for devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at the time of the study. Overall supply costs represented <1% of drug costs for all devices. Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has the highest net costs.

  9. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.

  10. Step-rate cut-points for physical activity intensity in patients with multiple sclerosis: The effect of disability status.

    PubMed

    Agiovlasitis, Stamatis; Sandroff, Brian M; Motl, Robert W

    2016-02-15

    Evaluating the relationship between step-rate and rate of oxygen uptake (VO2) may allow for practical physical activity assessment in patients with multiple sclerosis (MS) of differing disability levels. This study examined whether the VO2 to step-rate relationship during over-ground walking differs across disability levels among patients with MS, and developed step-rate thresholds for moderate- and vigorous-intensity physical activity. Adults with MS (N=58; age: 51 ± 9 years; 48 women) completed one over-ground walking trial at comfortable speed, one at 0.22 m · s⁻¹ slower, and one at 0.22 m · s⁻¹ faster. Each trial lasted 6 min. VO2 was measured with portable spirometry and steps were counted by hand tally. Disability status was classified as mild, moderate, or severe based on Expanded Disability Status Scale scores. Multi-level regression indicated that step-rate, disability status, and height significantly predicted VO2 (p<0.05). Based on this model, we developed step-rate thresholds for activity intensity that vary by disability status and height. A separate regression without height allowed for the development of step-rate thresholds that vary only by disability status. The VO2 during over-ground walking differs among ambulatory patients with MS based on disability level and height, yielding different step-rate thresholds for physical activity intensity. Copyright © 2015 Elsevier B.V. All rights reserved.
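
    An illustrative inversion of the kind of regression described above: given a fitted model of the form VO2 = b0 + b1·(step rate) + b2·(height) + b(group), the step-rate cut-point for moderate intensity (3 METs, i.e., 10.5 mL/kg/min) follows by solving for step rate. All coefficients below are hypothetical placeholders; the paper's fitted values are not reproduced here.

      # hypothetical regression coefficients (mL/kg/min scale), for illustration
      B0, B_RATE, B_HEIGHT = -8.0, 0.12, 2.5
      B_GROUP = {"mild": 0.0, "moderate": 1.2, "severe": 2.8}

      def moderate_threshold(height_m, disability, mets=3.0):
          """Step rate (steps/min) at which predicted VO2 reaches `mets` METs."""
          vo2_target = 3.5 * mets                 # 1 MET = 3.5 mL/kg/min
          return (vo2_target - B0 - B_HEIGHT * height_m - B_GROUP[disability]) / B_RATE

      for group in B_GROUP:
          print(group, round(moderate_threshold(1.70, group)), "steps/min")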

  11. Focal cryotherapy: step by step technique description

    PubMed Central

    Redondo, Cristina; Srougi, Victor; da Costa, José Batista; Baghdad, Mohammed; Velilla, Guillermo; Nunes-Silva, Igor; Bergerat, Sebastien; Garcia-Barreras, Silvia; Rozet, François; Ingels, Alexandre; Galiano, Marc; Sanchez-Salas, Rafael; Barret, Eric; Cathelineau, Xavier

    2017-01-01

    Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68-year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors, under ultrasound guidance. A cystoscopy confirms the right positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated once. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1–5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment. PMID:28727387

  12. Focal cryotherapy: step by step technique description.

    PubMed

    Redondo, Cristina; Srougi, Victor; da Costa, José Batista; Baghdad, Mohammed; Velilla, Guillermo; Nunes-Silva, Igor; Bergerat, Sebastien; Garcia-Barreras, Silvia; Rozet, François; Ingels, Alexandre; Galiano, Marc; Sanchez-Salas, Rafael; Barret, Eric; Cathelineau, Xavier

    2017-01-01

    Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. We present the case of a 68-year-old man with localized PCa in the anterior aspect of the prostate. The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors, under ultrasound guidance. A cystoscopy confirms the right positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated once. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1-5). Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment. Copyright® by the International Brazilian Journal of Urology.

  13. Analysis of smear in high-resolution remote sensing satellites

    NASA Astrophysics Data System (ADS)

    Wahballah, Walid A.; Bazan, Taher M.; El-Tohamy, Fawzy; Fathy, Mahmoud

    2016-10-01

    High-resolution remote sensing satellites (HRRSS) that use time delay and integration (TDI) CCDs have the potential to introduce large amounts of image smear. Clocking and velocity-mismatch smear are two of the key factors inducing image smear. Clocking smear is caused by the discrete manner in which the charge is clocked in the TDI-CCDs. The relative motion between the HRRSS and the observed object requires that the image motion velocity be strictly synchronized with the velocity of charge-packet transfer (the line rate) throughout the integration time. When imaging an object off-nadir, the image motion velocity changes, breaking the synchronization between the image velocity and the CCD line rate. A model for estimating the image motion velocity in HRRSS is derived. The influence of this velocity mismatch, combined with clocking smear, on the modulation transfer function (MTF) is investigated using MATLAB simulations. The analysis is performed for cross-track and along-track imaging with different satellite attitude angles and TDI steps. The results reveal that the velocity mismatch ratio and the number of TDI steps have a serious impact on the smear MTF; a velocity mismatch ratio of 2% degrades the smear MTF by 32% at the Nyquist frequency when the TDI steps change from 32 to 96. In addition, the results show that to meet the requirement of a smear MTF ≥ 0.95 for TDI steps of 16 and 64, the allowable roll angles are 13.7° and 6.85°, and the permissible pitch angles are no more than 9.6° and 4.8°, respectively.
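
    As a rough illustration of how smear degrades the MTF, the sketch below evaluates the standard linear-motion-blur model, MTF_smear(f) = |sinc(d·f)|, under the simplifying assumption that a velocity mismatch ratio ε accumulates a smear of about d = N·ε pixels over N TDI stages. The numbers are illustrative and are not taken from the paper's simulation.

    ```python
    import numpy as np

    def smear_mtf(f_cyc_per_px, smear_px):
        # Standard linear-smear MTF; np.sinc is the normalized sinc, sin(pi x)/(pi x).
        return np.abs(np.sinc(smear_px * f_cyc_per_px))

    eps = 0.02    # 2% velocity mismatch ratio (assumption)
    f_nyq = 0.5   # Nyquist frequency in cycles/pixel
    for n_tdi in (16, 32, 64, 96):
        d = n_tdi * eps  # accumulated smear in pixels (simplified model)
        print(f"TDI={n_tdi:3d}: smear={d:.2f} px, MTF@Nyquist={smear_mtf(f_nyq, d):.3f}")
    ```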

  14. Mass production of silicon pore optics for ATHENA

    NASA Astrophysics Data System (ADS)

    Wille, Eric; Bavdaz, Marcos; Collon, Maximilien

    2016-07-01

    Silicon Pore Optics (SPO) provide high angular resolution with low effective area density, as required for the Advanced Telescope for High Energy Astrophysics (Athena). The x-ray telescope consists of several hundred SPO mirror modules. During the development of the process steps of the SPO technology, the specific requirements of a future mass production have been considered right from the beginning. The manufacturing methods heavily utilise off-the-shelf equipment from the semiconductor industry, robotic automation and parallel processing. This allows the present production flow to be upscaled in a cost-effective way to produce hundreds of mirror modules per year. Considering manufacturing predictions based on the current technology status, we present an analysis of the time and resources required for the Athena flight programme. This includes the full production process, starting with Si wafers and ending with the integration of the mirror modules. We present the times required for the individual process steps and identify the equipment required to produce two mirror modules per day. A preliminary timeline for building and commissioning the required infrastructure, and for flight model production of about 1000 mirror modules, is presented.
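
    A back-of-envelope check of the production timeline implied by these figures (the working-day count is an assumption, not a number from the paper):

    ```python
    modules_needed = 1000        # ~1000 mirror modules for the flight programme
    modules_per_day = 2          # target production rate from the text
    working_days_per_year = 230  # assumption

    per_year = modules_per_day * working_days_per_year
    print(f"{per_year} modules/year -> {modules_needed / per_year:.1f} years of production")
    # ~460 modules/year, i.e. roughly two years of flight-model production
    ```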

  15. Reversible simulation of irreversible computation

    NASA Astrophysics Data System (ADS)

    Li, Ming; Tromp, John; Vitányi, Paul

    1998-09-01

    Computer computations are generally irreversible, while the laws of physics are reversible. This mismatch is penalized by, among other things, the generation of excess thermal entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space-hungry or time-hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.
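
    Bennett's strategy can be tried out directly as a pebble game. The sketch below is a minimal reading of the standard checkpointing recursion (not the paper's formal construction): it pebbles checkpoint b starting from a pebble at a, then uncomputes intermediate checkpoints, using O(log n) simultaneous pebbles and O(n^log2(3)) moves for n segments.

    ```python
    def pebble(a, b, pebbles, trace):
        """Bennett-style recursion: place a pebble on b, keeping the one on a."""
        if b == a + 1:
            trace.append(("place", b)); pebbles.add(b)
            return
        m = (a + b) // 2
        pebble(a, m, pebbles, trace)    # advance to a midpoint checkpoint
        pebble(m, b, pebbles, trace)    # advance from the midpoint to the target
        unpebble(a, m, pebbles, trace)  # uncompute the midpoint, reclaiming space

    def unpebble(a, b, pebbles, trace):
        """Exact reverse of pebble(a, b): removes the pebble at b."""
        if b == a + 1:
            trace.append(("remove", b)); pebbles.discard(b)
            return
        m = (a + b) // 2
        pebble(a, m, pebbles, trace)    # recompute the midpoint
        unpebble(m, b, pebbles, trace)  # uncompute the second half
        unpebble(a, m, pebbles, trace)  # uncompute the midpoint again

    pebbles, trace = {0}, []
    pebble(0, 8, pebbles, trace)
    print(len(trace), "moves; final pebbles:", sorted(pebbles))  # 27 moves; [0, 8]
    ```

    The 27 moves for 8 segments, against only a logarithmic number of simultaneous pebbles, is exactly the space/time trade-off the abstract refers to.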

  16. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
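
    The scheduling idea can be shown with a toy loop: particle motion is advanced every step, while the (communication-heavy) thermostat update runs only every k steps. The sketch uses a harmonic force and a crude velocity-rescaling stand-in, not the authors' thermostat or barostat.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forces(x):
        return -x  # toy harmonic forces (assumption)

    def step_nve(x, v, dt):
        """Plain velocity-Verlet particle-motion update."""
        v += 0.5 * dt * forces(x)
        x += dt * v
        v += 0.5 * dt * forces(x)
        return x, v

    def thermostat(v, t_target):
        """Stand-in thermostat: crude velocity rescaling (kB = m = 1 units)."""
        return v * np.sqrt(t_target / np.mean(v**2))

    x, v = rng.normal(size=1000), rng.normal(size=1000)
    dt, t_target, thermo_interval = 0.01, 1.0, 50  # thermostat applied infrequently
    for step in range(5000):
        x, v = step_nve(x, v, dt)
        if step % thermo_interval == 0:   # the point of the framework: the costly
            v = thermostat(v, t_target)   # global update happens only rarely
    print("mean kinetic 'temperature':", np.mean(v**2))
    ```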

  17. Characterization of an Isolated Kidney's Vasculature for Use in Bio-Thermal Modeling

    NASA Astrophysics Data System (ADS)

    Payne, Allison H.; Parker, Dennis L.; Moellmer, Jeff; Roemer, Robert B.; Clifford, Sarah

    2007-05-01

    Accurate bio-thermal modeling requires site-specific modeling of discrete vascular anatomy. Presented herewith are several steps that have been developed to describe the vessel network of isolated canine and bovine kidneys. These perfused, isolated kidneys provide an environment to repeatedly test and improve acquisition methods to visualize the vascular anatomy, as well as providing a method to experimentally validate discrete vasculature thermal models. The organs are preserved using a previously developed methodology that keeps the vasculature intact, allowing the organ to be perfused. It also allows for the repeated fixation and re-hydration of the same organ, permitting the comparison of various methods and models. The organ extraction, alcohol preservation, and perfusion of the organ are described. The vessel locations were obtained through a high-resolution time-of-flight (TOF) magnetic resonance angiography (MRA) technique. Sequential improvements of both the experimental setup used for this acquisition and the MR sequence development are presented. The improvements in MR acquisition and experimental setup increased the number of vessels seen in both the raw data and segmented images by 50%. An automatic vessel centerline extraction algorithm describes both vessel location and genealogy. Centerline descriptions also allow for vessel diameter and flow rate determination, providing valuable input parameters for the discrete vascular thermal model. Characterized vessel networks of both canine and bovine kidneys are presented. While these tools have been developed in an ex vivo environment, all steps can be applied to in vivo applications.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, William L; Gunderson, Jake A; Dickson, Peter M

    There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high-performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied, and contemporary kinetic and transport models accurately predict the time and location of ignition for simple geometries. However, there has been relatively little attention given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order, single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 1960s and 1970s. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook off ignition. In the 1970s, Catalano et al. noted that single-step kinetics would not accurately predict the time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 1980s, Tarver and McGuire published their well-known three-step kinetic expression, which included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition time prediction for the ODTX. However, the Tarver/McGuire model could not produce the internal temperature profiles observed in the small-scale radial experiments, nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior, and better models were needed. Brill et al. noted that the enthalpy change due to the β-δ crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model. Henson et al. deduced the kinetics and thermodynamics of the phase transition, providing Dickson et al. with the information necessary to develop a four-step model that included a two-step nucleation and growth mechanism for the β-δ phase transition. Initially, an irreversible scheme was proposed. That model accurately predicted the spatial and temporal cook off behavior of the small-scale radial experiment under slow heating conditions, but did not accurately capture the endothermic phase transition at a faster heating rate. The current version of the four-step model includes reversibility and accurately describes the small-scale radial experiment over a wide range of heating rates. We have observed impact-induced friction ignition of PBX 9501 with grit embedded between the explosive and the lower anvil surface. Observation was done using an infrared camera looking through the sapphire bottom anvil. The time to ignition and the temperature-time behavior were recorded. The time to ignition was approximately 500 microseconds and the temperature was approximately 1000 K. The four-step reversible kinetic scheme was previously validated for slow cook off scenarios. Our intention was to test its validity for significantly faster hot-spot processes, such as the impact-induced grit friction process studied here. We found the model predicted the ignition time within experimental error. There are caveats to consider when evaluating the agreement. The primary input to the model was friction work over an area computed by a stress analysis. The work rate itself, and the relative velocity of the grit and substrate, both have a strong dependence on the initial position of the grit. Any errors in the analysis or in the initial grit position would affect the model results. At this time, we do not know the sensitivity to these issues. However, the good agreement does suggest the four-step kinetic scheme may have universal applicability for HMX systems.
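
    For readers who want to experiment with this kind of multi-step model, the sketch below integrates a generic sequential first-order Arrhenius scheme A → B → C under an isothermal hold. The prefactors and activation energies are placeholders, not the Tarver/McGuire or four-step HMX parameters.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314  # gas constant, J/(mol K)

    # Illustrative sequential scheme A -> B -> C with first-order Arrhenius rates.
    # (prefactor 1/s, activation energy J/mol); values are invented placeholders.
    params = [(1e12, 1.5e5), (1e13, 1.7e5)]

    def rhs(t, y, T):
        a, b, c = y
        k1, k2 = (A * np.exp(-Ea / (R * T)) for A, Ea in params)
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    T = 500.0  # isothermal hold temperature in K (assumption)
    sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0.0, 0.0], args=(T,), max_step=10.0)
    print("final fractions A, B, C:", sol.y[:, -1].round(4))
    ```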

  19. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
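
    As a point of reference for the inflation the authors describe, the sketch below applies the classic design effect for a parallel cluster randomised trial, DE = 1 + (m − 1)ρ. The paper's formulae generalise this to longitudinal designs with cluster and individual autocorrelations, which this toy calculation does not attempt; all numbers are invented.

    ```python
    def design_effect_parallel(m, icc):
        """Classic inflation for a parallel cluster RCT: DE = 1 + (m - 1) * icc."""
        return 1.0 + (m - 1) * icc

    # Per-arm n from a standard individually randomised calculation (assumption),
    # inflated for clustering with assumed cluster size and intracluster correlation.
    n_individual = 128
    m, icc = 25, 0.05
    de = design_effect_parallel(m, icc)
    print(f"DE = {de:.2f}, inflated n per arm = {n_individual * de:.0f}")
    ```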

  20. Correction of phase errors in quantitative water-fat imaging using a monopolar time-interleaved multi-echo gradient echo sequence.

    PubMed

    Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C

    2017-09-01

    To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address a) a phase term induced by echo misalignments, which can be measured with a reference scan using reversed readout polarity, b) a phase term induced by the concomitant gradient field, which can be predicted from the gradient waveforms, and c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps for PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with theory, an echo-time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
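
    The common thread of steps a)-c) is demodulating a phase term that is known, whether measured or predicted, from the complex echoes. A toy illustration of that operation (the phase map here is invented, not a concomitant-field model):

    ```python
    import numpy as np

    def remove_phase(signal, phase):
        """Demodulate a known (predicted or measured) phase error from complex data."""
        return signal * np.exp(-1j * phase)

    rng = np.random.default_rng(1)
    true_signal = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    phi = 0.3 * np.arange(16).reshape(4, 4)      # placeholder phase-error map
    measured = true_signal * np.exp(1j * phi)    # contaminated echo
    print(np.allclose(remove_phase(measured, phi), true_signal))  # True
    ```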

  1. Spin-wave utilization in a quantum computer

    NASA Astrophysics Data System (ADS)

    Khitun, A.; Ostroumov, R.; Wang, K. L.

    2001-12-01

    We propose a quantum computer scheme using spin waves for quantum-information exchange. We demonstrate that spin waves in an antiferromagnetic layer grown on silicon may be used to perform single-qubit unitary transformations together with two-qubit operations during the cycle of computation. The most attractive feature of the proposed scheme is the possibility of random access to any qubit and, consequently, the ability to realize two-qubit gates between any two distant qubits. Also, spin waves allow us to eliminate the use of a strong external magnetic field and microwave pulses. We estimate that the proposed scheme achieves a ratio as high as 10^4 between the quantum-system coherence time and the time of a single computational step.

  2. Biomagnetic separation of Salmonella Typhimurium with high-affinity and specific ligand peptides isolated by the phage display technique

    NASA Astrophysics Data System (ADS)

    Steingroewer, Juliane; Bley, Thomas; Bergemann, Christian; Boschke, Elke

    2007-04-01

    Analyses of food-borne pathogens are of great importance in order to minimize the health risk for consumers. Thus, very sensitive and rapid detection methods are required. Current conventional culture techniques are very time-consuming. Modern immunoassays and biochemical analyses also require pre-enrichment steps, resulting in a turnaround time of at least 24 h. Biomagnetic separation (BMS) is a promising, more rapid method. In this study we describe the isolation of high-affinity and specific peptides from a phage-peptide library, which, combined with BMS, allow the detection of Salmonella spp. with a sensitivity similar to that of immunomagnetic separation using antibodies.

  3. Time dependent deformation and stress in the lithosphere. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Yang, M.

    1980-01-01

    Efficient computer programs incorporating a frontal solution and a time-stepping procedure were developed for modelling geodynamic problems. This scheme allows investigation of quasi-static phenomena, including the effects of the rheological structure of a tectonically active region. From three-dimensional models of strike slip earthquakes, it was found that lateral variation of viscosity affects the characteristics of surface deformations. The vertical deformation is especially informative about the viscosity structure in a strike slip fault zone. A three-dimensional viscoelastic model of a thrust earthquake indicated that the transient disturbance in plate velocity due to a great plate boundary earthquake is significant at intermediate distances, but becomes barely measurable 1000 km away from the source.

  4. Solid Hydrogen Experiments for Atomic Propellants

    NASA Technical Reports Server (NTRS)

    Palaszewski, Bryan

    2001-01-01

    This paper illustrates experiments that were conducted on the formation of solid hydrogen particles in liquid helium. Solid particles of hydrogen were frozen in liquid helium and observed with a video camera. The solid hydrogen particle sizes, their molecular structure transitions, and their agglomeration times were estimated. Particle sizes of 1.8 to 4.6 mm (0.07 to 0.18 in.) were measured. The particle agglomeration times were 0.5 to 11 min, depending on the loading of particles in the dewar. These experiments are the first step toward visually characterizing these particles, and they allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.

  5. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high-resolution cloth consisting of up to 2 million triangles, which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self-collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per-time-step basis. Distributed memory parallelism is used for both time evolution and collisions, and we specifically address Gauss-Seidel ordering of the repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  6. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    PubMed

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
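
    To make the method concrete, here is a minimal 1-D staggered-grid pressure/velocity FDTD loop for linear acoustics, with a transient source injected at one grid point. It is a generic illustration, not the authors' 3-D Cartesian or cylindrical implementation, and the medium values are assumptions.

    ```python
    import numpy as np

    # 1-D staggered leapfrog for du/dt = -(1/rho) dp/dx, dp/dt = -rho c^2 du/dx.
    nx, nt = 400, 800
    c, rho = 1500.0, 1000.0        # water-like sound speed and density (assumptions)
    dx = 0.05
    dt = 0.5 * dx / c              # CFL-limited time step
    p = np.zeros(nx)               # pressure grid
    u = np.zeros(nx + 1)           # staggered particle-velocity grid

    for n in range(nt):
        u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])       # momentum update
        p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])        # pressure update
        # transient (Gaussian-windowed tone) source at one grid point
        p[nx // 4] += np.sin(2 * np.pi * 3000 * n * dt) * np.exp(-((n - 100) / 30) ** 2)
    print("peak |p| on the grid:", np.abs(p).max())
    ```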

  7. Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example

    USGS Publications Warehouse

    Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.

    2016-02-10

    The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure and parameter values and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, the flexibility inherent in this modeling tool also makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs. Using this modeling tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this modeling tool allows the user to evaluate a range of management options and implications. The goal is a user-friendly tool for developing fish population models useful to natural resource managers in their decision-making processes; however, as with all population models, caution is needed, and a full understanding of the limitations of a model and the veracity of user-supplied parameters should always be considered when using such model output in the management of any species.
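
    The variance placement described above, with every random draw happening at the iteration and time-step levels, can be pictured with a toy Monte Carlo projection. The demographic numbers below are placeholders, not pallid sturgeon parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Survival is redrawn around its mean at EVERY time step of EVERY iteration,
    # i.e., all variance enters at the iteration and time-step levels.
    n_iter, n_years = 1000, 20
    mean_survival, sd_survival = 0.8, 0.05   # placeholder demographic values
    pop0 = 500.0

    final = np.empty(n_iter)
    for i in range(n_iter):                  # iteration level
        pop = pop0
        for t in range(n_years):             # time-step level
            s = np.clip(rng.normal(mean_survival, sd_survival), 0.0, 1.0)
            pop *= s
        final[i] = pop
    print(f"median final population: {np.median(final):.1f}")
    ```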

  8. Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data.

    PubMed

    de Cheveigné, Alain; Arzounian, Dorothée

    2018-05-15

    Electroencephalography (EEG), magnetoencephalography (MEG) and related techniques are prone to glitches, slow drift, steps, etc., that contaminate the data and interfere with the analysis and interpretation. These artifacts are usually addressed in a preprocessing phase that attempts to remove them or minimize their impact. This paper offers a set of useful techniques for this purpose: robust detrending, robust rereferencing, outlier detection, data interpolation (inpainting), step removal, and filter ringing artifact removal. These techniques provide a less wasteful alternative to discarding corrupted trials or channels, and they are relatively immune to artifacts that disrupt alternative approaches such as filtering. Robust detrending allows slow drifts and common mode signals to be factored out while avoiding the deleterious effects of glitches. Robust rereferencing reduces the impact of artifacts on the reference. Inpainting allows corrupt data to be interpolated from intact parts based on the correlation structure estimated over the intact parts. Outlier detection allows the corrupt parts to be identified. Step removal fixes the high-amplitude flux jump artifacts that are common with some MEG systems. Ringing removal allows the ringing response of the antialiasing filter to glitches (steps, pulses) to be suppressed. The performance of the methods is illustrated and evaluated using synthetic data and data from real EEG and MEG systems. These methods, which are mainly automatic and require little tuning, can greatly improve the quality of the data. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
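
    A minimal sketch of the robust-detrending idea: fit a polynomial trend while iteratively masking outlier samples so that glitches do not bias the fit. This illustrates the principle only, not the paper's exact algorithm or weighting scheme.

    ```python
    import numpy as np

    def robust_detrend(x, order=3, n_iter=3, thresh=3.0):
        """Polynomial detrend with iterative outlier down-weighting (sketch)."""
        t = np.arange(len(x))
        w = np.ones(len(x))
        for _ in range(n_iter):
            coef = np.polyfit(t, x, order, w=w)          # weighted LS trend fit
            resid = x - np.polyval(coef, t)
            sigma = np.std(resid[w > 0]) + 1e-12
            w = (np.abs(resid) < thresh * sigma).astype(float)  # mask glitches
        return x - np.polyval(coef, t), w

    # Toy usage: slow drift plus one large glitch; the glitch barely biases the fit.
    t = np.arange(1000)
    x = 1e-5 * t**2 + np.random.default_rng(0).normal(0, 0.1, 1000)
    x[500] += 50.0
    clean, weights = robust_detrend(x)
    print("samples flagged as outliers:", int((weights == 0).sum()))
    ```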

  9. A markerless system based on smartphones and webcam for the measure of step length, width and duration on treadmill.

    PubMed

    Barone, V; Verdini, F; Burattini, L; Di Nardo, F; Fioretti, S

    2016-03-01

    A markerless, low-cost prototype has been developed for the determination of some spatio-temporal parameters of human gait: step length, step width and cadence have been considered. Only a smartphone and a high-definition webcam have been used. The signals obtained by the accelerometer embedded in the smartphone are used to recognize heel strike events, while the feet positions are calculated through image processing of the webcam stream. Step length and width are computed during gait trials on a treadmill at various speeds (3, 4 and 5 km/h). Six subjects have been tested, for a total of 504 steps. Results were compared with those obtained by a stereo-photogrammetric system (Elite, BTS Engineering). The maximum average errors were 3.7 cm (5.36%) for the right step length and 1.63 cm (15.16%) for the right step width at 5 km/h. The maximum average error for step duration was 0.02 s (1.69%) at 5 km/h for the right steps. The system is characterized by a very high level of automation that allows its use by non-expert users in non-structured environments. A low-cost system able to automatically provide a reliable and repeatable evaluation of some gait events and parameters during treadmill walking is also relevant from a clinical point of view, because it allows hundreds of steps to be analyzed and, consequently, their variability to be assessed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
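
    In its simplest reading, heel-strike detection from the smartphone accelerometer reduces to peak picking on the acceleration trace. A hedged sketch on a synthetic gait-like signal (the sampling rate and thresholds are assumptions, not the prototype's settings):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                              # sampling rate in Hz (assumption)
    t = np.arange(0, 10, 1 / fs)
    acc = np.sin(2 * np.pi * 1.8 * t) ** 7  # synthetic signal, ~1.8 steps per second

    # Require a minimum peak height and a refractory distance between strikes.
    peaks, _ = find_peaks(acc, height=0.5, distance=int(0.4 * fs))
    step_durations = np.diff(peaks) / fs
    print(f"{len(peaks)} heel strikes, mean step duration {step_durations.mean():.2f} s")
    ```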

  10. Permeability and kinetic coefficients for mesoscale BCF surface step dynamics: Discrete two-dimensional deposition-diffusion equation analysis

    DOE PAGES

    Zhao, Renjie; Evans, James W.; Oliveira, Tiago J.

    2016-04-08

    Here, a discrete version of deposition-diffusion equations appropriate for description of step flow on a vicinal surface is analyzed for a two-dimensional grid of adsorption sites representing the stepped surface and explicitly incorporating kinks along the step edges. Model energetics and kinetics appropriately account for binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes as well as limiting values of adatom densities at step edges for nonuniform deposition scenarios allows determination of both permeability and kinetic coefficients. Behavior of these quantities is assessed as a function of key system parameters including kink density, step attachment barriers, and the step edge diffusion rate.
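
    The structure of such an analysis can be seen in one dimension: solve the steady deposition-diffusion equation D u'' + F = 0 on a single terrace with kinetic (attachment-limited) boundary conditions at the bounding steps, and read off the adatom density at the step edges. This is a simplified 1-D stand-in for the paper's 2-D discrete analysis, with placeholder parameters.

    ```python
    import numpy as np

    # Steady state of D u'' + F = 0 on a terrace of width L, with attachment flux
    # k * (u - ueq) at each bounding step (one-sided flux boundary conditions).
    N, L = 200, 1.0
    D, F, k, ueq = 1.0, 0.1, 5.0, 0.0   # placeholder values
    h = L / (N - 1)

    A = np.zeros((N, N)); b = np.full(N, -F / D * h**2)
    for i in range(1, N - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0   # u'' stencil
    # left step: D (u1 - u0)/h = k (u0 - ueq); right step symmetrically
    A[0, 0], A[0, 1] = -(D / h + k), D / h; b[0] = -k * ueq
    A[-1, -1], A[-1, -2] = -(D / h + k), D / h; b[-1] = -k * ueq
    u = np.linalg.solve(A, b)
    # mass balance check: all deposited flux F*L leaves via the two steps,
    # so u at the steps should approach F*L/(2k) = 0.01 here.
    print(f"adatom density: mid-terrace {u[N // 2]:.4f}, at steps {u[0]:.4f}")
    ```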

  11. Permeability and kinetic coefficients for mesoscale BCF surface step dynamics: Discrete two-dimensional deposition-diffusion equation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Renjie; Evans, James W.; Oliveira, Tiago J.

    Here, a discrete version of deposition-diffusion equations appropriate for description of step flow on a vicinal surface is analyzed for a two-dimensional grid of adsorption sites representing the stepped surface and explicitly incorporating kinks along the step edges. Model energetics and kinetics appropriately account for binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes as well as limiting values of adatom densities at step edges for nonuniform deposition scenarios allows determination of both permeability and kinetic coefficients. Behavior of these quantities is assessed as a function of key system parameters including kink density, step attachment barriers, and the step edge diffusion rate.

  12. Diagnosis and assessment of skeletal related disease using calcium 41

    DOEpatents

    Hillegonds, Darren J [Oakland, CA; Vogel, John S [San Jose, CA; Fitzgerald, Robert L [Encinitas, CA; Deftos, Leonard J [Del Mar, CA; Herold, David [Del Mar, CA; Burton, Douglas W [San Diego, CA

    2012-05-15

    A method of determining calcium metabolism in a patient comprises the steps of administering the radioactive calcium isotope ⁴¹Ca to the patient, allowing a period of time to elapse sufficient for dissemination and reaction of the radioactive calcium isotope ⁴¹Ca by the patient, obtaining a sample of the radioactive calcium isotope ⁴¹Ca from the patient, isolating the calcium content of the sample in a form suitable for precise measurement of isotopic calcium concentrations, and measuring the calcium content to determine parameters of calcium metabolism in the patient.

  13. Diagnosis and assessment of skeletal related disease using calcium 41

    DOEpatents

    Hillegonds, Darren J.; Vogel, John S.; Fitzgerald, Robert L.; Deftos, Leonard J.; Herold, David; Burton, Douglas W.

    2013-03-05

    A method of determining calcium metabolism in a patient comprises the steps of administering the radioactive calcium isotope ⁴¹Ca to the patient, allowing a period of time to elapse sufficient for dissemination and reaction of the radioactive calcium isotope ⁴¹Ca by the patient, obtaining a sample of the radioactive calcium isotope ⁴¹Ca from the patient, isolating the calcium content of the sample in a form suitable for precise measurement of isotopic calcium concentrations, and measuring the calcium content to determine parameters of calcium metabolism in the patient.

  14. Control of separation and quantitative analysis by GC-FTIR

    NASA Astrophysics Data System (ADS)

    Semmoud, A.; Huvenne, Jean P.; Legrand, P.

    1992-03-01

    Software for 3-D representations of 'absorbance-wavenumber-retention time' data is used to control the quality of the GC separation. The spectral information given by the FTIR detection allows the user to be sure that a chromatographic peak is 'pure.' The analysis of peppermint essential oil is presented as an example. This assurance is absolutely required for quantitative applications. Under these conditions, we have worked out a quantitative analysis of caffeine. Correlation coefficients between integrated absorbance measurements and the concentration of caffeine are discussed at two steps of the data treatment.
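
    The calibration step amounts to a linear fit of integrated absorbance against concentration (Beer-Lambert behaviour), with the correlation coefficient as the quality figure. A sketch with invented data points:

    ```python
    import numpy as np

    # Invented calibration data, for illustration only (arbitrary units).
    conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # caffeine concentration
    area = np.array([0.052, 0.101, 0.208, 0.395, 0.810])  # integrated absorbance

    slope, intercept = np.polyfit(conc, area, 1)          # linear calibration fit
    r = np.corrcoef(conc, area)[0, 1]                     # correlation coefficient
    print(f"area = {slope:.4f} * conc + {intercept:.4f}, r = {r:.4f}")
    ```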

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Adam J.; Scherrer, Joseph R.; Reiserer, Ronald S., E-mail: ron.reiserer@vanderbilt.edu

    We present a simple apparatus for improved surface modification of polydimethylsiloxane (PDMS) microfluidic devices. A single treatment chamber for plasma activation and chemical/physical vapor deposition steps minimizes the time-dependent degradation of surface activation that is inherent in multi-chamber techniques. Contamination and deposition irregularities are also minimized by conducting plasma activation and treatment phases in the same vacuum environment. An inductively coupled plasma driver allows for interchangeable treatment chambers. Atomic force microscopy confirms that silane deposition on PDMS gives much better surface quality than standard deposition methods, which yield a higher local roughness and pronounced irregularities in the surface.

  16. Fabrication of highly efficient ZnO nanoscintillators

    NASA Astrophysics Data System (ADS)

    Procházková, Lenka; Gbur, Tomáš; Čuba, Václav; Jarý, Vítězslav; Nikl, Martin

    2015-09-01

    Photo-induced synthesis of high-efficiency, ultrafast nanoparticle scintillators of ZnO was demonstrated. Controlled doping with Ga(III) and La(III) ions, together with an optimized method of ZnO synthesis and subsequent two-step annealing in air and under a reducing atmosphere, allows a very high intensity of UV exciton luminescence to be achieved, up to 750% of the BGO intensity. The fabricated nanoparticles feature extremely short, sub-nanosecond photoluminescence decay times. The temperature dependence of the photoluminescence spectrum over the 8-340 K range was investigated and shows the absence of visible defect-related emission over the entire temperature range.

  17. Method of drying articles

    DOEpatents

    Janney, Mark A.; Kiggans, Jr., James O.

    1999-01-01

    A method of drying a green particulate article includes the steps of: a. Providing a green article which includes a particulate material and a pore phase material, the pore phase material including a solvent; and b. contacting the green article with a liquid desiccant for a period of time sufficient to remove at least a portion of the solvent from the green article, the pore phase material acting as a semipermeable barrier to allow the solvent to be sorbed into the liquid desiccant, the pore phase material substantially preventing the liquid desiccant from entering the pores.

  18. Implementing a high-fidelity simulation program in a community college setting.

    PubMed

    Tuoriniemi, Pamela; Schott-Baer, Darlene

    2008-01-01

    Despite their relatively high cost, there is heightened interest by faculty in undergraduate nursing programs to implement high-fidelity simulation (HFS) programs. High-fidelity simulators are appealing because they allow students to experience high-risk, low-volume patient problems in a realistic setting. The decision to purchase a simulator is the first step in the process of implementing and maintaining an HFS lab. Knowledge, technical skill, commitment, and considerable time are needed to develop a successful program. The process, as experienced by one community college nursing program, is described.

  19. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for the calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided, including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and version control are provided.
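
    Why an implicit scheme enlarges the allowable time step can be seen on the stiff test equation y' = -λy: explicit Euler is stable only for Δt < 2/λ, while backward Euler is unconditionally stable. A generic illustration, not the XTRAN3S discretization:

    ```python
    # Stiff test problem y' = -lam * y with a time step that violates the
    # explicit stability limit 2/lam = 0.02.
    lam, dt, n = 100.0, 0.05, 20
    y_exp = y_imp = 1.0
    for _ in range(n):
        y_exp = y_exp * (1.0 - lam * dt)   # explicit Euler: amplifies, blows up
        y_imp = y_imp / (1.0 + lam * dt)   # backward Euler: decays stably
    print(f"explicit: {y_exp:.3e}   implicit: {y_imp:.3e}")
    ```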

  20. Age Nutrition Chirurgie (ANC) study: impact of a geriatric intervention on the screening and management of undernutrition in elderly patients operated on for colon cancer, a stepped wedge controlled trial.

    PubMed

    Dupuis, Marine; Kuczewski, Elisabetta; Villeneuve, Laurent; Bin-Dorel, Sylvie; Haine, Max; Falandry, Claire; Gilbert, Thomas; Passot, Guillaume; Glehen, Olivier; Bonnefoy, Marc

    2017-01-07

    Undernutrition prior to major abdominal surgery is frequent and increases morbidity and mortality, especially in older patients. The management of undernutrition reduces postoperative complications. Nutritional management should be a priority in patient care during the preoperative period; however, undernutrition is rarely detected and the guidelines are infrequently followed. Preoperative undernutrition screening should allow a better implementation of the guidelines. The ANC ("Age Nutrition Chirurgie") study is an interventional, comparative, prospective, multicenter, randomized protocol based on the stepped wedge trial design. For the intervention, the surgeon will inform the patient of the establishment of a systematic preoperative geriatric assessment that will allow the preoperative diagnosis of nutritional status and the implementation of adjusted nutritional support in accordance with the nutritional guidelines. The primary outcome measure is the impact of the geriatric intervention on the level of perioperative nutritional management, in accordance with the current European guidelines. The implementation of the intervention in the five participating centers will be rolled out sequentially over six time periods (every six months). Investigators must recommend that all patients aged 70 years or over who are consulting for colorectal cancer surgery consider participating in this study. The ANC study is based on an original methodology, the stepped wedge trial design, which is appropriate for evaluating the implementation of a geriatric and nutritional assessment during the perioperative period. We describe the purpose of this geriatric intervention, which is expected to apply the ESPEN and SFNEP recommendations through the establishment of an undernutrition screening and management program for patients with cancer. This intervention should allow a decrease in patient morbidity and mortality due to undernutrition. This study is registered in ClinicalTrials.gov as NCT02084524 on March 11, 2014 (retrospectively registered).
