Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M
2017-01-01
To describe our step-by-step technique for robotic intracorporeal neobladder formation. The main surgical steps in forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suture of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. These technical improvements have reduced operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), simulating full scale circulating fluidized bed combustors is very challenging. In order to capture the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, making such simulations difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
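For orientation (a standard back-of-the-envelope estimate, not taken from the paper): an explicit transient simulation of this kind is bound by a CFL-type condition of the form Δt ≤ C · Δx / |u|_max, with C < 1, where Δx is the cell size and |u|_max the largest gas or solids velocity in the cell. Because resolving particle clusters requires Δx on the order of a few particle diameters, the cell count grows roughly as Δx^-3 and the number of time steps as Δx^-1 as the mesh is refined, which is why full scale furnaces are so expensive to simulate and why the filtering effect of a coarser mesh and a longer time step must be quantified.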
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
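For context, the dual splitting (velocity-correction) scheme referred to here is commonly written in three substeps (shown schematically for a BDF time integrator; this is the generic scheme of Karniadakis et al., not the authors' exact DG formulation):

(u* - Σ_i α_i u^(n-i)) / Δt = - Σ_i β_i (u·∇u)^(n-i)            (explicit convective step)
∇²p^(n+1) = ∇·u* / Δt,   u** = u* - Δt ∇p^(n+1)                  (pressure Poisson and projection)
(γ0 u^(n+1) - u**) / Δt = ν ∇²u^(n+1)                            (implicit viscous step)

with, for BDF2, γ0 = 3/2, α = (2, -1/2), β = (2, -1). The instabilities discussed above enter through the discrete DG treatment of the divergence ∇·u* and the gradient ∇p^(n+1) in the middle substep, and the consistent boundary condition mentioned in the abstract concerns the intermediate field u*, which does not satisfy the physical velocity boundary conditions.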
Facilities Planning for Small Colleges.
ERIC Educational Resources Information Center
O'Neill, Joseph P.; And Others
This second publication in a three-part series called "Alternative Futures" is essentially a workbook that, followed step by step, allows a college to see how its use of space has changed over time. Especially designed for small colleges, the kit makes use of the information that is routinely collected, such as annual financial statements and…
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
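For context on why the time step must be small, a bare-bones Brownian-dynamics reaction-diffusion step looks roughly like the Python sketch below (a generic illustration with made-up parameter names; it checks for contacts before displacing particles, but it does not implement the detailed-balance collision handling that is this paper's actual contribution):

import numpy as np

def bd_step(pos, D, radii, p_react, dt, rng):
    # pos: (N, 3) particle positions; D: diffusion coefficient;
    # radii: (N,) reaction radii; p_react: reaction probability per contact.
    reactions = []
    n = len(pos)
    # naive O(N^2) pre-move contact detection
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < radii[i] + radii[j]:
                if rng.random() < p_react:
                    reactions.append((i, j))
    # free diffusion: each coordinate receives a Gaussian kick of variance 2*D*dt
    pos = pos + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=pos.shape)
    return pos, reactions

The displacement scale sqrt(2*D*dt) must stay well below the reaction radii for contacts not to be stepped over, which is exactly the restriction that collision-aware schemes such as the one described here relax.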
Perception of linear acceleration in weightlessness
NASA Technical Reports Server (NTRS)
Arrott, A. P.; Young, L. R.
1987-01-01
Eye movements and subjective detection of acceleration were measured in human experimental subjects during vestibular sled acceleration on the D1 Spacelab Mission. Methods and results are reported for the time to detection of small acceleration steps, the threshold for detection of linear acceleration, perceived motion path, and CLOAT. A consistently shorter time to detection of small acceleration steps is found. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth.
Finite Memory Walk and Its Application to Small-World Network
NASA Astrophysics Data System (ADS)
Oshima, Hiraku; Odagaki, Takashi
2012-07-01
In order to investigate the effects of cycles on the dynamical process on both regular lattices and complex networks, we introduce a finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during m steps just before the current position. This walk interpolates the simple random walk (SRW), which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at a short time step to the SRW at a long time step, and the crossover time is approximately equivalent to the number of steps remembered, and that the MSD can be rescaled in terms of the time step and the size of memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, where "smallest" indicates that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on the Watts-Strogatz networks that can generate small-world networks, and show that the clustering coefficient of the Watts-Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
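A minimal sketch of such a walk on the square lattice (our own illustrative Python; m = 0 reduces to the SRW, and large m approaches SAW behaviour):

import random
from collections import deque

def finite_memory_walk(m, n_steps, seed=0):
    # Walker on the 2D square lattice; it may not move onto any of the
    # m sites visited immediately before the current position.
    rng = random.Random(seed)
    pos = (0, 0)
    recent = deque(maxlen=m)      # memory of the last m visited sites (empty for m = 0)
    path = [pos]
    for _ in range(n_steps):
        neighbours = [(pos[0] + dx, pos[1] + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        allowed = [p for p in neighbours if p not in recent]
        if not allowed:           # the walker can trap itself for large m
            break
        recent.append(pos)
        pos = rng.choice(allowed)
        path.append(pos)
    return path

Averaging the squared displacement of many such paths reproduces the SAW-to-SRW crossover near a time of about m steps described in result (1).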
The Relaxation of Vicinal (001) with ZigZag [110] Steps
NASA Astrophysics Data System (ADS)
Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.
2012-02-01
This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
Fractal analysis of lateral movement in biomembranes.
Gmachowski, Lech
2018-04-01
Lateral movement of a molecule in a biomembrane containing small compartments (0.23-μm diameter) and large ones (0.75 μm) is analyzed using a fractal description of its walk. The early-time dependence of the mean square displacement deviates from linearity due to the contribution of ballistic motion. In small compartments, walking molecules do not have sufficient time or space to develop an asymptotic relation and the diffusion coefficient deduced from the experimental records is lower than that measured without restrictions. The model makes it possible to deduce the molecule step parameters, namely the step length and time, from data concerning confined and unrestricted diffusion coefficients. This is also possible using experimental results for sub-diffusive transport. The transition from normal to anomalous diffusion does not affect the molecule step parameters. The experimental literature data on molecular trajectories recorded at a high time resolution appear to confirm the modeled value of the mean free path length of DOPE for Brownian and anomalous diffusion. Although the step length and time give the proper values of the diffusion coefficient, the DOPE speed calculated as their quotient is several orders of magnitude lower than the thermal speed. This is interpreted as a result of intermolecular interactions, as confirmed by lateral diffusion of other molecules in different membranes. The molecule step parameters are then utilized to analyze the problem of multiple visits in small compartments. The modeling of the diffusion exponent results in a smooth transition to normal diffusion on entering a large compartment, as observed in experiments.
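For orientation (our notation, not the paper's): a two-dimensional random walk with step length λ and step time τ has mean square displacement MSD(t) = (t/τ)·λ² = 4Dt, so the step parameters and the diffusion coefficient are linked by D = λ²/(4τ), and the apparent walker speed is v = λ/τ = 4D/λ. Because membrane diffusion coefficients are small while λ is of molecular size, v comes out far below the thermal speed of the molecule, which is the discrepancy the abstract attributes to intermolecular interactions.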
One Step at a Time: Using Task Analyses to Teach Skills
ERIC Educational Resources Information Center
Snodgrass, Melinda R.; Meadan, Hedda; Ostrosky, Michaelene M.; Cheung, W. Catherine
2017-01-01
Task analyses are useful when teaching children how to complete tasks by breaking the tasks into small steps, particularly when children struggle to learn a skill during typical classroom instruction. We describe how to create a task analysis by identifying the steps a child needs to independently perform the task, how to assess what steps a child…
Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.
Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro
2018-06-13
The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.
Two Independent Contributions to Step Variability during Over-Ground Human Walking
Collins, Steven H.; Kuo, Arthur D.
2013-01-01
Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
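The decomposition described above amounts to a linear regression of step length on walking speed; the sketch below (our own Python with hypothetical array names, not the authors' code) returns the fraction of step-length variance explained by the slow speed fluctuations:

import numpy as np

def variance_explained_by_speed(speed, step_length):
    # speed, step_length: NumPy arrays of per-step values for one subject.
    slope, intercept = np.polyfit(speed, step_length, 1)
    residual = step_length - (slope * speed + intercept)
    return 1.0 - residual.var() / step_length.var()   # R^2 of the linear fit

Applied to step width instead of step length, the same fit should explain almost none of the variance if width is governed by lateral balance rather than by speed.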
Mass imbalances in EPANET water-quality simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Janke, Robert; Taxon, Thomas N.
EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
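A simple way to detect such imbalances is to audit constituent mass over a simulation period. The helper below (our own illustration, not part of the EPANET API) returns the relative mass-balance error from quantities that can be accumulated at network inlets, outlets and storage:

def relative_mass_imbalance(mass_in, mass_out, stored_initial, stored_final):
    # mass_in / mass_out: constituent mass entering and leaving the network
    # over the period; stored_*: mass held in pipes and tanks at the period
    # boundaries. A value near zero indicates mass is conserved.
    expected_out = mass_in + stored_initial - stored_final
    return (expected_out - mass_out) / expected_out

With the time-driven approach the error grows as the water-quality time step is lengthened, whereas the event-driven approach keeps it at round-off even when the water-quality step equals the hydraulic step.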
Diffractive optics fabricated by direct write methods with an electron beam
NASA Technical Reports Server (NTRS)
Kress, Bernard; Zaleta, David; Daschner, Walter; Urquhart, Kris; Stein, Robert; Lee, Sing H.
1993-01-01
State-of-the-art diffractive optics are fabricated using e-beam lithography and dry etching techniques to achieve multilevel phase elements with very high diffraction efficiencies. One of the major challenges encountered in fabricating diffractive optics is the small feature size (e.g. for diffractive lenses with small f-number). It is not only the e-beam system that dictates the feature size limitations, but also the alignment systems (mask aligner) and the materials (e-beam and photo resists). In order to allow diffractive optics to be used in new optoelectronic systems, it is necessary not only to fabricate elements with small feature sizes but also to do so in an economical fashion. Since the price of a multilevel diffractive optical element is closely related to the e-beam writing time and the number of etching steps, we need to decrease the writing time and etching steps without affecting the quality of the element. To do this one has to utilize the full potential of the e-beam writing system. In this paper, we will present three diffractive optics fabrication techniques which will reduce the number of process steps, the writing time, and the overall fabrication time for multilevel phase diffractive optics.
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Optimal variable-grid finite-difference modeling for porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Li, Haishan
2014-12-01
Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2016-01-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2015-07-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e. the observation that large scale rainfall structures are more persistent and predictable than small scale convective cells. This paper presents the development, adaptation and verification of the system STEPS for Belgium (STEPS-BE). STEPS-BE provides in real-time 20 member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Personal computer study of finite-difference methods for the transonic small disturbance equation
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1989-01-01
Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time accurate solution of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
Computing Flow through Well Screens Using an Embedded Well Technique
2015-08-01
...necessary to solve the continuity equation and the momentum equation using small time-steps. With the assumption that the well flow reaches ... well system, much greater time-steps can be used for computation. The 1D steady-state well equation can then be written in terms of the well quantities.
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
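In its simplest form (first-order outer integrator) the outer-inner structure described above reduces to projective forward Euler; a compact sketch, assuming a generic right-hand side f(u) for the stiff kinetic equation:

def projective_forward_euler(f, u, dt_inner, n_inner, dt_outer):
    # Take n_inner + 1 small damping steps with plain forward Euler, estimate
    # the slow time derivative from the last two inner iterates, then
    # extrapolate over the remainder of the large outer step.
    for _ in range(n_inner + 1):
        u_prev = u
        u = u + dt_inner * f(u)
    slope = (u - u_prev) / dt_inner
    return u + (dt_outer - (n_inner + 1) * dt_inner) * slope

Here dt_inner is tied to the BGK relaxation time while dt_outer is chosen from the hyperbolic CFL condition; the higher-order schemes of the paper replace the single extrapolation by the stages of an outer Runge-Kutta method.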
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
The motion of a particle subject to thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency: the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model is considered which computes the collision between a particle and multiple molecules in a single collision event. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. This weight factor allows a large time step interval to be adopted; for a particle size of 1 μm it is about a million times longer than the conventional time step interval of the DSMC method, so the computation time is reduced to about one-millionth. We simulate the motion of a graphite particle subject to thermophoretic force with DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, using the above collision weight factor. The particle is a sphere of size 1 μm, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
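The size of the collision weight factor can be estimated from kinetic theory (a rough illustration, not the paper's exact definition): the expected number of molecule-particle collisions during one time step is N ≈ n·π·(r_p + r_m)²·c̄_r·Δt, where n is the molecular number density, r_p and r_m are the particle and molecule radii, and c̄_r is the mean relative speed. Treating those N collisions as a single weighted event with weight W = N multiplies the momentum transferred per event by W and allows Δt to grow by roughly the same factor, which for a 1 μm particle at the pressures considered here is how a roughly million-fold increase in the usable time step arises.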
A local time stepping algorithm for GPU-accelerated 2D shallow water models
NASA Astrophysics Data System (ADS)
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
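A common way to organize such a scheme is to bin cells into power-of-two time-step classes. The sketch below (our own illustrative Python, not the paper's GPU implementation) assigns each cell a level k so that it is advanced with Δt/2^k, where Δt is the largest stable step in the mesh:

import numpy as np

def lts_levels(cell_dt):
    # cell_dt: per-cell stable explicit time step, e.g. from the CFL condition
    # cell_dt_i = C * dx_i / (|u_i| + sqrt(g * h_i)) for shallow water flow.
    dt_global = cell_dt.max()
    level = np.maximum(np.ceil(np.log2(dt_global / cell_dt)).astype(int), 0)
    dt_local = dt_global / 2.0 ** level      # dt_local <= cell_dt everywhere
    return dt_global, level, dt_local

Cells at level k are advanced 2^k times per global step and all levels are synchronized at the end of the global step, so the few small cells produced by local refinement no longer drag the whole mesh down to their time step.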
Chen, Chi-Kan
2017-07-26
The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs from gene expression time series. We present new GRN reconstruction methods based on neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series come from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN from experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, which uses an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN, which uses the regularized RNN, on short simulated time series. When the networks derived by RE_RMLP-RNN with different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.
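For reference, the RNN gene-network model used in this line of work is commonly written as (generic form, our notation)

dx_i/dt = m_i · σ( Σ_j w_ij x_j + β_i ) - λ_i x_i,

where x_i is the expression level of gene i, σ is the logistic function, w_ij is the regulatory weight of gene j on gene i (positive for activation, negative for repression), β_i is a bias, m_i a maximal rate and λ_i a degradation constant. The edge ranks of step one are derived from the estimated weights w_ij, and the RMLP variant inserts a layer of latent nodes between the regulators and the output nonlinearity to reduce the number of free parameters.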
Biomechanical influences on balance recovery by stepping.
Hsiao, E T; Robinovitch, S N
1999-10-01
Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
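A useful reference point, though not the authors' pendulum-spring model, is the standard capture-point result for a linear inverted pendulum of effective length L: a perturbation with centre-of-mass position x and velocity ẋ can be arrested in a single step only if the foot lands at or beyond x_cp = x + ẋ·√(L/g). This makes the coupling noted above explicit: a stronger perturbation (larger ẋ) demands a longer or quicker step, and limits on step length, step execution time or leg strength all shrink the set of recoverable perturbations.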
An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.; Barnes, D. C.
2011-08-01
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
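Schematically, the Crank-Nicolson discretization of the Vlasov-Ampère system underlying such schemes reads (our summary of the general approach, omitting the orbit averaging and particle sub-stepping details):

ε0 (E^(n+1) - E^n)/Δt + J^(n+1/2) = 0,
x_p^(n+1) = x_p^n + Δt v_p^(n+1/2),    v_p^(n+1) = v_p^n + Δt (q_p/m_p) E^(n+1/2)(x_p^(n+1/2)),

where half-time-level quantities are arithmetic averages of the n and n+1 levels and the current J^(n+1/2) is gathered from the particle velocities. Iterating particles and fields together to a tight nonlinear tolerance with the Jacobian-free Newton-Krylov solver is what yields the charge and energy conservation to round-off described above.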
Exact charge and energy conservation in implicit PIC with mapped computational meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Barnes, D. C.
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
Severns, Paul M.
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
Breed, Greg A; Severns, Paul M
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches.
Computation of Acoustic Waves Through Sliding-Zone Interfaces Using an Euler/Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
1996-01-01
The effect of a patched sliding-zone interface on the transmission of acoustic waves is examined for two- and three-dimensional model problems. A simple but general interpolation scheme at the patched boundary passes acoustic waves without distortion, provided that a sufficiently small time step is taken. A guideline is provided for the maximum permissible time step or zone speed that gives an acceptable error introduced by the sliding-zone interface.
NASA Astrophysics Data System (ADS)
Aronoff, H. I.; Leslie, J. J.; Mittleman, A. N.; Holt, S.
1983-11-01
This manual describes a Shared Time Engineering Program (STEP) conducted by the New England Apparel Manufacturers Association (NEAMA), headquartered in Fall River, Massachusetts, and funded by the Office of Trade Adjustment Assistance of the U.S. Department of Commerce. It is addressed to industry association executives, industrial engineers and others interested in examining an innovative model of industrial engineering assistance to small plants which might be adapted to their particular needs.
Means of determining extrusion temperatures
McDonald, Robert E.; Canonico, Domenic A.
1977-01-01
In an extrusion process comprising the steps of fabricating a metal billet, heating said billet for a predetermined time and at a selected temperature to increase its plasticity and then forcing said heated billet through a small orifice to produce a desired extruded object, the improvement comprising the steps of randomly inserting a plurality of small metallic thermal tabs at different cross sectional depths in said billet as a part of said fabricating step, and examining said extruded object at each thermal tab location for determining the crystal structure at each extruded thermal tab thus revealing the maximum temperature reached during extrusion in each respective tab location section of the extruded object, whereby the thermal profile of said extruded object during extrusion may be determined.
NASA Technical Reports Server (NTRS)
Turpin, Jason B.
2004-01-01
One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, and usually the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs) by either approximating the spatial derivative terms with numerical techniques or using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, this resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Taken together, the fixed time step constraint invoked by the MOC and the occasional need for extremely small time steps to obtain stability and/or accuracy can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check-valve are compared with test data.
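For reference, the MOC converts the water-hammer PDEs into compatibility equations that hold along the characteristic lines dx/dt = ±a (standard form, with H the piezometric head, Q the flow rate, A the pipe area, D the diameter, f the friction factor and a the wave speed):

C+:  dQ/dt + (gA/a) dH/dt + f Q|Q|/(2DA) = 0   along dx/dt = +a
C-:  dQ/dt - (gA/a) dH/dt + f Q|Q|/(2DA) = 0   along dx/dt = -a

On a fixed grid these are integrated with the time step locked to Δt = Δx/a, which is the constraint discussed above and the reason that coupling the MOC to variable step integration of the dynamic components is worthwhile.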
Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap
NASA Astrophysics Data System (ADS)
Muruganandam, P.; Adhikari, S. K.
2009-10-01
Here we develop simple numerical algorithms for both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation describing the properties of Bose-Einstein condensates at ultra low temperatures. In particular, we consider algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method. In a one-space-variable form of the GP equation we consider the one-dimensional, two-dimensional circularly-symmetric, and the three-dimensional spherically-symmetric harmonic-oscillator traps. In the two-space-variable form we consider the GP equation in two-dimensional anisotropic and three-dimensional axially-symmetric traps. The fully-anisotropic three-dimensional GP equation is also considered. Numerical results for the chemical potential and root-mean-square size of stationary states are reported using imaginary-time propagation programs for all the cases and compared with previously obtained results. Also presented are numerical results of non-stationary oscillation for different trap symmetries using real-time propagation programs. A set of convenient working codes developed in Fortran 77 are also provided for all these cases (twelve programs in all). In the case of two or three space variables, Fortran 90/95 versions provide some simplification over the Fortran 77 programs, and these programs are also included (six programs in all). Program summaryProgram title: (i) imagetime1d, (ii) imagetime2d, (iii) imagetime3d, (iv) imagetimecir, (v) imagetimesph, (vi) imagetimeaxial, (vii) realtime1d, (viii) realtime2d, (ix) realtime3d, (x) realtimecir, (xi) realtimesph, (xii) realtimeaxial Catalogue identifier: AEDU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 122 907 No. of bytes in distributed program, including test data, etc.: 609 662 Distribution format: tar.gz Programming language: FORTRAN 77 and Fortran 90/95 Computer: PC Operating system: Linux, Unix RAM: 1 GByte (i, iv, v), 2 GByte (ii, vi, vii, x, xi), 4 GByte (iii, viii, xii), 8 GByte (ix) Classification: 2.9, 4.3, 4.12 Nature of problem: These programs are designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-, two- or three-space dimensions with a harmonic, circularly-symmetric, spherically-symmetric, axially-symmetric or anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Solution method: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation, in either imaginary or real time, over small time steps. The method yields the solution of stationary and/or non-stationary problems. Additional comments: This package consists of 12 programs, see "Program title", above. FORTRAN77 versions are provided for each of the 12 and, in addition, Fortran 90/95 versions are included for ii, iii, vi, viii, ix, xii. For the particular purpose of each program please see the below. Running time: Minutes on a medium PC (i, iv, v, vii, x, xi), a few hours on a medium PC (ii, vi, viii, xii), days on a medium PC (iii, ix). 
Program summary (1)Title of program: imagtime1d.F Title of electronic file: imagtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (2)Title of program: imagtimecir.F Title of electronic file: imagtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (3)Title of program: imagtimesph.F Title of electronic file: imagtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (4)Title of program: realtime1d.F Title of electronic file: realtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. 
Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (5)Title of program: realtimecir.F Title of electronic file: realtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (6)Title of program: realtimesph.F Title of electronic file: realtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (7)Title of programs: imagtimeaxial.F and imagtimeaxial.f90 Title of electronic file: imagtimeaxial.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an axially-symmetric trap. 
The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (8)Title of program: imagtime2d.F and imagtime2d.f90 Title of electronic file: imagtime2d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (9)Title of program: realtimeaxial.F and realtimeaxial.f90 Title of electronic file: realtimeaxial.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an axially-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (10)Title of program: realtime2d.F and realtime2d.f90 Title of electronic file: realtime2d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps.
The method yields the solution of stationary and non-stationary problems. Program summary (11)Title of program: imagtime3d.F and imagtime3d.f90 Title of electronic file: imagtime3d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few days on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (12)Title of program: realtime3d.F and realtime3d.f90 Title of electronic file: realtime3d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 8 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Days on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.
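The split-step Crank-Nicolson scheme that all of these program summaries refer to can be sketched in a few lines. Below is a minimal Python illustration for the one-dimensional equation with a harmonic trap propagated in imaginary time (the real-time codes use the same splitting with a complex time step and no renormalization); the grid, step size, and nonlinearity g are illustrative assumptions, and this is not a transcription of the published Fortran programs.

```python
# Minimal sketch: imaginary-time split-step Crank-Nicolson for the 1D GP equation.
import numpy as np
from scipy.linalg import solve_banded

nx, dx, dtau, g = 1001, 0.02, 1e-4, 10.0       # grid, imaginary-time step, nonlinearity (illustrative)
x = (np.arange(nx) - nx // 2) * dx
V = 0.5 * x**2                                 # harmonic trap
psi = np.exp(-x**2)                            # initial guess for the ground state
psi /= np.sqrt(np.sum(psi**2) * dx)

# Crank-Nicolson system (I + dtau/2 K) psi_new = (I - dtau/2 K) psi_old,
# with K the centered-difference form of -(1/2) d^2/dx^2
r = dtau / (2.0 * dx**2)
ab = np.zeros((3, nx))                         # banded storage for solve_banded
ab[0, 1:] = -0.5 * r                           # superdiagonal
ab[1, :] = 1.0 + r                             # diagonal
ab[2, :-1] = -0.5 * r                          # subdiagonal
ab[1, 0] = ab[1, -1] = 1.0                     # Dirichlet rows: psi = 0 at the box edges
ab[0, 1] = ab[2, -2] = 0.0

for step in range(20000):                      # iterate toward the stationary state
    psi *= np.exp(-0.5 * dtau * (V + g * psi**2))              # half-step: trap + nonlinearity
    rhs = psi + 0.5 * r * (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1))
    rhs[0] = rhs[-1] = 0.0
    psi = solve_banded((1, 1), ab, rhs)                         # kinetic step: tridiagonal solve
    psi *= np.exp(-0.5 * dtau * (V + g * psi**2))               # second half-step
    psi /= np.sqrt(np.sum(psi**2) * dx)                         # renormalize (imaginary time damps the norm)
```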
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient, on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color), or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
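As a small illustration of the voxel-wise differencing idea only, the sketch below computes signed change maps between co-registered lesion volumes; the array inputs, threshold, and label convention are assumptions for illustration, and the rendering stages (ray casting, glyphs) are not reproduced.

```python
# Sketch of voxel-wise change between two co-registered lesion volumes (NumPy arrays).
import numpy as np

def change_map(vol_t0, vol_t1, eps=0.1):
    """Signed difference volume plus a label volume: +1 growth, -1 shrinkage, 0 unchanged."""
    diff = vol_t1.astype(np.float32) - vol_t0.astype(np.float32)
    labels = np.zeros_like(diff, dtype=np.int8)
    labels[diff > eps] = 1        # lesion growth between the two time steps
    labels[diff < -eps] = -1      # lesion shrinkage
    return diff, labels

def cumulative_change(volumes):
    """Running sum of differences over a time series, e.g. for an animation of cumulative change."""
    acc = np.zeros_like(volumes[0], dtype=np.float32)
    maps = []
    for t in range(1, len(volumes)):
        acc += volumes[t].astype(np.float32) - volumes[t - 1].astype(np.float32)
        maps.append(acc.copy())
    return maps
```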
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
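To make the adaptive step-size idea concrete, here is a toy single-channel LMS equalizer with a scheduled step size: large steps early for fast convergence, small steps later for low steady-state error. The experimental system uses a 6x6 MIMO MMSE time- or frequency-domain equalizer, so this scalar sketch, its step-size lookup table, and the channel are illustrative assumptions only.

```python
# Toy LMS equalizer with an adaptive (scheduled) step size.
import numpy as np

rng = np.random.default_rng(0)
n_sym, n_taps = 50_000, 11
tx = rng.choice([-1.0, 1.0], size=n_sym)                  # BPSK training symbols
channel = np.array([0.2, 0.9, 0.3])                       # toy dispersive channel
rx = np.convolve(tx, channel, mode="same") + 0.05 * rng.standard_normal(n_sym)

# step-size "lookup table": (start symbol, step size)
mu_table = [(0, 5e-3), (5_000, 1e-3), (20_000, 2e-4)]

w = np.zeros(n_taps)
sq_err = np.zeros(n_sym)                                  # squared-error trace for monitoring convergence
for n in range(n_taps, n_sym):
    mu = next(v for s, v in reversed(mu_table) if n >= s) # pick the step size for this symbol
    x = rx[n - n_taps:n][::-1]                            # equalizer input vector
    y = w @ x
    e = tx[n - n_taps // 2] - y                           # error against the known (delayed) training symbol
    w += mu * e * x                                       # LMS update
    sq_err[n] = e * e
```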
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
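A numerical sketch of the mechanism, in the spirit of the quantized Hayes equation x'(t) = a x(t) + b Q(x(t - tau)), is given below. The parameter values, the rounding quantizer, and the explicit Euler discretization are placeholder assumptions; whether the bounded micro-oscillation appears depends on (a, b, tau) lying in the regime the authors identify (unstable node without feedback, unstable focus with it).

```python
# Explicit-Euler sketch of an unstable scalar system with quantized delayed feedback.
import numpy as np

a, b, tau = 0.1, -2.0, 1.0      # illustrative: unstable without feedback, oscillatory with it
q = 0.05                        # quantization step of the fed-back signal
dt = 1e-3
n_delay = int(round(tau / dt))
n_steps = 100 * n_delay

x = np.zeros(n_steps)
x[:n_delay] = 0.3               # constant initial history

def quantize(v, step):
    return step * np.round(v / step)

for n in range(n_delay, n_steps - 1):
    feedback = b * quantize(x[n - n_delay], q)      # feedback acts on the quantized delayed state
    x[n + 1] = x[n] + dt * (a * x[n] + feedback)    # explicit Euler step of the delay equation

# In the stabilizable regime, the trajectory settles into small bounded oscillations
# whose amplitude scales with the quantization step q.
print("late-time oscillation amplitude ~", np.ptp(x[-5 * n_delay:]))
```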
Angular measurements of the dynein ring reveal a stepping mechanism dependent on a flexible stalk
Lippert, Lisa G.; Dadosh, Tali; Hadden, Jodi A.; Karnawat, Vishakha; Diroll, Benjamin T.; Murray, Christopher B.; Holzbaur, Erika L. F.; Schulten, Klaus; Reck-Peterson, Samara L.; Goldman, Yale E.
2017-01-01
The force-generating mechanism of dynein differs from the force-generating mechanisms of other cytoskeletal motors. To examine the structural dynamics of dynein’s stepping mechanism in real time, we used polarized total internal reflection fluorescence microscopy with nanometer accuracy localization to track the orientation and position of single motors. By measuring the polarized emission of individual quantum nanorods coupled to the dynein ring, we determined the angular position of the ring and found that it rotates relative to the microtubule (MT) while walking. Surprisingly, the observed rotations were small, averaging only 8.3°, and were only weakly correlated with steps. Measurements at two independent labeling positions on opposite sides of the ring showed similar small rotations. Our results are inconsistent with a classic power-stroke mechanism, and instead support a flexible stalk model in which interhead strain rotates the rings through bending and hinging of the stalk. Mechanical compliances of the stalk and hinge determined based on a 3.3-μs molecular dynamics simulation account for the degree of ring rotation observed experimentally. Together, these observations demonstrate that the stepping mechanism of dynein is fundamentally different from the stepping mechanisms of other well-studied MT motors, because it is characterized by constant small-scale fluctuations of a large but flexible structure fully consistent with the variable stepping pattern observed as dynein moves along the MT. PMID:28533393
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Charles
University Park, Maryland (“UP”) is a small town of 2,540 residents, 919 homes, 2 churches, 1 school, 1 town hall, and 1 breakthrough community energy efficiency initiative: the Small Town Energy Program (“STEP”). STEP was developed with a mission to “create a model community energy transformation program that serves as a roadmap for other small towns across the U.S.” STEP first launched in January 2011 in UP and expanded in July 2012 to the neighboring communities of Hyattsville, Riverdale Park, and College Heights Estates, MD. STEP, which concluded in July 2013, was generously supported by a grant from the U.S. Department of Energy (DOE). The STEP model was designed for replication in other resource-constrained small towns similar to University Park - a sector largely neglected to date in federal and state energy efficiency programs. STEP provided a full suite of activities for replication, including: energy audits and retrofits for residential buildings, financial incentives, a community-based social marketing backbone and local community delivery partners. STEP also included the highly innovative use of an “Energy Coach” who worked one-on-one with clients throughout the program. Please see www.smalltownenergy.org for more information. In less than three years, STEP achieved the following results in University Park: • 30% of community households participated voluntarily in STEP; • 25% of homes received a Home Performance with ENERGY STAR assessment; • 16% of households made energy efficiency improvements to their home; • 64% of households proceeded with an upgrade after their assessment; • 9 Full Time Equivalent jobs were created or retained, and 39 contractors worked on STEP over the course of the project. Estimated energy savings (program totals): 204,407 kWh of electricity; 24,800 therms of natural gas; 2,581 gallons of oil; 5,474 MMBTU total estimated savings (source energy); $61,343 in total estimated annual energy cost savings. STEP clients who had a home energy upgrade invested on average $4,500, resulting in a 13% reduction in annual energy use and utility bill savings of $325. Rebates and incentives covered 40%-50% of retrofit cost, resulting in an average simple payback of about 7 years. STEP has created a handbook in which are assembled all the key elements that went into the design and delivery of STEP. The target audiences for the handbook include interested citizens, elected officials and municipal staff who want to establish and run their own efficiency program within a small community or neighborhood, using elements, materials and lessons from STEP.
Real-Time, Single-Step Bioassay Using Nanoplasmonic Resonator With Ultra-High Sensitivity
NASA Technical Reports Server (NTRS)
Zhang, Xiang (Inventor); Chen, Fanqing Frank (Inventor); Su, Kai-Hang (Inventor); Wei, Qi-Huo (Inventor); Ellman, Jonathan A. (Inventor); Sun, Cheng (Inventor)
2014-01-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity
Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng
2014-04-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1994-01-01
The steady state solution of the system of equations consisting of the full Navier-Stokes equations and two turbulence equations has been obtained using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bound local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values. Rapid and uniform convergence rates for the flow and turbulence equations are observed.
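The "stability-bound local time step" is simply a per-cell step chosen at the local CFL limit, which accelerates convergence to steady state at the cost of time accuracy. The sketch below illustrates that ingredient together with a generic multistage update; the CFL number, stage coefficients, and residual operator are placeholders, not the paper's scheme.

```python
# Sketch: per-cell stability-bound time step and a multistage Runge-Kutta driver.
import numpy as np

def local_time_step(h, u, c, cfl=1.5):
    """Largest stable local step per cell under a convective CFL limit."""
    return cfl * h / (np.abs(u) + c)

def rk_multistage(w, residual, dt_local, alphas=(0.6, 0.6, 1.0)):
    """Steady-state driver: each cell advances with its own dt (no time accuracy)."""
    w0 = w.copy()
    for a in alphas:
        w = w0 - a * dt_local * residual(w)
    return w
```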
Imaging workflow and calibration for CT-guided time-domain fluorescence tomography
Tichauer, Kenneth M.; Holt, Robert W.; El-Ghussein, Fadi; Zhu, Qun; Dehghani, Hamid; Leblond, Frederic; Pogue, Brian W.
2011-01-01
In this study, several key optimization steps are outlined for a non-contact, time-correlated single photon counting small animal optical tomography system, using simultaneous collection of both fluorescence and transmittance data. The system is presented for time-domain image reconstruction in vivo, illustrating the sensitivity from single photon counting and the calibration steps needed to accurately process the data. In particular, laser time- and amplitude-referencing, detector and filter calibrations, and collection of a suitable instrument response function are all presented in the context of time-domain fluorescence tomography and a fully automated workflow is described. Preliminary phantom time-domain reconstructed images demonstrate the fidelity of the workflow for fluorescence tomography based on signal from multiple time gates. PMID:22076264
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
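The two first-order updates compared in the paper are easy to state for a gating variable obeying dm/dt = (m_inf(V) - m)/tau(V). The sketch below shows both; m_inf, tau, and the step sizes are generic placeholders rather than the specific channel models or dynamic clamp code used in the study.

```python
# Forward Euler versus exponential Euler for a single gating variable.
import numpy as np

def euler_step(m, m_inf, tau, dt):
    """Forward Euler update of dm/dt = (m_inf - m)/tau."""
    return m + dt * (m_inf - m) / tau

def exp_euler_step(m, m_inf, tau, dt):
    """Exponential Euler update: exact for the linear gating ODE if V is frozen over dt."""
    return m_inf + (m - m_inf) * np.exp(-dt / tau)

m_inf, tau = 0.8, 2.0                        # placeholder steady state and time constant (ms)
for dt in (0.01, 0.1, 1.0, 5.0):             # the updates agree for dt << tau and diverge for large dt
    print(dt, euler_step(0.0, m_inf, tau, dt), exp_euler_step(0.0, m_inf, tau, dt))
```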
NASA Astrophysics Data System (ADS)
Yu, Hung Wei; Anandan, Deepak; Hsu, Ching Yi; Hung, Yu Chih; Su, Chun Jung; Wu, Chien Ting; Kakkerla, Ramesh Kumar; Ha, Minh Thien Huu; Huynh, Sa Hoang; Tu, Yung Yi; Chang, Edward Yi
2018-02-01
High-density (~80 μm^-2) vertical InAs nanowires (NWs) with small diameters (~28 nm) were grown on bare Si (111) substrates by means of two-step metal organic chemical vapor deposition. There are two critical factors in the growth process: (1) a critical nucleation temperature for a specific In molar fraction (approximately 1.69 × 10^-5 atm) is the key factor to reduce the size of the nuclei and hence the diameter of the InAs NWs, and (2) a critical V/III ratio during the 2nd step growth will greatly increase the density of the InAs NWs (from 45 μm^-2 to 80 μm^-2) and at the same time keep the diameter small. The high-resolution transmission electron microscopy and selected area diffraction patterns of InAs NWs grown on Si exhibit a wurtzite structure and no stacking faults. The observed longitudinal optic peaks in the Raman spectra were explained in terms of the small surface charge region width due to the small NW diameter and the increase of the free electron concentration, which was consistent with the TCAD program simulation of small diameter (< 40 nm) InAs NWs.
Tests of Lead-bronze Bearings in the DVL Bearing-testing Machine
NASA Technical Reports Server (NTRS)
Fischer, G
1940-01-01
The lead-bronze bearings tested in the DVL machine have proven themselves very sensitive to load changes in comparison with bearings of light metal. In order to prevent surface injuries and consequently running interruptions, the increase of the load has to be made in small steps with sufficient run-in time between steps. The absence of lead in the running surface, impurities in the alloy (especially iron) and surface irregularities (pores) decrease the load-carrying capacity of the bearing to two or three times that of the static load.
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Because small satellite sensors offer only low precision, high-performance estimation theory remains a central research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of the uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors for small satellite attitude estimation.
Modeling Epidemics with Dynamic Small-World Networks
NASA Astrophysics Data System (ADS)
Kaski, Kimmo; Saramäki, Jari
2005-06-01
In this presentation a minimal model for describing the spreading of an infectious disease, such as influenza, is discussed. Here it is assumed that spreading takes place on a dynamic small-world network comprising short- and long-range infection events. Approximate equations for the epidemic threshold as well as the spreading dynamics are derived and they agree well with numerical discrete time-step simulations. Also the dependence of the epidemic saturation time on the initial conditions is analysed and a comparison with real-world data is made.
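The discrete time-step simulations mentioned above can be sketched with a ring of sites whose infectious neighbours are contacted at every step, plus occasional long-range contacts. The population size, probabilities, and SIR-style recovery below are illustrative assumptions, not the parameters of the presented model.

```python
# Discrete time-step spreading on a dynamic small-world substrate (short- and long-range infections).
import numpy as np

rng = np.random.default_rng(1)
N, steps = 10_000, 200
p_short, p_long, p_recover = 0.3, 0.01, 0.1
S, I, R = 0, 1, 2
state = np.full(N, S)
state[rng.integers(N)] = I                                 # single initial infective

for t in range(steps):
    infectious = np.flatnonzero(state == I)
    for i in infectious:
        for j in ((i - 1) % N, (i + 1) % N):               # short-range (ring) contacts
            if state[j] == S and rng.random() < p_short:
                state[j] = I
        if rng.random() < p_long:                          # occasional long-range contact
            j = rng.integers(N)
            if state[j] == S:
                state[j] = I
    state[infectious[rng.random(infectious.size) < p_recover]] = R
    if not np.any(state == I):
        break
```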
"Small Steps, Big Rewards": Preventing Type 2 Diabetes
Feature: Diabetes. "Small Steps, Big Rewards": Preventing Type 2 Diabetes ... These are the plain facts in "Small Steps. Big Rewards: Prevent Type 2 Diabetes," an education campaign ...
Re-Organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2017-01-01
The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.
Re-organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data
NASA Astrophysics Data System (ADS)
Lynnes, C.
2017-12-01
The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.
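The "time-chunking" option can be illustrated with a short aggregation script: many single-time-step HDF5 granules are copied once into a single file whose dataset is chunked along the time axis, so a long time series at one grid cell touches only a few chunks. The file names, the variable name "temperature", and the chunk shape below are illustrative assumptions, not an EOSDIS workflow.

```python
# Sketch: aggregate per-time-step HDF5 granules into one time-chunked dataset.
import glob
import h5py
import numpy as np

granules = sorted(glob.glob("granules/*.h5"))            # one time step per file (assumed layout)
with h5py.File(granules[0], "r") as f:
    ny, nx = f["temperature"].shape

with h5py.File("temperature_timechunked.h5", "w") as out:
    dset = out.create_dataset(
        "temperature", shape=(len(granules), ny, nx), dtype="f4",
        chunks=(min(365, len(granules)), min(16, ny), min(16, nx)),   # long in time, small in space
        compression="gzip")
    for t, path in enumerate(granules):                  # pay the per-file open cost once, up front
        with h5py.File(path, "r") as src:
            dset[t] = src["temperature"][...]

# Reading a full time series at one grid cell now touches few chunks.
with h5py.File("temperature_timechunked.h5", "r") as f:
    series = f["temperature"][:, 100, 200]
```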
NASA Astrophysics Data System (ADS)
Sangwal, K.; Torrent-Burgues, J.; Sanz, F.; Gorostiza, P.
1997-02-01
The experimental results of the formation of step bunches and macrosteps on the {100} face of L-arginine phosphate monohydrate crystals grown from aqueous solutions at different supersaturations studied by using atomic force microscopy are described and discussed. It was observed that (1) the step height does not remain constant with increasing time but fluctuates within a particular range of heights, which depends on the region of step bunches, that (2) the maximum height and the slope of bunched steps increase with growth time as well as with the supersaturation used for growth, and that (3) the slope of steps of relatively small heights is usually low with a value of about 8° and does not depend on the region of formation of step bunches, but the slope of steps of large heights is up to 21°. Analysis of the experimental results showed that (1) at a particular value of supersaturation the ratio of the average step height to the average step spacing is a constant, suggesting that growth of the {100} face of L-arginine phosphate monohydrate crystals occurs by direct integration of growth entities to growth steps, and that (2) the formation of step bunches and macrosteps follows the dynamic theory of faceting, advanced by Vlachos et al.
NASA Astrophysics Data System (ADS)
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1991-01-01
The system of equations consisting of the full Navier-Stokes equations and two turbulence equations was solved for in the steady state using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time stepping scheme with a stability-bound local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low Reynolds number modifications to the original two equation model are incorporated in a manner which results in well behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved for, initializing all quantities with uniform freestream values, and resulting in rapid and uniform convergence rates for the flow and turbulence equations.
"Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes
"Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes ... onset. Those are the basic facts of "Small Steps. Big Rewards: Prevent type 2 Diabetes," created by ...
"It's Small Steps, but That Leads to Bigger Changes": Evaluation of a Nurture Group Intervention
ERIC Educational Resources Information Center
Vincent, Kerry
2017-01-01
This article presents the results of a small-scale research project that aimed to evaluate the effectiveness of a part-time nurture group recently established in one primary school. Qualitative interviews were used to gather staff, pupil and parental perceptions about the nurture group. These focused not only on what difference the nurture group…
Kang, Hyunook; Yun, Hoyeol; Lee, Sang Wook; Yeo, Woon-Seok
2017-06-01
We report a method of small molecule analysis using a converted graphene-like monolayer (CGM) plate and laser desorption/ionization time-of-flight mass spectrometry (LDI-TOF MS) without organic matrices. The CGM plate was prepared from self-assembled monolayers of biphenyl-4-thiol on gold using electron beam irradiation followed by an annealing step. The above plate was utilized for the LDI-TOF MS analyses of various small molecules and their mixtures, e.g., amino acids, sugars, fatty acids, oligoethylene glycols, and flavonoids. The CGM plate afforded high signal-to-noise ratios, good limits of detection (1 pmol to 10 fmol), and reusability for up to 30 cycles. As a practical application, the enzymatic activity of the cytochrome P450 2A6 (CYP2A6) enzyme in human liver microsomes was assessed in the 7-hydroxylation of coumarin using the CGM plate without other purification steps. We believe that the prepared CGM plate can be practically used with the advantages of simplicity, sensitivity, and reusability for the matrix-free analysis of small biomolecules. Copyright © 2017 Elsevier B.V. All rights reserved.
Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models
Miller, Craig R.; Joyce, Paul; Wichman, Holly A.
2011-01-01
Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (10^4 and 10^6), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559
Small Business Lending Enhancement Act of 2012
Sen. Udall, Mark [D-CO
2012-03-22
Senate - 03/26/2012 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 340. Bill status: Introduced.
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration, expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response. Therein, velocities are modified to avoid penetrations. Although decomposition contact response requires solving a large system of linear equations (which is critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
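The restraint-projection ingredient can be pictured as zeroing the force and velocity components along restrained directions inside an explicit kick-drift step. The sketch below illustrates only that projection concept for homogeneous restraints; it is not the paper's asynchronous variational integrator, its RATTLE equivalence, or its contact algorithm, and the function names and data layout are assumptions.

```python
# Schematic projection of restrained degrees of freedom inside an explicit kick-drift step.
import numpy as np

def project_restraints(vec, restrained_dofs):
    out = vec.copy()
    out[restrained_dofs] = 0.0            # homogeneous restraint: no force/velocity along these DOFs
    return out

def explicit_step(x, v, mass, force, dt, restrained_dofs):
    f = project_restraints(force(x), restrained_dofs)
    v = v + dt * f / mass                 # kick
    v = project_restraints(v, restrained_dofs)
    x = x + dt * v                        # drift
    return x, v
```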
Small Company Capital Formation Act of 2011
Rep. Schweikert, David [R-AZ-5
2011-03-14
Senate - 11/07/2011 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 222. Bill status: Passed House.
America's Small Business Tax Relief Act of 2014
Rep. Tiberi, Patrick J. [R-OH-12
2014-04-10
Senate - 06/17/2014 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 434. Bill status: Passed House.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and upon it, how to solve the heavy time consumption issue in simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed and adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
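One generic way to organize asynchronous adaptive stepping is an event queue: each cell carries its own next-update time and step size, and the scheduler always advances the earliest event. The sketch below assumes a hypothetical cell interface (.state and .step(dt) returning a new state plus an error estimate) and generic accept/reject rules; it is an illustration of the scheduling idea, not the cited cellular automata system.

```python
# Event-queue sketch of asynchronous adaptive time stepping for cell-level models.
import heapq

def run_async(cells, t_end, dt0=1e-3, tol=1e-4):
    # cells: dict cell_id -> object with .state and .step(dt) -> (new_state, error_estimate)
    queue = [(dt0, cid, dt0) for cid in cells]            # (next update time, cell id, current dt)
    heapq.heapify(queue)
    while queue:
        t, cid, dt = heapq.heappop(queue)
        if t > t_end:
            break
        cell = cells[cid]
        new_state, err = cell.step(dt)
        if err > tol:                                     # reject: retry the same interval with dt/2
            heapq.heappush(queue, (t - dt + dt / 2, cid, dt / 2))
            continue
        cell.state = new_state
        if err < tol / 10:                                # quiescent cell: grow its step
            dt = min(2 * dt, 1.0)
        heapq.heappush(queue, (t + dt, cid, dt))
```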
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
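To make the solver comparison concrete, the sketch below applies one implicit trapezoidal step to a stiff toy ODE two ways: plain fixed-point iteration and Newton's method with a finite-difference Jacobian. The ODE, tolerances, and step size are illustrative assumptions; this is not the dislocation dynamics code, and the paper's accelerated fixed point is not reproduced here.

```python
# One implicit trapezoidal step solved by fixed-point iteration and by Newton's method.
import numpy as np

def trapezoid_residual(y_new, y_old, f, dt):
    return y_new - y_old - 0.5 * dt * (f(y_old) + f(y_new))

def step_fixed_point(y_old, f, dt, tol=1e-10, max_it=10):
    y = y_old.copy()
    for _ in range(max_it):
        y_next = y_old + 0.5 * dt * (f(y_old) + f(y))
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y                                      # may not have converged for stiff problems

def step_newton(y_old, f, dt, tol=1e-10, max_it=20, eps=1e-8):
    y = y_old.copy()
    n = y.size
    for _ in range(max_it):
        r = trapezoid_residual(y, y_old, f, dt)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((n, n))                      # finite-difference Jacobian of the residual
        for j in range(n):
            yp = y.copy(); yp[j] += eps
            J[:, j] = (trapezoid_residual(yp, y_old, f, dt) - r) / eps
        y = y + np.linalg.solve(J, -r)
    return y

f = lambda y: np.array([-1000.0 * (y[0] - np.cos(y[0]))])   # stiff toy problem
y0 = np.array([1.0])
print("Newton:", step_newton(y0, f, dt=0.01))
print("fixed point (diverges at this dt):", step_fixed_point(y0, f, dt=0.01))
```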
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
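The mechanics of the sandwich estimator and of a simple small-sample inflation can be illustrated with an identity-link marginal mean model and an independence working correlation, where the point estimates reduce to least squares. The degrees-of-freedom factor m/(m - p) below is only one crude correction chosen for illustration; the corrections evaluated in the paper are different and more refined, and the data layout is assumed.

```python
# Cluster-robust ("sandwich") variance with a crude small-sample correction, identity link.
import numpy as np

def cluster_sandwich(X, y, cluster, small_sample=True):
    """Least-squares point estimates with a cluster-robust covariance estimate."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    p = X.shape[1]
    meat = np.zeros((p, p))
    clusters = np.unique(cluster)
    for c in clusters:                         # sum of per-cluster score outer products
        Xc, rc = X[cluster == c], resid[cluster == c]
        s = Xc.T @ rc
        meat += np.outer(s, s)
    cov = bread @ meat @ bread
    if small_sample:                           # crude m/(m - p) inflation for few clusters
        m = clusters.size
        cov *= m / (m - p)
    return beta, cov
```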
Red Tape Reduction and Small Business Job Creation Act
Rep. Griffin, Tim [R-AR-2
2012-02-17
Senate - 07/31/2012 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 477. Bill status: Passed House.
Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huen, T.
1987-07-01
A solid state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross timing. The latter is achieved with exposure modulation marking onto the time tick marks. The purpose of using two time scales will be discussed. The design is based on a microcomputer, resulting in a compact and easy to use instrument. The light source is a small red light emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided down 10 MHz system frequency. The light is guided by two small 100 micron diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere onto the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our Laboratory. The microcomputer control section is also being used in providing optical fids to mechanical rotor cameras.
High mobility high efficiency organic films based on pure organic materials
Salzman, Rhonda F [Ann Arbor, MI; Forrest, Stephen R [Ann Arbor, MI
2009-01-27
A method of purifying small molecule organic material, performed as a series of operations beginning with a first sample of the organic small molecule material. The first step is to purify the organic small molecule material by thermal gradient sublimation. The second step is to test the purity of at least one sample from the purified organic small molecule material by spectroscopy. The third step is to repeat the first through third steps on the purified small molecule material if the spectroscopic testing reveals any peaks exceeding a threshold percentage of a magnitude of a characteristic peak of a target organic small molecule. The steps are performed at least twice. The threshold percentage is at most 10%. Preferably the threshold percentage is 5% and more preferably 2%. The threshold percentage may be selected based on the spectra of past samples that achieved target performance characteristics in finished devices.
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influences of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme with variable grid-size and time-step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
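The second contribution, removing the modes that violate the explicit stability limit for a chosen time step, can be pictured with a small generalized eigenproblem. The sketch below is a schematic numpy illustration of that mode-removal idea under the usual central-difference limit dt <= 2/omega_max; the matrices are small dense placeholders, and this is not the thesis's TDFEM formulation.

```python
# Schematic "mode deflation": remove eigenmodes that make an explicit step unstable at a chosen dt.
import numpy as np
from scipy.linalg import eigh

def deflate_unstable_modes(S, M, dt):
    """Return a modified stiffness matrix stable for central differencing with step dt."""
    w2, V = eigh(S, M)                     # S v = w^2 M v, eigenvectors M-orthonormal
    w = np.sqrt(np.clip(w2, 0.0, None))
    unstable = w > 2.0 / dt                # central-difference stability limit: dt <= 2 / w_max
    S_mod = S.copy()
    for k in np.flatnonzero(unstable):
        vk = V[:, k]                       # subtract this mode's stiffness contribution
        S_mod -= w2[k] * (M @ np.outer(vk, vk) @ M)
    return S_mod
```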
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perry, William L; Gunderson, Jake A; Dickson, Peter M
There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied and contemporary kinetic and transport models accurately predict time and location of ignition for simple geometries. However, there has been relatively little attention given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 60's and 70's. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook off ignition. In the 70s, Catalano, et al., noted that single step kinetics would not accurately predict time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 80s, Tarver and McGuire published their well-known three step kinetic expression that included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition time prediction for the ODTX. However, the Tarver/McGuire model could not produce the internal temperature profiles observed in the small-scale radial experiments nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior and better models were needed. Brill, et al. noted that the enthalpy change due to the beta-delta crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model. Henson, et al., deduced the kinetics and thermodynamics of the phase transition, providing Dickson, et al. with the information necessary to develop a four-step model that included a two-step nucleation and growth mechanism for the {beta}-{delta} phase transition. Initially, an irreversible scheme was proposed. That model accurately predicted the spatial and temporal cook off behavior of the small-scale radial experiment under slow heating conditions, but did not accurately capture the endothermic phase transition at a faster heating rate. The current version of the four-step model includes reversibility and accurately describes the small-scale radial experiment over a wide range of heating rates. We have observed impact-induced friction ignition of PBX 9501 with grit embedded between the explosive and the lower anvil surface. Observation was done using an infrared camera looking through the sapphire bottom anvil. Time to ignition and temperature-time behavior were recorded. The time to ignition was approximately 500 microseconds and the temperature was approximately 1000 K. The four step reversible kinetic scheme was previously validated for slow cook off scenarios. Our intention was to test the validity for significantly faster hot-spot processes, such as the impact-induced grit friction process studied here. We found the model predicted the ignition time within experimental error. There are caveats to consider when evaluating the agreement. The primary input to the model was friction work over an area computed by a stress analysis.
The work rate itself and the relative velocity of the grit and substrate both have a strong dependence on the initial position of the grit. Any errors in the analysis or the initial grit position would affect the model results. At this time, we do not know the sensitivity to these issues. However, the good agreement does suggest the four step kinetic scheme may have universal applicability for HMX systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Endo, Satoshi; Wong, May
Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic substepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic substeps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic substeps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations
Xiao, Heng; Endo, Satoshi; Wong, May; ...
2015-10-29
Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
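The key change is the prognostic variable itself, θm = θ(1 + 1.61 qv). As a minimal illustration of that relation only, the helper below converts temperature and pressure to θ and then to θm; the constants (p0, Rd/cp) are the usual textbook values and the factor 1.61 is the rounded Rv/Rd quoted in the abstracts. This is not WRF code.

```python
# Moist potential temperature theta_m = theta * (1 + 1.61 * qv), as quoted in the text.
def moist_potential_temperature(T, p, qv, p0=1.0e5, rd_over_cp=0.2854):
    """theta_m from temperature T [K], pressure p [Pa], and vapor mixing ratio qv [kg/kg]."""
    theta = T * (p0 / p) ** rd_over_cp
    return theta * (1.0 + 1.61 * qv)
```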
A novel method to accurately locate and count large numbers of steps by photobleaching
Tsekouras, Konstantinos; Custer, Thomas C.; Jashnsaz, Hossein; Walter, Nils G.; Pressé, Steve
2016-01-01
Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20–30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. PMID:27654946
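To make the step-detection task concrete, here is a naive baseline that thresholds the difference of two adjacent moving-average windows on an intensity trace. It is emphatically not the Bayesian method introduced in the paper (it ignores the changing noise statistics, blinking, and overlapping events, and will fail in exactly the low signal-to-noise regimes the paper targets); the window and threshold are assumptions.

```python
# Naive moving-window detector for downward photobleaching steps in an intensity trace.
import numpy as np

def detect_steps(trace, window=50, min_drop=0.5):
    n = len(trace)
    steps = []
    k = window
    while k < n - window:
        before = trace[k - window:k].mean()
        after = trace[k:k + window].mean()
        if before - after > min_drop:        # downward jump larger than min_drop
            steps.append(k)
            k += window                      # skip past this event
        else:
            k += 1
    return steps
```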
NASA Astrophysics Data System (ADS)
Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf
2016-04-01
A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited by small time step sizes (e.g. 5-10 days) whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
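The abstract reports that a plain (exact) Newton-Raphson iteration needed a line search to converge robustly. The snippet below is a generic damped-Newton sketch with backtracking on the residual norm, shown only to illustrate that idea on a toy two-equation system; it is not the OpenGeoSys/PETSc implementation.

```python
# Generic sketch of Newton-Raphson with a backtracking line search, the kind
# of globalization reported to be necessary for robust convergence.
import numpy as np

def newton_line_search(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(jacobian(x), -r)   # full Newton step
        alpha = 1.0
        # backtrack until the residual norm actually decreases
        while np.linalg.norm(residual(x + alpha * dx)) >= np.linalg.norm(r) and alpha > 1e-4:
            alpha *= 0.5
        x = x + alpha * dx
    return x

# toy coupled system: x0**2 + x1 = 3, x0 + x1**2 = 5
res = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_line_search(res, jac, [1.0, 1.0]))
```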
Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand
DeLuca, Samuel; Khar, Karen; Meiler, Jens
2015-01-01
RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
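As a rough illustration of combining the translational and rotational adjustments into a single move, the sketch below perturbs a set of ligand coordinates with one coupled rotation-plus-translation per trial and keeps poses that improve a placeholder score. It is a toy Monte Carlo loop, not RosettaLigand's actual low-resolution mover; the score function, step sizes, and coordinates are all hypothetical, and SciPy is assumed to be available.

```python
# Toy coupled rotation+translation move, for illustration only.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)

def random_transform_step(coords, max_angle_deg=5.0, max_shift=0.5):
    """Apply one coupled rotation+translation about the ligand centroid."""
    centroid = coords.mean(axis=0)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.radians(rng.uniform(-max_angle_deg, max_angle_deg))
    R = Rotation.from_rotvec(angle * axis).as_matrix()
    shift = rng.uniform(-max_shift, max_shift, size=3)
    return (coords - centroid) @ R.T + centroid + shift

ligand = rng.normal(size=(20, 3))                      # hypothetical ligand coordinates (Å)
score = lambda xyz: np.linalg.norm(xyz.mean(axis=0))   # placeholder "binding site" score
pose, best = ligand, score(ligand)
for _ in range(150):                                   # a modest number of trial moves, purely illustrative
    trial = random_transform_step(pose)
    if score(trial) < best:
        pose, best = trial, score(trial)
print(best)
```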
Fok, Carlotta Ching Ting; Henry, David; Allen, James
2015-10-01
The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, and Brown, Prevention Science. 2015) in this special section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples.
NASA Astrophysics Data System (ADS)
Mebrahitom, A.; Rizuan, D.; Azmir, M.; Nassif, M.
2016-02-01
High speed milling (HSM) is one of the recent technologies used to produce mould inserts due to the need for a high surface finish. It is a faster machining process that uses a small side step and a small down step combined with very high spindle speed and feed rate. In order to use the HSM capabilities effectively, optimizing the tool path strategies and machining parameters is an important issue. In this paper, six different tool path strategies have been investigated with respect to the surface finish and machining time of rectangular cavities in ESR Stavax material. The CATIA V5 CAD/CAM machining module for pocket milling of the cavities was used for process planning.
Karasawa, N; Mitsutake, A; Takano, H
2017-12-01
Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.
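A much simplified way to see what "relaxation times of principal components" means is to fit the decay of a principal component's autocorrelation function, as sketched below on a synthetic trajectory with a known correlation time. This is only an illustration of the quantity being estimated, not the authors' two-step relaxation mode analysis, and the parameters are made up.

```python
# Crude relaxation-time estimate from the autocorrelation of a (synthetic)
# principal-component time series; not the two-step RMA method itself.
import numpy as np

def autocorr(x, max_lag):
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * c0) for k in range(max_lag)])

def relaxation_time(pc, dt=1.0, max_lag=200):
    """Integrate the autocorrelation up to its first zero crossing."""
    c = autocorr(pc, max_lag)
    cutoff = np.argmax(c < 0) if np.any(c < 0) else len(c)
    return dt * np.sum(c[:cutoff])

# synthetic PC trajectory with a known correlation time (for demonstration)
rng = np.random.default_rng(2)
tau_true, n = 20.0, 20_000
x = np.empty(n); x[0] = 0.0
for i in range(1, n):   # discrete Ornstein-Uhlenbeck process
    x[i] = x[i - 1] * np.exp(-1.0 / tau_true) + rng.normal() * np.sqrt(1 - np.exp(-2.0 / tau_true))
print(relaxation_time(x))   # roughly 20
```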
The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI
Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain
2018-01-01
Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency. PMID:29497372
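For readers unfamiliar with the graph measures involved, the sketch below builds a binary connectivity graph by thresholding a correlation matrix and evaluates global and local efficiency with networkx. The random data, the 20% edge-density threshold, and the use of networkx are stand-ins for illustration; this is not the preprocessing pipeline or software used in the study.

```python
# Sketch: global and local efficiency of a thresholded correlation graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
ts = rng.normal(size=(200, 90))          # hypothetical: 200 time points, 90 regions
corr = np.corrcoef(ts, rowvar=False)
np.fill_diagonal(corr, 0.0)

density = 0.20                           # keep the strongest 20% of edges (arbitrary choice)
thresh = np.quantile(np.abs(corr[np.triu_indices_from(corr, k=1)]), 1 - density)
adj = (np.abs(corr) >= thresh).astype(int)

G = nx.from_numpy_array(adj)
print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))
```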
Wester, T; Borg, H; Naji, H; Stenström, P; Westbacke, G; Lilja, H E
2014-09-01
Serial transverse enteroplasty (STEP) was first described in 2003 as a method for lengthening and tapering of the bowel in short bowel syndrome. The aim of this multicentre study was to review the outcome of a Swedish cohort of children who underwent STEP. All children who had a STEP procedure at one of the four centres of paediatric surgery in Sweden between September 2005 and January 2013 were included in this observational cohort study. Demographic details, and data from the time of STEP and at follow-up were collected from the case records and analysed. Twelve patients had a total of 16 STEP procedures; four children underwent a second STEP. The first STEP was performed at a median age of 5·8 (range 0·9-19·0) months. There was no death at a median follow-up of 37·2 (range 3·0-87·5) months and no child had small bowel transplantation. Seven of the 12 children were weaned from parenteral nutrition at a median of 19·5 (range 2·3-42·9) months after STEP. STEP is a useful procedure for selected patients with short bowel syndrome and seems to facilitate weaning from parenteral nutrition. At mid-term follow-up a majority of the children had achieved enteral autonomy. The study is limited by the small sample size and lack of a control group. © 2014 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.
NASA Astrophysics Data System (ADS)
Wang, Zhan-zhi; Xiong, Ying
2013-04-01
Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with the single-screw system, it is more difficult to predict the open water performance because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model gives good accuracy in the open water performance prediction of contra-rotating propellers; a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
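The open water performance referred to above is conventionally reported through nondimensional coefficients; the snippet below evaluates the textbook definitions (advance ratio J, thrust and torque coefficients KT and KQ, open-water efficiency η0) for hypothetical model-scale numbers. The formulas are standard, but the values are not from the validation cases.

```python
# Standard open-water coefficients (textbook definitions, illustrative values):
# J = Va/(n*D), KT = T/(rho*n^2*D^4), KQ = Q/(rho*n^2*D^5), eta0 = J*KT/(2*pi*KQ).
import math

def open_water_coefficients(thrust, torque, advance_speed, rps, diameter, rho=1025.0):
    J = advance_speed / (rps * diameter)
    KT = thrust / (rho * rps**2 * diameter**4)
    KQ = torque / (rho * rps**2 * diameter**5)
    eta0 = J * KT / (2.0 * math.pi * KQ)
    return J, KT, KQ, eta0

# hypothetical model-scale numbers purely for illustration
print(open_water_coefficients(thrust=100.0, torque=5.0, advance_speed=1.5, rps=10.0, diameter=0.3))
```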
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
2017-09-17
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
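A minimal way to see the difference between the two snapshot strategies is to build the POD basis from the singular value decomposition of the snapshot matrix, with or without appended time-derivative snapshots. The sketch below does this for a toy trajectory; the finite-difference derivatives and the test data are illustrative assumptions, not the paper's setup.

```python
# POD via SVD: snapshots only (M1) versus snapshots plus time-derivative
# snapshots (M2), on a toy trajectory.
import numpy as np

def pod_basis(snapshots, rank):
    """Columns of `snapshots` are state snapshots; return leading modes and singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank], s

# toy trajectory: states x(t_k) of a small system
t = np.linspace(0.0, 1.0, 21)
dt = t[1] - t[0]
X = np.array([[np.sin(3 * tk), np.cos(3 * tk), np.exp(-tk)] for tk in t]).T  # 3 x 21

# M1: snapshots only
phi1, s1 = pod_basis(X, rank=2)

# M2: augment with finite-difference time-derivative snapshots
dX = np.gradient(X, dt, axis=1)
phi2, s2 = pod_basis(np.hstack([X, dX]), rank=2)

print(s1[:3])
print(s2[:3])
```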
Self-propelled motion of Au-Si droplets on Si(111) mediated by monoatomic step dissolution
NASA Astrophysics Data System (ADS)
Curiotto, S.; Leroy, F.; Cheynis, F.; Müller, P.
2015-02-01
Using Low Energy Electron Microscopy, we show that the spontaneous motion of gold droplets on silicon (111) is chemically driven: the droplets tend to dissolve silicon monoatomic steps to reach the temperature-dependent Au-Si equilibrium stoichiometry. The details of the motion differ according to droplet size. In the first stages of Au deposition, small droplets nucleate at steps and move continuously on single terraces. The droplets temporarily pin at each step they meet during their motion. During pinning, the growing droplets become supersaturated in Au. They depin from the steps when a notch nucleates on the upper step. Then the droplets climb up and locally dissolve the Si steps, leaving behind them deep tracks formed by notched steps. Measurements of the dissolution rate and the displacement lengths enable us to describe the motion mechanism quantitatively, also in terms of the anisotropy of Si dissolution kinetics. Scaling laws for the droplet position as a function of time are proposed: x ∝ t^n with 1/3 < n < 2/3.
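Extracting the exponent n in the scaling law x ∝ t^n quoted above amounts to a straight-line fit in log-log coordinates; a quick sketch on synthetic data with n = 0.5 (inside the reported 1/3 < n < 2/3 range) is shown below. The data are synthetic, purely for illustration.

```python
# Fit the power-law exponent n in x ~ t^n from noisy position-time data.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(1.0, 100.0, 50)
x = 2.0 * t**0.5 * (1.0 + 0.02 * rng.normal(size=t.size))  # synthetic track with n = 0.5

n_fit, log_prefactor = np.polyfit(np.log(t), np.log(x), 1)
print(f"fitted exponent n ≈ {n_fit:.3f}")   # close to 0.5
```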
A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles D.; Moin, Parviz
2002-09-01
A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
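The efficiency gain claimed above comes from replacing the acoustic CFL limit with a convective one; a back-of-the-envelope comparison of the two time-step limits, using illustrative low Mach number values, is shown below.

```python
# Acoustic CFL limit dt <= CFL*dx/(|u|+c) versus the convective limit
# dt <= CFL*dx/|u| targeted by the semi-implicit method. Values are illustrative.
dx = 1.0e-3        # m, grid spacing (hypothetical)
u = 10.0           # m/s, convective velocity
c = 340.0          # m/s, speed of sound
cfl = 0.8

dt_acoustic = cfl * dx / (abs(u) + c)
dt_convective = cfl * dx / abs(u)
print(f"acoustic-limited dt  : {dt_acoustic:.3e} s")
print(f"convective-limited dt: {dt_convective:.3e} s")
print(f"ratio ~ {dt_convective / dt_acoustic:.0f}x larger time step")
```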
Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6
NASA Astrophysics Data System (ADS)
Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.
2017-01-01
This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ˜ 100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ˜ 10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.
Magnetic properties of mechanically alloyed Mn-Al-C powders
NASA Astrophysics Data System (ADS)
Kohmoto, O.; Kageyama, N.; Kageyama, Y.; Haji, H.; Uchida, M.; Matsushima, Y.
2011-01-01
We have prepared supersaturated-solution Mn-Al-C alloy powders by mechanical alloying using a planetary high-energy mill. The starting materials were pure Mn, Al and C powders. The mechanically-alloyed powders were subjected to two-step heating. Although the starting particles are Al and Mn with additive C, the Al peak disappears with mechanical alloying (MA) time. With increasing MA time, the transition from α-Mn to β-Mn does not occur; the α-Mn structure is maintained. At 100 h, a single phase of supersaturated-solution α-Mn is obtained. The lattice constant of α-Mn decreases with increasing MA time. From the Scherrer formula, the crystallite size at 500 h is obtained as 200 Å, which does not indicate an amorphous state. By two-step heating, high magnetization (66 emu/g) was obtained from short-time-milled powders (t=10 h). The precursor of the as-milled powder is not single-phase α-Mn but contains a small amount of fcc Al. After two-step heating, the powder changes to τ-phase. Although the saturation magnetization increases, the value is less than that of conventional bulk MnAl (88 emu/g). Meanwhile, long-time-milled powder of single α-Mn phase results in low magnetization (5.2 emu/g) after two-step heating.
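The Scherrer estimate mentioned above is D = Kλ/(β cos θ); the snippet below evaluates it for illustrative Cu Kα diffraction numbers chosen so the result lands near the quoted ~200 Å. The inputs are hypothetical, not the paper's measured peak widths.

```python
# Scherrer crystallite-size estimate D = K * lambda / (beta * cos(theta)).
# Illustrative inputs (Cu K-alpha, shape factor K = 0.9), not the paper's data.
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    beta = math.radians(fwhm_deg)              # peak width (FWHM) in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

size_nm = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.45, two_theta_deg=43.0)
print(f"crystallite size ≈ {size_nm:.1f} nm ≈ {size_nm * 10:.0f} Å")
```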
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
NASA Astrophysics Data System (ADS)
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method which is physically consistent with the hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
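Step 1 of the proposed generator is easy to caricature: a two-state (rise/recession) Markov chain whose rising-limb increments come from a Gamma distribution and whose falling limb decays exponentially. The toy sketch below does exactly that with made-up transition probabilities and parameters; it illustrates the structure only, not the calibrated model.

```python
# Toy two-state (rise/recession) Markov chain daily flow generator.
import numpy as np

rng = np.random.default_rng(5)

def generate_daily_flow(n_days=365, q0=10.0,
                        p_stay_rise=0.4, p_stay_fall=0.8,
                        gamma_shape=2.0, gamma_scale=5.0, recession_k=0.15):
    q = np.empty(n_days)
    q[0], state = q0, "fall"
    for d in range(1, n_days):
        if state == "rise":
            state = "rise" if rng.random() < p_stay_rise else "fall"
        else:
            state = "fall" if rng.random() < p_stay_fall else "rise"
        if state == "rise":
            q[d] = q[d - 1] + rng.gamma(gamma_shape, gamma_scale)  # random rise increment
        else:
            q[d] = q[d - 1] * np.exp(-recession_k)                 # exponential recession
    return q

print(generate_daily_flow()[:10])
```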
76 FR 10082 - Office of International Trade; State Trade and Export Promotion (STEP) Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-23
... (STEP) Grant Program AGENCY: U.S. Small Business Administration (SBA). ACTION: Notice of grant... Program Announcement No. OIT-STEP-2011- 01 to invite the States, the District of Columbia and the U.S. Territories to apply for a STEP grant to carry out export promotion programs that assist eligible small...
Cappione, Amedeo; Mabuchi, Masaharu; Briggs, David; Nadler, Timothy
2015-04-01
Protein immuno-detection encompasses a broad range of analytical methodologies, including western blotting, flow cytometry, and microscope-based applications. These assays, which detect, quantify, and/or localize expression for one or more proteins in complex biological samples, are reliant upon fluorescent or enzyme-tagged target-specific antibodies. While small molecule labeling kits are available with a range of detection moieties, the workflow is hampered by a requirement for multiple dialysis-based buffer exchange steps that are both time-consuming and subject to sample loss. In a previous study, we briefly described an alternative method for small-scale protein labeling with small molecule dyes whereby all phases of the conjugation workflow could be performed in a single centrifugal diafiltration device. Here, we expand on this foundational work, addressing functionality of the device at each step in the workflow (sample cleanup, labeling, unbound dye removal, and buffer exchange/concentration) and the implications for optimizing labeling efficiency. When compared to other common buffer exchange methodologies, centrifugal diafiltration offered superior performance as measured by four key parameters (process time, desalting capacity, protein recovery, and retention of functional integrity). Originally designed for resin-based affinity purification, the device also provides a platform for up-front antibody purification or albumin carrier removal. Most significantly, by exploiting the rapid kinetics of NHS-based labeling reactions, the process of continuous diafiltration minimizes reaction time and long exposure to excess dye, guaranteeing maximal target labeling while limiting the risks associated with over-labeling. Overall, the device offers a simplified workflow with reduced processing time and hands-on requirements, without sacrificing labeling efficiency, final yield, or conjugate performance. Copyright © 2015 Elsevier B.V. All rights reserved.
Flowfield predictions for multiple body launch vehicles
NASA Technical Reports Server (NTRS)
Deese, Jerry E.; Pavish, D. L.; Johnson, Jerry G.; Agarwal, Ramesh K.; Soni, Bharat K.
1992-01-01
A method is developed for simulating inviscid and viscous flow around multicomponent launch vehicles. Grids are generated by the GENIE general-purpose grid-generation code, and the flow solver is a finite-volume Runge-Kutta time-stepping method. Turbulence effects are simulated using the Baldwin and Lomax (1978) turbulence model. Calculations are presented for three multibody launch vehicle configurations: one with two small-diameter solid motors, one with nine small-diameter solid motors, and one with three large-diameter solid motors.
Crenshaw, Jeremy R; Rosenblatt, Noah J; Hurt, Christopher P; Grabiner, Mark D
2012-01-03
This study evaluated the discriminant capability of stability measures, trunk kinematics, and step kinematics to classify successful and failed compensatory stepping responses. In addition, the shared variance between stability measures, step kinematics, and trunk kinematics is reported. The stability measures included the anteroposterior distance (d) between the body center of mass and the stepping limb toe, the margin of stability (MOS), as well as time-to-boundary considering velocity (TTB(v)), velocity and acceleration (TTB(a)), and MOS (TTB(MOS)). Kinematic measures included trunk flexion angle and angular velocity, step length, and the time after disturbance onset of recovery step completion. Fourteen young adults stood on a treadmill that delivered surface accelerations necessitating multiple forward compensatory steps. Thirteen subjects fell from an initial disturbance, but recovered from a second, identical disturbance. Trunk flexion velocity at completion of the first recovery step and trunk flexion angle at completion of the second step had the greatest overall classification of all measures (92.3%). TTB(v) and TTB(a) at completion of both steps had the greatest classification accuracy of all stability measures (80.8%). The length of the first recovery step (r ≤ 0.70) and trunk flexion angle at completion of the second recovery step (r ≤ -0.54) had the largest correlations with stability measures. Although TTB(v) and TTB(a) demonstrated somewhat smaller discriminant capabilities than trunk kinematics, the small correlations between these stability measures and trunk kinematics (|r| ≤ 0.52) suggest that they reflect two important, yet different, aspects of a compensatory stepping response. Copyright © 2011 Elsevier Ltd. All rights reserved.
Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H
1997-01-01
The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min., about 30 times faster than manual solvent extraction or single cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.
NASA Astrophysics Data System (ADS)
Leier, André; Marquez-Lago, Tatiana T.; Burrage, Kevin
2008-05-01
The delay stochastic simulation algorithm (DSSA) by Barrio et al. [Plos Comput. Biol. 2, 117(E) (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled such as transcription and translation, basic in the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small causing considerable computational overheads. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her 7 model for coupled oscillating cells.
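One ingredient of binomial leaping is easy to show in isolation: for a single first-order reaction A → ∅ with rate c, the number of firings during a leap τ can be drawn as Binomial(N_A, 1 − e^(−cτ)), which by construction never exceeds the available molecules. The sketch below implements just that toy case; the full binomial τ-DSSA with delays and coupled reactions described in the abstract is considerably more involved.

```python
# Toy binomial-leap update for a single first-order decay reaction A -> 0.
import numpy as np

rng = np.random.default_rng(6)

def binomial_leap_decay(n0=1000, c=0.1, tau=0.5, t_end=50.0):
    t, n = 0.0, n0
    history = [(t, n)]
    while t < t_end and n > 0:
        # each molecule reacts within tau with probability 1 - exp(-c*tau)
        fired = rng.binomial(n, 1.0 - np.exp(-c * tau))  # cannot exceed n
        n -= fired
        t += tau
        history.append((t, n))
    return history

print(binomial_leap_decay()[:5])
```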
A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)
Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...
Bacteria transport simulation using APEX model in the Toenepi watershed, New Zealand
USDA-ARS?s Scientific Manuscript database
The Agricultural Policy/Environmental eXtender (APEX) model is a distributed, continuous, daily-time-step, small-watershed-scale hydrologic and water quality model. In this study, the newly developed fecal-derived bacteria fate and transport subroutine was applied and evaluated using the APEX model. The e...
"Future Proofing" Faculty: The Struggle To Create Technical Lifelong Learners.
ERIC Educational Resources Information Center
Nay, Fred W.; Malm, Loren D.; Malone, Bobby G.; Oliver, Brad E.; Saunders, Nancy G.; Thompson, Jay C., Jr.
College faculty can avoid investing valuable time and resources in inappropriate technologies by staying in step with technological progress. A "future proof" approach to technology recognizes and welcomes small failures, considering them part of the ongoing process of absorbing technology into the learning process. Future proofing attempts to…
Patterned titania nanostructures produced by electrochemical anodization of titanium sheet
NASA Astrophysics Data System (ADS)
Dong, Junzhe; Ariyanti, Dessy; Gao, Wei; Niu, Zhenjiang; Weil, Emeline
2017-07-01
A two-step anodization method has been used to produce patterned arrays of TiO2 on the surface of a Ti sheet. Hexagonal ripples were created on the Ti substrate after removing the TiO2 layer produced by the first-step anodization. The shallow concaves served as ideal positions for the subsequent anodization step due to their low electrical resistance, resulting in novel hierarchical nanostructures with small pits inside the original ripples. The mechanism of morphology evolution during patterned anodization was studied by varying the anodizing voltage and duration. This work provides a new approach for controlling nanostructures and thus tailoring the photocatalytic properties and wettability of anodic TiO2.
Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model
NASA Astrophysics Data System (ADS)
Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko
2015-04-01
One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinite small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey on 242 peer-viewed articles in which the Variable Infiltration Capacity (VIC) model was used, showed that the spatial resolution at which the model is applied has decreased over the past 17 years: From 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand the literature survey showed that the time step at which the model is calibrated and/or validated remained the same over the last 17 years; mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1/24 degree, if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher spatial resolution-model for different time steps. In order to do this, four different VIC models were constructed for the Thur basin in North-Eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model, and three spatially distributed models with a resolution of respectively 1x1 km, 5x5 km, and 10x10 km. All models are run at an hourly time step and aggregated and calibrated for different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube Sampling Technique (Vořechovský, 2014). For each time and space scale, several diagnostics like Nash-Sutcliffe efficiency, Kling-Gupta efficiency, all the quantiles of the discharge etc., are calculated in order to compare model performance over different time and space scales for extreme events like floods and droughts. Next to that, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to find a link for optimal time and space scale combinations.
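For reference, the two headline diagnostics named above have compact standard definitions; the snippet below computes the Nash-Sutcliffe and Kling-Gupta efficiencies for a pair of short illustrative discharge series. These are the textbook formulas, not code from the study.

```python
# Standard model-evaluation diagnostics: Nash-Sutcliffe and Kling-Gupta efficiency.
import numpy as np

def nash_sutcliffe(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kling_gupta(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([3.2, 4.1, 8.5, 6.0, 5.2, 4.4])   # illustrative discharge values
sim = np.array([3.0, 4.5, 7.9, 6.4, 5.0, 4.1])
print(nash_sutcliffe(sim, obs), kling_gupta(sim, obs))
```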
A family of compact high order coupled time-space unconditionally stable vertical advection schemes
NASA Astrophysics Data System (ADS)
Lemarié, Florian; Debreu, Laurent
2016-04-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost.
Zadravec, Matjaž; Olenšek, Andrej; Matjačić, Zlatko
2017-08-09
Treadmills are used frequently in rehabilitation, enabling neurologically impaired subjects to train walking while being assisted by therapists. Numerous studies have compared treadmill and overground walking under unperturbed, but not perturbed, conditions. The objective of this study was to compare stepping responses (step length, step width and step time) during overground and treadmill walking in a group of healthy subjects in which balance assessment robots applied perturbing pushes to the subject's pelvis in the sagittal and frontal planes. During walking in both balance assessment robots (overground and treadmill-based) with applied perturbations, the stepping responses of a group of seven healthy subjects were assessed with a motion tracking camera. The results show a high degree of similarity of stepping responses between overground and treadmill walking for all perturbation directions. Both devices reproduced similar experimental conditions with relatively small standard deviations in the unperturbed walking as well as in perturbed walking. Based on these results we may conclude that stepping responses following perturbations can be studied on an instrumented treadmill, where ground reaction forces can be readily assessed, which is not the case during perturbed overground walking.
Long time stability of small-amplitude Breathers in a mixed FPU-KG model
NASA Astrophysics Data System (ADS)
Paleari, Simone; Penati, Tiziano
2016-12-01
In the limit of small couplings in the nearest neighbor interaction, and small total energy, we apply the resonant normal form result of a previous paper of ours to a finite but arbitrarily large mixed Fermi-Pasta-Ulam Klein-Gordon chain, i.e., with both linear and nonlinear terms in both the on-site and interaction potential, with periodic boundary conditions. An existence and orbital stability result for Breathers of such a normal form, which turns out to be a generalized discrete nonlinear Schrödinger model with exponentially decaying all neighbor interactions, is first proved. Exploiting such a result as an intermediate step, a long time stability theorem for the true Breathers of the KG and FPU-KG models, in the anti-continuous limit, is proven.
Wealth redistribution in our small world
NASA Astrophysics Data System (ADS)
Iglesias, J. R.; Gonçalves, S.; Pianegonda, S.; Vega, J. L.; Abramson, G.
2003-09-01
We present a simplified model for the exploitation of resources by interacting agents, in an economy with small-world properties. It is shown that Gaussian distributions of wealth, with some cutoff at a poverty line, are present for all values of the parameters, while the frequency of maxima and minima strongly depends on the connectivity and the disorder of the lattice. Finally, we compare a system where the commercial links are frozen with an economy where agents can choose their commercial partners at each time step.
Gait Coordination in Parkinson Disease: Effects of Step Length and Cadence Manipulations
Williams, April J.; Peterson, Daniel S.; Earhart, Gammon M.
2013-01-01
Background: Gait impairments are well documented in those with PD. Prior studies suggest that gait impairments may be worse and ongoing in those with PD who demonstrate FOG compared to those with PD who do not. Purpose: Our aim was to determine the effects of manipulating step length and cadence individually, and together, on gait coordination in those with PD who experience FOG, those with PD who do not experience FOG, healthy older adults, and healthy young adults. Methods: Eleven participants with PD and FOG, 16 with PD and no FOG, 18 healthy older, and 19 healthy young adults walked across a GAITRite walkway under four conditions: Natural, Fast (+50% of preferred cadence), Small (−50% of preferred step length), and SmallFast (+50% cadence and −50% step length). Coordination (i.e. phase coordination index) was measured for each participant during each condition and analyzed using mixed model repeated measure ANOVAs. Results: FOG was not elicited. Decreasing step length or decreasing step length and increasing cadence together affected coordination. Small steps combined with fast cadence resulted in poorer coordination in both groups with PD compared to healthy young adults and in those with PD and FOG compared to healthy older adults. Conclusions: Coordination deficits can be identified in those with PD by having them walk with small steps combined with fast cadence. Short steps produced at high rate elicit worse coordination than short steps or fast steps alone. PMID:23333356
EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES
Börgers, Christoph; Nectow, Alexander R.
2013-01-01
Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
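The first-order ETD scheme discussed above has a closed form for a scalar ODE u' = cu + F(u): u_{n+1} = u_n e^{ch} + F(u_n)(e^{ch} − 1)/c. The sketch below applies it to a generic stiff-linear toy problem (not a Hodgkin-Huxley model) with a step size at which forward Euler would be unstable.

```python
# ETD1 for a scalar ODE u' = c*u + F(u): treat the linear part exactly,
# the nonlinear part explicitly.
import numpy as np

def etd1(c, F, u0, h, n_steps):
    u = np.empty(n_steps + 1)
    u[0] = u0
    e = np.exp(c * h)
    phi = (e - 1.0) / c          # integrating-factor weight for the nonlinear term
    for n in range(n_steps):
        u[n + 1] = e * u[n] + phi * F(u[n])
    return u

# toy problem: u' = -50*u + sin(u); the linear part is stiff, the nonlinear part mild.
# With h = 0.05, forward Euler on the linear part would be unstable (|1 + c*h| = 1.5 > 1).
c, F = -50.0, np.sin
u = etd1(c, F, u0=1.0, h=0.05, n_steps=200)
print(u[-1])   # decays toward the fixed point u = 0 without instability
```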
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Small-signal models are derived for the power stage of the voltage step-up (boost) and the current step-up (buck) converters. The modeling covers operation in both the continuous-mmf mode and the discontinuous-mmf mode. The power stage in the regulated current step-up converter on board the Dynamics Explorer Satellite is used as an example to illustrate the procedures in obtaining the small-signal functions characterizing a regulated converter.
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
Introduction to multifractal detrended fluctuation analysis in matlab.
Ihlen, Espen A F
2012-01-01
Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA) that estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step-by-step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at the website www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes where the reader can apply pieces of, or the entire, MFDFA to example time series. After introducing MFDFA, the tutorial discusses the best practice of MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple self-sustained guide to the implementation of MFDFA and interpretation of the resulting multifractal spectra.
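As a compact companion to the tutorial's description, the sketch below walks through the core MFDFA steps in Python (profile, non-overlapping segments, polynomial detrending, q-order fluctuation functions, and the generalized Hurst exponent h(q)). Parameter choices are illustrative and the code is independent of the tutorial's Matlab tools; for white noise, h(q) should come out near 0.5 for all q.

```python
# Compact MFDFA sketch: profile, segmentation, detrending, F_q(s), h(q).
import numpy as np

def mfdfa(x, scales, q_values, poly_order=1):
    profile = np.cumsum(x - np.mean(x))
    log_scales = np.log(scales)
    Fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        variances = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, poly_order), t)
            variances.append(np.mean((seg - trend) ** 2))
        variances = np.array(variances)
        for i, q in enumerate(q_values):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(variances)))
            else:
                Fq[i, j] = np.mean(variances ** (q / 2.0)) ** (1.0 / q)
    # slope of log F_q(s) vs log s gives the generalized Hurst exponent h(q)
    return {q: np.polyfit(log_scales, np.log(Fq[i]), 1)[0] for i, q in enumerate(q_values)}

rng = np.random.default_rng(7)
white_noise = rng.normal(size=4096)
print(mfdfa(white_noise, scales=[16, 32, 64, 128, 256], q_values=[-2, 0, 2]))
```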
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 141.81 Applicability of corrosion control treatment steps to small, medium-size and large water systems (40 CFR, Protection of Environment).
Code of Federal Regulations, 2013 CFR
2013-07-01
§ 141.81 Applicability of corrosion control treatment steps to small, medium-size and large water systems (40 CFR, Protection of Environment).
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 141.81 Applicability of corrosion control treatment steps to small, medium-size and large water systems (40 CFR, Protection of Environment).
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 141.81 Applicability of corrosion control treatment steps to small, medium-size and large water systems (40 CFR, Protection of Environment).
Quantum transport with long-range steps on Watts-Strogatz networks
NASA Astrophysics Data System (ADS)
Wang, Yan; Xu, Xin-Jian
2016-07-01
We study transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN), which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using the localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1), as long-range interactions are intensified. The structural disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that only the presence of the long-range steps does not affect the efficiency of the coherent exciton transport, while only the allowance of random rewiring enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened, and the transport becomes worse.
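As a minimal illustration of the DNLS part of this setup, the sketch below integrates the discrete nonlinear Schrödinger equation on a plain ring (no long-range steps or rewiring) and accumulates the time-averaged occupation probability of the initially excited site. The ring size, coupling, nonlinearity and integration time are illustrative assumptions.

```python
# Minimal sketch: DNLS dynamics on a plain ring (no long-range steps, no rewiring),
# with the time-averaged occupation probability of the initially excited site.
# Ring size, coupling, nonlinearity and integration time are illustrative choices.
import numpy as np

N, eps, chi = 32, -1.0, 2.0          # sites, coupling, nonlinear parameter
dt, n_steps = 0.005, 40000

def rhs(c):
    # i dc_n/dt = eps (c_{n+1} + c_{n-1}) + chi |c_n|^2 c_n
    neigh = np.roll(c, 1) + np.roll(c, -1)
    return -1j * (eps * neigh + chi * np.abs(c) ** 2 * c)

c = np.zeros(N, dtype=complex)
c[0] = 1.0                            # localized initial condition on site 0
p0_sum = 0.0
for _ in range(n_steps):              # classical 4th-order Runge-Kutta steps
    k1 = rhs(c)
    k2 = rhs(c + 0.5 * dt * k1)
    k3 = rhs(c + 0.5 * dt * k2)
    k4 = rhs(c + dt * k3)
    c += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    p0_sum += np.abs(c[0]) ** 2

print("time-averaged occupation of the initial site:", p0_sum / n_steps)
print("norm (should stay close to 1):", np.sum(np.abs(c) ** 2))
```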
Real time implementation and control validation of the wind energy conversion system
NASA Astrophysics Data System (ADS)
Sattar, Adnan
The purpose of the thesis is to analyze the dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). There are different power system simulation tools available in the market, and the RTDS is one of the most powerful among them. The RTDS simulator has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control analysis. The hardware is based on digital signal processors mounted in racks. The RTDS simulator has the advantage of interfacing real-world signals from external devices and is therefore used to test protection and control equipment. Dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), as a flexible AC transmission system (FACTS) device, is used to enhance the fault ride through (FRT) capability of a fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and the VSC large time-step of RTDS. The simulation results of the RTDS model system are compared with the off-line EMTP software PSCAD/EMTDC. A new operational scheme for a MW-class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control real and reactive power simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a three-phase realistic grid is adopted with the RSCAD simulation through the optical analogue digital converter (OADC) card of the RTDS. Steady-state and LVRT analyses are carried out to validate the proposed operational scheme. The simulation results show good agreement with the real-time simulation software and can thus be used to validate controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. The work also focuses on the real-time implementation of a sodium sulfur (NaS) type BESS. The BESS is integrated with the STATCOM. The main advantage of this system is that it can also provide reactive power support to the system along with the real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for control of the STATCOM, and a suitable control is developed for the charging/discharging of the NaS type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using the RTDS for dynamic and transient characteristic analyses of wind farms are also clearly demonstrated.
USDA-ARS?s Scientific Manuscript database
AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...
A highly accurate boundary integral equation method for surfactant-laden drops in 3D
NASA Astrophysics Data System (ADS)
Sorgentone, Chiara; Tornberg, Anna-Karin
2018-05-01
The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is especially true at small scales, such as in applications of droplet based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy also when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops also under deformation; specialized quadrature methods for the singular and nearly singular integrals that appear in the formulation are invoked; and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.
Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick
2017-03-09
Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
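A minimal sketch of the kind of simulation described above: generating cross-sectional binary data from a stepped wedge design with few clusters, a time trend and a cluster random effect, then fitting one of the candidate analysis models. For brevity the sketch fits a GEE with an exchangeable working correlation via statsmodels rather than the generalised linear mixed model the authors ultimately recommend; the cluster counts, effect sizes and variance components are illustrative assumptions.

```python
# Minimal sketch: simulate one cross-sectional stepped wedge trial with a binary
# outcome (few clusters, a time trend, a cluster random effect) and analyse it.
# A GEE with exchangeable working correlation is shown for brevity; the paper
# recommends a generalised linear mixed model. All numbers are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clusters, n_periods, n_per_cell = 6, 4, 30         # baseline period + 3 crossover steps
rows = []
for c in range(n_clusters):
    crossover = 1 + c % (n_periods - 1)               # two clusters randomised per step
    u = rng.normal(0.0, 0.3)                          # cluster random effect (log-odds)
    for t in range(n_periods):
        treated = int(t >= crossover)
        logit = -0.5 + 0.2 * t + 0.6 * treated + u    # time trend + intervention effect
        p = 1.0 / (1.0 + np.exp(-logit))
        y = rng.binomial(1, p, size=n_per_cell)
        rows += [{"y": int(yi), "cluster": c, "time": t, "treat": treated} for yi in y]

df = pd.DataFrame(rows)
model = smf.gee("y ~ treat + C(time)", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```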
NASA Astrophysics Data System (ADS)
Koskelo, Antti I.; Fisher, Thomas R.; Utz, Ryan M.; Jordan, Thomas E.
2012-07-01
SummaryBaseflow separation methods are often impractical, require expensive materials and time-consuming methods, and/or are not designed for individual events in small watersheds. To provide a simple baseflow separation method for small watersheds, we describe a new precipitation-based technique known as the Sliding Average with Rain Record (SARR). The SARR uses rainfall data to justify each separation of the hydrograph. SARR has several advantages such as: it shows better consistency with the precipitation and discharge records, it is easier and more practical to implement, and it includes a method of event identification based on precipitation and quickflow response. SARR was derived from the United Kingdom Institute of Hydrology (UKIH) method with several key modifications to adapt it for small watersheds (<50 km2). We tested SARR on watersheds in the Choptank Basin on the Delmarva Peninsula (US Mid-Atlantic region) and compared the results with the UKIH method at the annual scale and the hydrochemical method at the individual event scale. Annually, SARR calculated a baseflow index that was ˜10% higher than the UKIH method due to the finer time step of SARR (1 d) compared to UKIH (5 d). At the watershed scale, hydric soils were an important driver of the annual baseflow index likely due to increased groundwater retention in hydric areas. At the event scale, SARR calculated less baseflow than the hydrochemical method, again because of the differences in time step (hourly for hydrochemical) and different definitions of baseflow. Both SARR and hydrochemical baseflow increased with event size, suggesting that baseflow contributions are more important during larger storms. To make SARR easy to implement, we have written a MatLab program to automate the calculations which requires only daily rainfall and daily flow data as inputs.
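The SARR algorithm itself is distributed as a MatLab program by the authors; as a rough illustration of the turning-point logic it inherits from the UKIH method, the sketch below applies a UKIH-style separation to a daily flow series and only accepts a turning point when rainfall on the surrounding days is below a threshold. The block length, the 0.9 turning-point factor and the rain threshold are illustrative assumptions, not the published SARR parameters.

```python
# Rough illustration of a UKIH-style baseflow separation with a simple
# rain-record check, in the spirit of SARR.  Block length, the 0.9 turning-point
# factor and the rain threshold are illustrative, not the published parameters.
import numpy as np

def ukih_style_baseflow(q, rain, block=5, factor=0.9, rain_max=2.0):
    n = len(q)
    # 1) block minima and their positions
    mins, pos = [], []
    for start in range(0, n - block + 1, block):
        i = start + int(np.argmin(q[start:start + block]))
        mins.append(q[i]); pos.append(i)
    # 2) turning points: a minimum clearly lower than both neighbours, accepted
    #    only if the surrounding days were (nearly) rain free
    tp_i, tp_q = [], []
    for k in range(1, len(mins) - 1):
        if factor * mins[k] < min(mins[k - 1], mins[k + 1]):
            lo, hi = max(0, pos[k] - 1), min(n, pos[k] + 2)
            if rain[lo:hi].sum() <= rain_max:
                tp_i.append(pos[k]); tp_q.append(mins[k])
    # 3) linear interpolation between turning points, capped by observed flow
    baseflow = np.full(n, np.nan)
    if len(tp_i) >= 2:
        baseflow[tp_i[0]:tp_i[-1] + 1] = np.interp(
            np.arange(tp_i[0], tp_i[-1] + 1), tp_i, tp_q)
        baseflow = np.minimum(baseflow, q)
    return baseflow

rng = np.random.default_rng(3)
rain = rng.gamma(0.3, 5.0, 365)                      # synthetic daily rainfall (mm)
q = 1.0 + 0.05 * np.convolve(rain, np.exp(-np.arange(20) / 4.0), mode="full")[:365]
bf = ukih_style_baseflow(q, rain)
print("baseflow index:", np.nansum(bf) / q.sum())
```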
NASA Astrophysics Data System (ADS)
Kunimura, Shinsuke; Ohmori, Hitoshi
We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result of a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal bonded abrasive wheel, then a metal-resin bonded abrasive wheel, followed by a conductive rubber bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively in this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, peak to valley (PV) and root mean square (rms) values in an area of 0.72 x 0.54 mm2 on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro roughness were efficiently reduced by performing ELID grinding using the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and residual small irregularities were reduced by short time MRF. This process makes it possible to produce flat and smooth surfaces in several hours.
Tang, Guoping; Yuan, Fengming; Bisht, Gautam; ...
2016-01-01
Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentration, which is not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions and test the implementation at arctic, temperate, and tropical sites. We examine use of scaling back the update during each iteration (SU), log transformation (LT), and downregulating the reaction rate to account for reactant availability limitation to enforce nonnegativity. Both SU and LT guarantee nonnegativity but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. For a zero-order rate, or when the reaction rate is not a function of a reactant, representing the availability limitation of each reactant with a Monod substrate limiting function provides a smooth transition between a zero-order rate when the reactant is abundant and a first-order rate when the reactant becomes limiting. When the half saturation is small, marching through the transition may require small time step sizes to resolve the sharp change within a small range of concentration values. Our results from simple tests and CLM-PFLOTRAN simulations caution against use of SU and indicate that accurate, stable, and relatively efficient solutions can be achieved with LT and downregulation with a Monod substrate limiting function and residual concentration.
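The downregulation idea in the last abstract can be condensed into a few lines: the consumption rate of a reactant is multiplied by a Monod substrate-limiting function with a small residual concentration, so the rate falls smoothly to zero before the concentration can go negative. The rate constant, half-saturation, residual and time step below are illustrative assumptions, not CLM-PFLOTRAN values.

```python
# Minimal sketch of rate downregulation with a Monod substrate-limiting function
# plus a residual concentration, keeping an explicit update non-negative.
# Rate constant, half-saturation, residual and time step are illustrative only.
k, half_sat, residual = 0.5, 0.01, 1e-6   # zero-order rate, Monod half-saturation, residual

def downregulated_rate(c):
    # full rate k when c >> half_sat, roughly first-order (k/half_sat)*(c - residual)
    # when the reactant becomes limiting
    avail = max(c - residual, 0.0)
    return k * avail / (avail + half_sat)

c, dt = 0.05, 0.01                        # dt*k/half_sat = 0.5 < 1, so the explicit
for step in range(200):                   # step cannot overshoot past the residual
    c -= dt * downregulated_rate(c)       # explicit consumption step
print("final concentration (stays positive):", c)
```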
Investigation of the Dynamics of Low-Tension Cables
1992-06-01
An implicit time-domain routine is necessary, as the high propagation speed of elastic waves would require a prohibitively small time step in an explicit scheme. Singularities are avoided by ensuring smooth curvature; however, sustained boundary layers are found to develop, demonstrating the importance of the underlying physical behaviour of inextensible and elastic chains (EA* = 4.0 x 10^3).
Analysis of the tsunami generated by the MW 7.8 1906 San Francisco earthquake
Geist, E.L.; Zoback, M.L.
1999-01-01
We examine possible sources of a small tsunami produced by the 1906 San Francisco earthquake, recorded at a single tide gauge station situated at the opening to San Francisco Bay. Coseismic vertical displacement fields were calculated using elastic dislocation theory for geodetically constrained horizontal slip along a variety of offshore fault geometries. Propagation of the ensuing tsunami was calculated using a shallow-water hydrodynamic model that takes into account the effects of bottom friction. The observed amplitude and negative pulse of the first arrival are shown to be inconsistent with small vertical displacements (~4-6 cm) arising from pure horizontal slip along a continuous right bend in the San Andreas fault offshore. The primary source region of the tsunami was most likely a recently recognized 3 km right step in the San Andreas fault that is also the probable epicentral region for the 1906 earthquake. Tsunami models that include the 3 km right step with pure horizontal slip match the arrival time of the tsunami, but underestimate the amplitude of the negative first-arrival pulse. Both the amplitude and time of the first arrival are adequately matched by using a rupture geometry similar to that defined for the 1995 MW (moment magnitude) 6.9 Kobe earthquake: i.e., fault segments dipping toward each other within the stepover region (83° dip, intersecting at 10 km depth) and a small component of slip in the dip direction (rake = -172°). Analysis of the tsunami provides confirming evidence that the 1906 San Francisco earthquake initiated at a right step in a right-lateral fault and propagated bilaterally, suggesting a rupture initiation mechanism similar to that for the 1995 Kobe earthquake.
STEP UP for American Small Businesses Act
Sen. Cantwell, Maria [D-WA
2014-12-09
Senate - 12/09/2014: Read twice and referred to the Committee on Small Business and Entrepreneurship. Status of legislation: Introduced.
Small Angle Neutron Scattering Observation of Chain Retraction after a Large Step Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.; Heinrich, M.; Pyckhout-Hintzen, W.
The process of retraction in entangled linear chains after a fast nonlinear stretch was detected from time-resolved but quenched small angle neutron scattering (SANS) experiments on long, well-entangled polyisoprene chains. The statically obtained SANS data cover the relevant time regime for retraction, and they provide a direct, microscopic verification of this nonlinear process as predicted by the tube model. Clear, quantitative agreement is found with recent theories of contour length fluctuations and convective constraint release, using parameters obtained mainly from linear rheology. The theory captures the full range of scattering vectors once the crossover to fluctuations on length scales below the tube diameter is accounted for.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, have been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem, they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Kang, Xinchen; Zhang, Jianling; Shang, Wenting; Wu, Tianbin; Zhang, Peng; Han, Buxing; Wu, Zhonghua; Mo, Guang; Xing, Xueqing
2014-03-12
Stable porous ionic liquid-water gel induced by inorganic salts was created for the first time. The porous gel was used to develop a one-step method to synthesize supported metal nanocatalysts. Au/SiO2, Ru/SiO2, Pd/Cu(2-pymo)2 metal-organic framework (Cu-MOF), and Au/polyacrylamide (PAM) were synthesized, in which the supports had hierarchical meso- and macropores, the size of the metal nanocatalysts could be very small (<1 nm), and the size distribution was very narrow even when the metal loading amount was as high as 8 wt %. The catalysts were extremely active, selective, and stable for oxidative esterification of benzyl alcohol to methyl benzoate, benzene hydrogenation to cyclohexane, and oxidation of benzyl alcohol to benzaldehyde because they combined the advantages of the nanocatalysts of small size and hierarchical porosity of the supports. In addition, this method is very simple.
A MODFLOW Infiltration Device Package for Simulating Storm Water Infiltration.
Jeppesen, Jan; Christensen, Steen
2015-01-01
This article describes a MODFLOW Infiltration Device (INFD) Package that can simulate infiltration devices and their two-way interaction with groundwater. The INFD Package relies on a water balance including inflow of storm water, leakage-like seepage through the device faces, overflow, and change in storage. The water balance for the device can be simulated in multiple INFD time steps within a single MODFLOW time step, and infiltration from the device can be routed through the unsaturated zone to the groundwater table. A benchmark test shows that the INFD Package's analytical solution for stage computes exact results for transient behavior. To achieve similar accuracy by the numerical solution of the MODFLOW Surface-Water Routing (SWR1) Process requires many small time steps. Furthermore, the INFD Package includes an improved representation of flow through the INFD sides that results in lower infiltration rates than simulated by SWR1. The INFD Package is also demonstrated in a transient simulation of a hypothetical catchment where two devices interact differently with groundwater. This simulation demonstrates that device and groundwater interaction depends on the thickness of the unsaturated zone because a shallow groundwater table (a likely result from storm water infiltration itself) may occupy retention volume, whereas a thick unsaturated zone may cause a phase shift and a change of amplitude in groundwater table response to a change of infiltration. We thus find that the INFD Package accommodates the simulation of infiltration devices and groundwater in an integrated manner on small as well as large spatial and temporal scales. © 2014, National Ground Water Association.
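As a rough illustration of the kind of device water balance the INFD Package solves (inflow of storm water, leakage-like seepage through the device faces, overflow and change in storage), the sketch below advances the stage of a single idealised device with several explicit sub-steps per outer time step. The geometry, leakage coefficient and inflow series are illustrative assumptions; the actual package solves the stage analytically and routes infiltration through the unsaturated zone.

```python
# Rough sketch of an infiltration-device water balance: inflow, leakage-like
# seepage, overflow and storage change, advanced with several sub-steps per
# outer (MODFLOW-like) time step.  The real INFD Package solves the stage
# analytically; geometry and coefficients here are illustrative assumptions.
area, depth_max = 20.0, 1.5          # device plan area (m2) and overflow level (m)
c_leak = 5e-4                        # leakage coefficient through device faces (1/s)
head_gw = 0.2                        # groundwater head above the device bottom (m)

def advance(stage, inflow, dt_outer, n_sub=20):
    dt = dt_outer / n_sub
    overflow_vol = 0.0
    for _ in range(n_sub):
        # leakage can be negative when the groundwater head exceeds the stage
        # (two-way device-groundwater interaction)
        leak = c_leak * area * (stage - head_gw)             # m3/s
        stage += dt * (inflow - leak) / area
        if stage > depth_max:                                # excess leaves as overflow
            overflow_vol += (stage - depth_max) * area
            stage = depth_max
        stage = max(stage, 0.0)
    return stage, overflow_vol

stage = 0.0
for hour, q_in in enumerate([0.02, 0.05, 0.08, 0.03, 0.0, 0.0]):   # storm inflow (m3/s)
    stage, spilled = advance(stage, q_in, dt_outer=3600.0)
    print(f"hour {hour}: stage = {stage:.2f} m, overflow = {spilled:.1f} m3")
```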
NASA Astrophysics Data System (ADS)
Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.
2014-04-01
When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
NASA Astrophysics Data System (ADS)
Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.
2013-11-01
When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the human agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
NASA Astrophysics Data System (ADS)
Lemarié, F.; Debreu, L.
2016-02-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost. To our knowledge no unconditionally stable scheme with such high order accuracy in time and space has been presented so far in the literature. Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
Brakenridge, C L; Fjeldsoe, B S; Young, D C; Winkler, E A H; Dunstan, D W; Straker, L M; Healy, G N
2016-11-04
Office workers engage in high levels of sitting time. Effective, context-specific, and scalable strategies are needed to support widespread sitting reduction. This study aimed to evaluate organisational-support strategies alone or in combination with an activity tracker to reduce sitting in office workers. From one organisation, 153 desk-based office workers were cluster-randomised (by team) to organisational support only (e.g., manager support, emails; 'Group ORG', 9 teams, 87 participants), or organisational support plus LUMOback activity tracker ('Group ORG + Tracker', 9 teams, 66 participants). The waist-worn tracker provided real-time feedback and prompts on sitting and posture. ActivPAL3 monitors were used to ascertain primary outcomes (sitting time during work- and overall hours) and other activity outcomes: prolonged sitting time (≥30 min bouts), time between sitting bouts, standing time, stepping time, and number of steps. Health and work outcomes were assessed by questionnaire. Changes within each group (three- and 12 months) and differences between groups were analysed by linear mixed models. Missing data were multiply imputed. At baseline, participants (46 % women, 23-58 years) spent (mean ± SD) 74.3 ± 9.7 % of their workday sitting, 17.5 ± 8.3 % standing and 8.1 ± 2.7 % stepping. Significant (p < 0.05) reductions in sitting time (both work and overall) were observed within both groups, but only at 12 months. For secondary activity outcomes, Group ORG significantly improved in work prolonged sitting, time between sitting bouts and standing time, and overall prolonged sitting time (12 months), and in overall standing time (three- and 12 months); while Group ORG + Tracker significantly improved in work prolonged sitting, standing, stepping and overall standing time (12 months). Adjusted for confounders, the only significant between-group differences were a greater stepping time and step count for Group ORG + Tracker relative to Group ORG (+20.6 min/16 h day, 95 % CI: 3.1, 38.1, p = 0.021; +846.5 steps/16 h day, 95 % CI: 67.8, 1625.2, p = 0.033) at 12 months. Observed changes in health and work outcomes were small and not statistically significant. Organisational-support strategies with or without an activity tracker resulted in improvements in sitting, prolonged sitting and standing; adding a tracker enhanced stepping changes. Improvements were most evident at 12 months, suggesting the organisational-support strategies may have taken time to embed within the organisation. Australian New Zealand Clinical Trial Registry: ACTRN12614000252617 . Registered 10 March 2014.
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
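To make the Prony-series idea above concrete, the sketch below evaluates the linear viscoelastic hereditary (Volterra) integral with a Prony-series relaxation modulus using simple time increments. The modulus coefficients, relaxation times and strain history are illustrative assumptions; the NDEM extension with nonlinear stress effects is not reproduced here.

```python
# Minimal sketch: linear viscoelastic stress from the hereditary integral
#   sigma(t) = int_0^t E(t - s) d(eps)/ds ds,  E(t) = E_inf + sum_i E_i exp(-t/tau_i),
# evaluated with simple time increments.  Coefficients and the strain history
# are illustrative; the nonlinear (NDEM) extension is not reproduced here.
import numpy as np

E_inf = 1.0
E_i = np.array([2.0, 1.0, 0.5])          # Prony coefficients
tau_i = np.array([0.1, 1.0, 10.0])       # relaxation times (s)

def relaxation_modulus(t):
    return E_inf + np.sum(E_i * np.exp(-np.outer(t, 1.0 / tau_i)), axis=1)

dt = 0.01
t = np.arange(0.0, 20.0, dt)
eps = np.minimum(t / 1.0, 1.0) * 0.01    # ramp strain to 1% over 1 s, then hold

deps = np.diff(eps, prepend=0.0)         # strain increments per time step
E_hist = relaxation_modulus(t)
# hereditary integral as a discrete convolution of E(t - s) with strain increments
sigma = np.array([np.sum(E_hist[n::-1] * deps[:n + 1]) for n in range(len(t))])

print("stress at end of ramp      :", round(sigma[int(1.0 / dt)], 4))
print("relaxed stress at t = 20 s :", round(sigma[-1], 4))
```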
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachary M. Prince; Jean C. Ragusa; Yaqi Wang
Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples show it to be a viable and effective method for highly transient cases. More complex cases are planned to further test the method and its implementation.
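The micro-step side of the IQS factorization described above is an amplitude solve of the point reactor kinetics equations (PRKE) with small time steps. The sketch below integrates a one-delayed-group PRKE with explicit micro steps; the kinetics parameters and the step reactivity insertion are illustrative assumptions, not values from the RATTLESNAKE/MOOSE implementation.

```python
# Minimal sketch of the micro-step side of IQS: integrating the point reactor
# kinetics equations (PRKE) for the amplitude with small explicit time steps.
# One delayed-neutron group and a step reactivity insertion are illustrative.
beta, lam, Lambda = 0.0065, 0.08, 1e-5     # delayed fraction, decay constant (1/s), generation time (s)
rho = 0.001                                # step reactivity insertion (~0.15 $)

p, c = 1.0, beta / (lam * Lambda)          # amplitude and precursor at equilibrium
dt, n_micro = 1e-6, 200000                 # small (micro) time steps, 0.2 s in total
for _ in range(n_micro):
    dp = ((rho - beta) / Lambda) * p + lam * c
    dc = (beta / Lambda) * p - lam * c
    p += dt * dp
    c += dt * dc
print(f"amplitude after {n_micro * dt:.2f} s: {p:.3f}")
```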
Effectiveness of Mathetics in Achievement in Chemistry at Higher Secondary Level
ERIC Educational Resources Information Center
Elias, Jijish
2009-01-01
Psychology and technology are applied to the learning process when programmed learning modules are used. Programmed learning proceeds in small steps, which leads to mastery. The modules help learners respond actively and receive immediate feedback, and learners can go through the lessons at their own pace…
Socioeconomic Indicators for Small Towns. Small Town Strategy.
ERIC Educational Resources Information Center
Oregon State Univ., Corvallis. Cooperative Extension Service.
Prepared to help small towns assess community population and economic trends, this publication provides a step-by-step guide for establishing an on-going local data collection system, which is based on four local indicators and will provide accurate, up-to-date estimates of population, family income, and gross sales within a town's trade area. The…
Neural correlates of gait variability in people with multiple sclerosis with fall history.
Kalron, Alon; Allali, Gilles; Achiron, Anat
2018-05-28
Investigate the association between step time variability and related brain structures in accordance with fall status in people with multiple sclerosis (PwMS). The study included 225 PwMS. A whole-brain MRI was performed by a high-resolution 3.0-Tesla MR scanner in addition to volumetric analysis based on 3D T1-weighted images using the FreeSurfer image analysis suite. Step time variability was measured by an electronic walkway. Participants were defined as "fallers" (at least two falls during the previous year) and "non-fallers". One hundred and five PwMS were defined as fallers and had a greater step time variability compared to non-fallers (5.6% (S.D.=3.4) vs. 3.4% (S.D.=1.5); p=0.001). MS fallers exhibited a reduced volume in the left caudate and both cerebellum hemispheres compared to non-fallers. Using linear regression analysis, no association was found between gait variability and related brain structures in the total cohort and the non-faller group. However, the analysis found an association between the left hippocampus and left putamen volumes and step time variability in the faller group (p=0.031 and 0.048, respectively), controlling for total cranial volume, walking speed, disability, age and gender. Nevertheless, according to the hierarchical regression model, the contribution of these brain measures to predicting gait variability was relatively small compared to walking speed. An association between low left hippocampal and putamen volumes and step time variability was found in PwMS with a history of falls, suggesting that brain structural characteristics may be related to falls and increased gait variability in PwMS. This article is protected by copyright. All rights reserved.
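Step time variability as used in this study is typically expressed as a coefficient of variation of the individual step times. The short sketch below computes it for two synthetic step-time records whose noise levels loosely mimic the faller vs non-faller contrast reported above; the mean step time and sample sizes are illustrative assumptions.

```python
# Step time variability expressed as a coefficient of variation (CV, %) of the
# individual step times.  The two noise levels loosely mimic the faller /
# non-faller contrast reported in the study and are illustrative only.
import numpy as np

def step_time_cv(step_times_s):
    step_times_s = np.asarray(step_times_s, dtype=float)
    return 100.0 * step_times_s.std(ddof=1) / step_times_s.mean()

rng = np.random.default_rng(42)
faller = rng.normal(0.55, 0.55 * 0.056, 60)       # mean 0.55 s, ~5.6% variability
non_faller = rng.normal(0.55, 0.55 * 0.034, 60)   # ~3.4% variability
print(f"faller CV     : {step_time_cv(faller):.1f}%")
print(f"non-faller CV : {step_time_cv(non_faller):.1f}%")
```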
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In the current submm and mm cosmology experiments the focal planes are populated by kilopixel transition edge sensors (TESes). Varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays and the resulting transient heating of the bath reduces the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. This exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current and knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method, the estimate of the TES %Rn.
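As a heavily simplified picture of the first step described above (estimating the TES %Rn from its response to a small bias square wave), the sketch below treats the TES and its shunt resistor as a purely resistive current divider and ignores electrothermal feedback and circuit inductance. The shunt value, normal resistance and measured current ratio are illustrative assumptions, not ACT parameters.

```python
# Heavily simplified %Rn estimate from a bias step: treat the TES + shunt as a
# resistive current divider and ignore electrothermal feedback and inductance.
# Shunt value, normal resistance and the measured ratio are illustrative only.
R_sh = 0.7e-3        # shunt resistance (ohm), assumed
R_n = 8.0e-3         # TES normal resistance (ohm), assumed

def percent_rn_from_bias_step(dI_tes, dI_bias):
    """Estimate %Rn from the TES current response dI_tes to a bias step dI_bias."""
    ratio = dI_tes / dI_bias                 # current-divider ratio R_sh / (R_sh + R_tes)
    r_tes = R_sh * (1.0 / ratio - 1.0)       # invert the divider for the TES resistance
    return 100.0 * r_tes / R_n

# e.g. a 10 uA bias square wave producing a 1.6 uA TES current response
print(f"estimated %Rn: {percent_rn_from_bias_step(1.6e-6, 10e-6):.0f}%")
```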
Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V; Hu, Bin
2017-02-01
Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs off the iPod Touch which calculates step height (SH) in real-time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real-time through wireless headphones upon maintenance of repeated large amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P<0.01). Wearable device technology can be used to enable musically-contingent SIP training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients.
Local symmetries and order-disorder transitions in small macroscopic Wigner islands.
Coupier, Gwennou; Guthmann, Claudine; Noat, Yves; Jean, Michel Saint
2005-04-01
The influence of local order on the disordering scenario of small Wigner islands is discussed. A first disordering step is put in evidence by the time correlation functions and is linked to individual excitations resulting in configuration transitions, which are very sensitive to the local symmetries. This is followed by two other transitions, corresponding to orthoradial and radial diffusion, for which both individual and collective excitations play a significant role. Finally, we show that, contrary to large systems, the focus that is commonly made on collective excitations for such small systems through the Lindemann criterion has to be made carefully in order to clearly identify the relative contributions in the whole disordering process.
Crystallization Physics in Biomacromolecular Systems
NASA Technical Reports Server (NTRS)
Chernov, A. A.
2003-01-01
The crystals are built of molecules of protein, nucleic acid and their complexes, like viruses, approximately 5 x 10^3 to 3 x 10^6 Da in weight and 2 to 20 nm in effective diameter. This size strongly exceeds the action range of molecular forces and makes a big difference compared with inorganic crystals. Intermolecular contacts form patches on the biomacromolecular surface. Each patch may occupy only a small percentage of the whole surface and vary from polymorph to polymorph of the same protein. Thus, under different conditions (pH, solution chemistry, temperature), any area on the macromolecular surface may form a contact. The crystal Young moduli, E approximately 0.1 to 0.5 GPa, are more than 10 times lower than those of inorganics and of the biomolecules themselves. Water within biocrystals (30-70%) is unable to flow unless the typical deformation time is longer than approximately 10^-5 s. This explains the discrepancy between light scattering and static measurements of E. Nucleation and growth typically require concentrations exceeding the equilibrium ones by up to 100 times, because the new size scale results in kinetic coefficients 10 to 10^3 times lower than those needed for inorganic solution growth. All phenomena observed in the latter occur with protein crystallization and are even better studied by AFM. Crystals are typically facetted. Among the unexpected findings of general significance are the following: the net molecular exchange flux at kinks is much lower than that expected from supersaturation; steps with low (< approximately 10^-2) kink density follow the Gibbs-Thomson law only at very low supersaturations; and the step segment growth rate may be independent of step energy. Crystal perfection is a must for biocrystallization to achieve the major goal of finding the 3-D atomic structure of biomacromolecules by x-ray diffraction. Poor diffraction resolution (>3 Angstrom) makes crystallization a bottleneck for structural biology. All defects typical of small-molecule crystals are found in biocrystals, but the defects responsible for poor resolution are not identified. Conformational changes are one of them. Biocrystallization in microgravity reportedly results in better crystals in 20% of cases. The mechanism by which the lack of convection can do this is still not clear. Lower supersaturation, self-purification from preferentially trapped homologous impurities and step bunching are viable hypotheses.
NASA Astrophysics Data System (ADS)
Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.
2017-12-01
Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials and the background material has been degraded non-uniformly because of environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures. However, building a wave velocity model that fully resolves small, high-contrast defect(s) is difficult because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of large-scale background variations of the engineered structure. To reduce computational cost and improve detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck and in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, enhancing computation power, and optimizing the imaging workflow itself so that the procedure can efficiently be applied to geometrically complex 3D solids and waveguides. Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can be transferred to applications at regional scales as well.
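The RTM step of the two-stage workflow described above rests on a zero-lag cross-correlation imaging condition between the forward-propagated source wavefield and the back-propagated receiver wavefield. The sketch below applies that imaging condition to precomputed wavefield arrays; the wavefields themselves are random stand-ins, since the finite-difference propagation is outside the scope of the snippet.

```python
# Zero-lag cross-correlation imaging condition used in reverse-time migration:
#   image(x, z) = sum_t S(x, z, t) * R(x, z, t)
# The wavefields below are random stand-ins for a forward-propagated source
# wavefield and a back-propagated receiver wavefield from a separate FD solver.
import numpy as np

nt, nx, nz = 200, 100, 60
rng = np.random.default_rng(7)
src_wavefield = rng.standard_normal((nt, nx, nz))   # S(x, z, t)
rec_wavefield = rng.standard_normal((nt, nx, nz))   # R(x, z, t), time-reversed data

# zero-lag cross-correlation over time, with a simple source-illumination
# normalisation to reduce the imprint of the source wavefield amplitude
image = np.sum(src_wavefield * rec_wavefield, axis=0)
illumination = np.sum(src_wavefield ** 2, axis=0) + 1e-12
image_normalised = image / illumination

print("image shape:", image_normalised.shape)
```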
ctsGE-clustering subgroups of expression data.
Sharabi-Schwager, Michal; Or, Etti; Ophir, Ron
2017-07-01
A pre-requisite to clustering noisy data, such as gene-expression data, is the filtering step. As an alternative to this step, the ctsGE R-package applies a sorting step in which all of the data are divided into small groups. The groups are divided according to how the time points are related to the time-series median. Then clustering is performed separately on each group. Thus, the clustering is done in two steps. First, an expression index (i.e. a sequence of 1, -1 and 0) is defined and genes with the same index are grouped together, and then each group of genes is clustered by k-means to create subgroups. The ctsGE package also provides an interactive tool to visualize and explore the gene-expression patterns and their subclusters. ctsGE proposes a way of organizing and exploring expression data without eliminating valuable information. Freely available as part of the Bioconductor project at https://bioconductor.org/packages/ctsGE/ . ron@agri.gov.il. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
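The ctsGE package is an R/Bioconductor tool, but the indexing idea is easy to mirror: each gene's time series is converted into a sequence of 1, -1 and 0 relative to its median (within a cutoff), genes sharing an index are grouped, and each group is then subclustered with k-means. The sketch below reproduces that two-step logic in Python with scikit-learn; the cutoff, minimum group size and cluster count are illustrative assumptions, not the package defaults.

```python
# Two-step grouping in the spirit of ctsGE: (1) build an expression index of
# 1 / -1 / 0 per time point relative to the series median, (2) k-means within
# each index group.  Cutoff, group-size threshold and k are illustrative only.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

def expression_index(profile, cutoff=0.7):
    z = (profile - np.median(profile)) / (profile.std() + 1e-12)
    return tuple(np.where(z > cutoff, 1, np.where(z < -cutoff, -1, 0)))

rng = np.random.default_rng(0)
genes = rng.standard_normal((300, 8))              # 300 genes, 8 time points

groups = defaultdict(list)
for g, profile in enumerate(genes):
    groups[expression_index(profile)].append(g)    # step 1: group by index

subclusters = {}
for index, members in groups.items():
    if len(members) >= 6:                          # step 2: k-means inside each group
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(genes[members])
        subclusters[index] = labels

print("number of index groups:", len(groups))
print("groups large enough to subcluster:", len(subclusters))
```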
One Step Back, Two Steps Forward: An Analytical Framework for Airpower in Small Wars
2006-06-01
Small wars are conflicts at the sub-state level in which the political and diplomatic context, and not the military dimension alone, shapes the use of airpower. Subject terms: airpower, small war, Leites and Wolf, insurgency.
Kang, Minji; Hwang, Hansu; Park, Won-Tae; Khim, Dongyoon; Yeo, Jun-Seok; Kim, Yunseul; Kim, Yeon-Ju; Noh, Yong-Young; Kim, Dong-Yu
2017-01-25
We report on the fabrication of an organic thin-film semiconductor formed using a blend solution of soluble ambipolar small molecules and an insulating polymer binder that exhibits vertical phase separation and uniform film formation. The semiconductor thin films are produced in a single step from a mixture containing a small molecular semiconductor, namely, quinoidal biselenophene (QBS), and a binder polymer, namely, poly(2-vinylnaphthalene) (PVN). Organic field-effect transistors (OFETs) based on QBS/PVN blend semiconductor are then assembled using top-gate/bottom-contact device configuration, which achieve almost four times higher mobility than the neat QBS semiconductor. Depth profile via secondary ion mass spectrometry and atomic force microscopy images indicate that the QBS domains in the films made from the blend are evenly distributed with a smooth morphology at the bottom of the PVN layer. Bias stress test and variable-temperature measurements on QBS-based OFETs reveal that the QBS/PVN blend semiconductor remarkably reduces the number of trap sites at the gate dielectric/semiconductor interface and the activation energy in the transistor channel. This work provides a one-step solution processing technique, which makes use of soluble ambipolar small molecules to form a thin-film semiconductor for application in high-performance OFETs.
Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping
2016-02-11
Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
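The core of the SMART idea is simply to pick, for each precursor-to-fragment transition, the collision energy whose stepped MS(All) scan gives the largest extracted-ion intensity. The sketch below performs that selection on a toy table of intensities; the CE grid, transition names and values are illustrative assumptions, and real data would come from the vendor's raw-file reader rather than a dictionary.

```python
# SMART-style collision energy selection: for each precursor -> fragment pair,
# pick the stepped-CE scan with the largest extracted-ion intensity.
# The CE grid, transitions and intensities are toy stand-ins for sMS(All) data.
ce_steps = [10, 20, 30, 40, 50]                    # stepped collision energies (eV)

# extracted-ion intensities per transition at each CE step (illustrative values)
xic = {
    "component A 303->153": [1e3, 8e3, 2.1e4, 1.5e4, 4e3],
    "component B 417->297": [5e2, 6e3, 1.2e4, 2.5e4, 1.8e4],
}

for transition, intensities in xic.items():
    best = max(range(len(ce_steps)), key=lambda i: intensities[i])
    print(f"{transition}: optimal CE ~ {ce_steps[best]} eV "
          f"(XIC intensity {intensities[best]:.1e})")
```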
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like Zwicky Transient Facility and Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
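To make the "single network replaces the whole subtraction pipeline" idea concrete, the sketch below wires up a tiny fully convolutional network in PyTorch that maps a stacked (new, reference) image pair to a single-channel detection map. The architecture, sizes and data are illustrative assumptions and bear no relation to the network actually trained by the authors.

```python
# Tiny fully convolutional sketch: a stacked (new, reference) image pair goes in,
# a single-channel transient/detection map comes out.  Architecture and sizes are
# illustrative only and are unrelated to the network trained by the authors.
import torch
import torch.nn as nn

class TinyDiffNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, new_img, ref_img):
        x = torch.cat([new_img, ref_img], dim=1)   # stack science and reference frames
        return self.net(x)

model = TinyDiffNet()
new_img = torch.randn(4, 1, 64, 64)                # batch of 4 cutouts, 64x64 pixels
ref_img = torch.randn(4, 1, 64, 64)
detection_map = model(new_img, ref_img)
print(detection_map.shape)                         # torch.Size([4, 1, 64, 64])
```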
Granger, Catherine L; Denehy, Linda; McDonald, Christine F; Irving, Louis; Clark, Ross A
2014-11-01
Increasingly, physical activity (PA) is being recognized as an important outcome in non-small cell lung cancer (NSCLC). We investigated PA using novel global positioning system (GPS) tracking in individuals with NSCLC and a group of similar-aged healthy individuals. A prospective cross-sectional multicenter study. Fifty individuals with NSCLC from 3 Australian tertiary hospitals and 35 similar-aged healthy individuals without cancer were included. Individuals with NSCLC were assessed pretreatment. Primary measures were triaxial accelerometry (steps/day) and GPS tracking (outdoor PA behavior). Secondary measures were questionnaires assessing depression, motivation to exercise, and environmental barriers to PA. Between-group comparisons were analyzed using analysis of covariance. Individuals with NSCLC engaged in significantly less PA than similar-aged healthy individuals (mean difference 2363 steps/day, P = .007) and had higher levels of depression (P = .027) and lower motivation to exercise (P = .001). Daily outdoor walking time (P = .874) and distance travelled away from home (P = .883) were not different between groups. Individuals with NSCLC spent less time outdoors in their local neighborhood area (P < .001). A greater number of steps per day was seen in patients who were less depressed (r = .39) or had better access to nonresidential destinations such as shopping centers (r = .25). Global positioning system tracking appears to be a feasible methodology for adult cancer patients and holds promise for use in future studies investigating PA and/or lifestyle behaviors. © The Author(s) 2014.
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
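The sketch below illustrates, in Python/NumPy rather than the paper's MATLAB package, what a vectorized staggered-grid derivative looks like: one slicing expression updates every node of the field at once, with no explicit loop over grid points. The stencil order and grid sizes are arbitrary placeholders.

```python
# Illustrative sketch (not the FDwave implementation): a vectorized
# staggered-grid first-derivative operator that updates all nodes of a 2D
# field in one array expression instead of looping over nodes.
import numpy as np

def dx_staggered(f, dx):
    """Forward difference along axis 0 on a staggered grid, fully vectorized."""
    d = np.zeros_like(f)
    d[:-1, :] = (f[1:, :] - f[:-1, :]) / dx   # whole-field update via slicing
    return d

nx, nz, dx = 300, 200, 5.0
f = np.random.rand(nx, nz)
print(dx_staggered(f, dx).shape)   # (300, 200): all nodes updated in one call
```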
Steps to Starting a Small Business. Student Notebook.
ERIC Educational Resources Information Center
Wisconsin Univ., Madison. Vocational Studies Center.
This student notebook provides student materials for a program of study made up of a series of community-based activities potentially leading to the start-up of one's own business, while at the same time providing a better understanding of the American economic system of free enterprise. It begins with a glossary. For each of 15 units (1 more unit…
Infantry Small-Unit Mountain Operations
2011-02-01
expended to traverse it. Unique sustainment solutions. Sustainment in a mountain environment is a challenging and time-consuming process. Terrain... a particular environment during the intelligence preparation of the battlefield (IPB) process and provide the analysis to the company. The IPB... consists of a four-step process that includes: defining the operational environment, describing environmental effects on operations, evaluating the
Lewis, L K; Rowlands, A V; Gardiner, P A; Standage, M; English, C; Olds, T
2016-03-01
This study aimed to evaluate the preliminary effectiveness and feasibility of a theory-informed program to reduce sitting time in older adults. Pre-experimental (pre-post) study. Thirty non-working adult (≥ 60 years) participants attended a one hour face-to-face intervention session and were guided through: a review of their sitting time; normative feedback on sitting time; and setting goals to reduce total sitting time and bouts of prolonged sitting. Participants chose six goals and integrated one per week incrementally for six weeks. Participants received weekly phone calls. Sitting time and bouts of prolonged sitting (≥ 30 min) were measured objectively for seven days (activPAL3c inclinometer) pre- and post-intervention. During these periods, a 24-h time recall instrument was administered by computer-assisted telephone interview. Participants completed a post-intervention project evaluation questionnaire. Paired t tests with sequential Bonferroni corrections and Cohen's d effect sizes were calculated for all outcomes. Twenty-seven participants completed the assessments (71.7 ± 6.5 years). Post-intervention, objectively-measured total sitting time was significantly reduced by 51.5 min per day (p=0.006; d=-0.58) and number of bouts of prolonged sitting by 0.8 per day (p=0.002; d=-0.70). Objectively-measured standing increased by 39 min per day (p=0.006; d=0.58). Participants self-reported spending 96 min less per day sitting (p<0.001; d=-0.77) and 32 min less per day watching television (p=0.005; d=-0.59). Participants were highly satisfied with the program. The 'Small Steps' program is a feasible and promising avenue for behavioral modification to reduce sitting time in older adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Mission Assignment Model and Simulation Tool for Different Types of Unmanned Aerial Vehicles
2008-09-01
TABLE OF ABBREVIATIONS AND ACRONYMS: AAA Anti-Aircraft Artillery; ATO Air Tasking Order; BDA Battle Damage Assessment; DES Discrete Event Simulation... clock is advanced in small, fixed time steps. Since the value of simulated time is important in DES, an internal variable, called the simulation clock... VEHICLES Yücel Alver, Captain, Turkish Air Force, B.S., Turkish Air Force Academy, 2000; Murat Özdoğan, 1st Lieutenant, Turkish Air Force, B.S., Turkish
Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors
2008-03-13
the Doctoral Thesis Committee of the doctoral student. 3.0 Technical Background: A strong incentive exists to reduce airfoil count in aircraft engine... Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried... on the vane to rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iteration at each time
Falling coupled oscillators and trigonometric sums
NASA Astrophysics Data System (ADS)
Holcombe, S. R.
2018-02-01
A method for evaluating finite trigonometric summations is applied to a system of N coupled oscillators under acceleration. Initial motion of the nth particle is shown to be of the order T^{2n+2} for small time T, and the end particle in the continuum limit is shown to initially remain stationary for the time it takes a wavefront to reach it. The average velocities of particles at the ends of the system are shown to take discrete values in a step-like manner.
NASA Technical Reports Server (NTRS)
Getty, Stephanie A.; Brinckerhoff, William B.; Li, Xiang; Elsila, Jamie; Cornish, Timothy; Ecelberger, Scott; Wu, Qinghao; Zare, Richard
2014-01-01
Two-step laser desorption mass spectrometry is a technique well suited to the analysis of high-priority classes of organics, such as polycyclic aromatic hydrocarbons, present in complex samples. The use of decoupled desorption and ionization laser pulses allows for sensitive and selective detection of structurally intact organic species. We have recently demonstrated the implementation of this advancement in laser mass spectrometry in a compact, flight-compatible instrument that could feasibly be the centerpiece of an analytical science payload as part of a future spaceflight mission to a small body or icy moon.
Markopoulos, Georgios
2012-01-01
This review describes, for the first time, the preparation, structural properties and use of bisallenes in organic synthesis. All classes of compounds containing at least two allene moieties are considered, starting from simple conjugated bisallenes and ending with allenes in which the two cumulenic units are connected by complex polycyclic ring systems, heteroatoms and/or heteroatom-containing tethers. Preparatively, bisallenes are especially useful in isomerization and cycloaddition reactions of all kinds, leading to the respective target molecules with high atom economy and often in high yield. Bisallenes are hence substrates for generating molecular complexity in a small number of steps (high step economy). PMID:23209534
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm, which then becomes a completely self-adaptive procedure, were derived. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
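A toy Python sketch of the multilevel idea, under the strong simplification of a 1D periodic field and placeholder update rules: the field is split into large and small scales in Fourier space, and the small scales are advanced only every few steps because their variation per iteration is comparatively negligible. This is not the nonlinear Galerkin algorithm of the paper.

```python
# Hedged sketch of a multilevel update: split a periodic 1D field into large
# and small scales spectrally and update the small scales less frequently.
import numpy as np

def split_scales(u_hat, cutoff):
    """Split spectral coefficients into large (|k| <= cutoff) and small scales."""
    k = np.fft.fftfreq(u_hat.size) * u_hat.size
    large = np.where(np.abs(k) <= cutoff, u_hat, 0.0)
    return large, u_hat - large

n, cutoff, nsteps, small_every = 128, 16, 20, 4
u_hat = np.fft.fft(np.random.rand(n))

for step in range(nsteps):
    large, small = split_scales(u_hat, cutoff)
    large = 0.99 * large                   # placeholder "large-scale" update every step
    if step % small_every == 0:
        small = 0.95 * small               # small scales advanced only every few steps
    u_hat = large + small

print("field norm after integration:", np.linalg.norm(np.fft.ifft(u_hat).real))
```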
Creation of a small high-throughput screening facility.
Flak, Tod
2009-01-01
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
Habchi, Johnny; Chia, Sean; Limbocker, Ryan; Mannini, Benedetta; Ahn, Minkoo; Perni, Michele; Hansson, Oskar; Arosio, Paolo; Kumita, Janet R; Challa, Pavan Kumar; Cohen, Samuel I A; Linse, Sara; Dobson, Christopher M; Knowles, Tuomas P J; Vendruscolo, Michele
2017-01-10
The aggregation of the 42-residue form of the amyloid-β peptide (Aβ42) is a pivotal event in Alzheimer's disease (AD). The use of chemical kinetics has recently enabled highly accurate quantifications of the effects of small molecules on specific microscopic steps in Aβ42 aggregation. Here, we exploit this approach to develop a rational drug discovery strategy against Aβ42 aggregation that uses as a read-out the changes in the nucleation and elongation rate constants caused by candidate small molecules. We thus identify a pool of compounds that target specific microscopic steps in Aβ42 aggregation. We then test further these small molecules in human cerebrospinal fluid and in a Caenorhabditis elegans model of AD. Our results show that this strategy represents a powerful approach to identify systematically small molecule lead compounds, thus offering an appealing opportunity to reduce the attrition problem in drug discovery.
Two-step phase-shifting SPIDER
NASA Astrophysics Data System (ADS)
Zheng, Shuiqin; Cai, Yi; Pan, Xinjian; Zeng, Xuanke; Li, Jingzhen; Li, Ying; Zhu, Tianlong; Lin, Qinggang; Xu, Shixiang
2016-09-01
Comprehensive characterization of ultrafast optical fields is critical for ultrashort pulse generation and its applications. This paper combines two-step phase-shifting (TSPS) with spectral phase interferometry for direct electric-field reconstruction (SPIDER) to improve the reconstruction of ultrafast optical fields. This novel SPIDER experimentally removes the dc portion that occurs in the traditional SPIDER method by recording two spectral interferograms with a π phase shift. As a result, the reconstructed results are much less disturbed by the time delay between the test pulse replicas and the temporal width of the filter window, and are thus more reliable. What is more, this SPIDER can work efficiently even when the time delay is so small or the measured bandwidth so narrow that strong overlap occurs between the dc and ac portions, which allows it to characterize test pulses with complicated temporal/spectral structures or narrow bandwidths.
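The dc-removal step can be illustrated numerically: with two interferograms recorded π out of phase, their difference cancels the dc background and retains only the ac interference term used for reconstruction. The signal model below is a made-up example, not measured data.

```python
# Numeric illustration of two-step phase shifting: subtracting two
# pi-shifted interferograms cancels the dc term and leaves the ac portion.
import numpy as np

omega = np.linspace(-5, 5, 1001)                 # arbitrary frequency axis
dc = 1.0 + 0.2 * np.exp(-omega**2)               # hypothetical dc background
phi = 2.0 * omega + 0.3 * omega**2               # hypothetical spectral phase
ac = 0.5 * np.exp(-0.1 * omega**2)               # hypothetical ac envelope

I1 = dc + ac * np.cos(phi)                       # first interferogram
I2 = dc + ac * np.cos(phi + np.pi)               # pi-shifted interferogram
ac_only = 0.5 * (I1 - I2)                        # = ac * cos(phi), dc removed

print(np.allclose(ac_only, ac * np.cos(phi)))    # True
```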
Generalized Green's function molecular dynamics for canonical ensemble simulations
NASA Astrophysics Data System (ADS)
Coluci, V. R.; Dantas, S. O.; Tewary, V. K.
2018-05-01
The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.
A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease
Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V.; Hu, Bin
2017-01-01
Background: Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). Methods: This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs on an iPod Touch and calculates step height (SH) in real time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real time through wireless headphones upon maintenance of repeated large-amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. Results: While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P<0.01). Conclusion: Wearable device technology can be used to enable musically-contingent SIP training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients. PMID:28151878
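A toy version of the feedback rule described above (not the actual app logic): playback continues only while recent step heights stay above an amplitude threshold, and stops after small or shuffling steps until large stepping resumes. The threshold and window values are invented for illustration.

```python
# Hypothetical step-height-contingent playback rule: audio plays only while
# the last few step heights all exceed a threshold; small steps stop it.
from collections import deque

def playback_states(step_heights_cm, threshold_cm=28.0, window=3):
    recent = deque(maxlen=window)
    states = []
    for h in step_heights_cm:
        recent.append(h)
        states.append(all(s >= threshold_cm for s in recent))
    return states

steps = [35, 34, 33, 20, 18, 31, 36, 37]          # hypothetical step heights (cm)
print(playback_states(steps))   # playback stops after the small (20, 18 cm) steps
```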
Membrane Fusion Induced by Small Molecules and Ions
Mondal Roy, Sutapa; Sarkar, Munna
2011-01-01
Membrane fusion is a key event in many biological processes. These processes are controlled by various fusogenic agents, of which proteins and peptides form the principal group. The fusion process is characterized by three major steps: inter-membrane contact; lipid mixing, forming the intermediate step; and pore opening followed by mixing of the inner contents of the cells/vesicles. These steps are governed by energy barriers, which need to be overcome to complete fusion. Structural reorganization of big molecules like proteins/peptides supplies the required driving force to overcome the energy barriers of the different intermediate steps. Small molecules/ions do not share this advantage. Hence fusion induced by small molecules/ions is expected to be different from that induced by proteins/peptides. Although several reviews exist on membrane fusion, no recent review is devoted solely to small molecule/ion-induced membrane fusion. Here we intend to present how a variety of small molecules/ions act as independent fusogens. The detailed mechanisms of some are well understood, but for many they remain an unanswered question. A clearer understanding of how a particular small molecule can control fusion will open up a vista to use these molecules instead of proteins/peptides to induce fusion in both in vivo and in vitro fusion processes. PMID:21660306
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.
2014-09-01
We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio X⊥/X∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = X⊥L∥²/(X∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
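As a purely illustrative stand-in for the operator-split structure (not the Lagrangian field-line solver of the paper), the sketch below advances a 2D field by one explicit perpendicular diffusion step followed by a heavily sub-cycled parallel step, with a strongly anisotropic coefficient ratio. Grid, coefficients and step counts are arbitrary.

```python
# Hedged operator-splitting sketch: perpendicular step, then parallel step,
# with chi_par >> chi_perp. Explicit sub-cycling replaces the field-line
# integral of the actual algorithm.
import numpy as np

def diffuse_1d(T, chi, dt, h, axis):
    """One explicit diffusion step along one axis with periodic boundaries."""
    lap = (np.roll(T, 1, axis) - 2.0 * T + np.roll(T, -1, axis)) / h**2
    return T + chi * dt * lap

n, h, dt = 64, 1.0, 0.1
chi_par, chi_perp = 1.0e3, 1.0e-2            # strongly anisotropic transport
T = np.zeros((n, n)); T[n // 2, n // 2] = 1.0

n_sub = 400                                  # parallel step is heavily sub-cycled here
for _ in range(100):
    T = diffuse_1d(T, chi_perp, dt, h, axis=1)           # perpendicular step (+ sources)
    for _ in range(n_sub):                               # stand-in for the parallel step
        T = diffuse_1d(T, chi_par, dt / n_sub, h, axis=0)
print("domain-integrated T:", T.sum())       # conserved by the periodic scheme
```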
76 FR 19174 - State Trade and Export Promotion (STEP) Pilot Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... SMALL BUSINESS ADMINISTRATION State Trade and Export Promotion (STEP) Pilot Grant Program AGENCY... No. OIT-STEP-2011-01, Modification 1. SUMMARY: Program announcement No. OIT-STEP-2011-01 has been... to the date of application submission for a STEP grant.] Section IV A. 1, Governor's Letter of...
Guerrier, Claire; Holcman, David
2016-10-18
Binding of molecules, ions or proteins to small target sites is a generic step of cell activation. This process relies on rare stochastic events where a particle located in a large bulk has to find small and often hidden targets. We present here a hybrid discrete-continuum model that takes into account a stochastic regime governed by rare events and a continuous regime in the bulk. The rare discrete binding events are modeled by a Markov chain for the encounter of small targets by few Brownian particles, for which the arrival time is Poissonian. The large ensemble of particles is described by mass action laws. We use this novel model to predict the time distribution of vesicular release at neuronal synapses. Vesicular release is triggered by the binding of few calcium ions that can originate either from the synaptic bulk or from entry through calcium channels. We report here that the distribution of release times is bimodal, although release is triggered by a single fast action potential. While the first peak follows the stimulation, the second corresponds to the random arrival, over a much longer time, of ions located in the synaptic terminal at the small vesicular binding targets. To conclude, the present multiscale stochastic modeling approach allows studying cellular events based on integrating discrete molecular events over several time scales.
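A heavily simplified Monte Carlo cartoon of the discrete part of the model: a few ions arrive at a small vesicular target at Poisson-distributed times, drawn from either a fast (channel entry) or slow (bulk) rate, and release is triggered when a threshold number have bound. All rates and probabilities are invented for illustration.

```python
# Toy version of the rare-event part: Poissonian arrivals of a few ions to a
# small target, release when k_release of them have bound. Invented rates.
import numpy as np

rng = np.random.default_rng(0)

def release_time(n_ions=5, k_release=2, fast_rate=50.0, slow_rate=0.5, p_fast=0.6):
    """Time at which k_release ions have arrived; each ion is 'fast'
    (channel entry) with probability p_fast, otherwise 'slow' (bulk)."""
    rates = np.where(rng.random(n_ions) < p_fast, fast_rate, slow_rate)
    arrivals = rng.exponential(1.0 / rates)        # Poissonian arrival times
    return np.sort(arrivals)[k_release - 1]

times = np.array([release_time() for _ in range(20000)])
print("fraction of 'late' releases (> 0.5 time units):", np.mean(times > 0.5))
```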
Short bowel mucosal morphology, proliferation and inflammation at first and repeat STEP procedures.
Mutanen, Annika; Barrett, Meredith; Feng, Yongjia; Lohi, Jouko; Rabah, Raja; Teitelbaum, Daniel H; Pakarinen, Mikko P
2018-04-17
Although serial transverse enteroplasty (STEP) improves function of the dilated short bowel, a significant proportion of patients require repeat surgery. To address underlying reasons for unsuccessful STEP, we compared small intestinal mucosal characteristics between initial and repeat STEP procedures in children with short bowel syndrome (SBS). Fifteen SBS children, who underwent 13 first and 7 repeat STEP procedures with full-thickness small bowel samples at a median age of 1.5 years (IQR 0.7-3.7), were included. The specimens were analyzed histologically for mucosal morphology, inflammation and muscular thickness. Mucosal proliferation and apoptosis were analyzed with MIB1 and TUNEL immunohistochemistry. Median small bowel length increased 42% by initial STEP and 13% by repeat STEP (p=0.05), while enteral caloric intake increased from 6% to 36% (p=0.07) during the 14 (12-42) months between the procedures. Abnormal mucosal inflammation was frequently observed both at initial (69%) and additional STEP (86%, p=0.52) surgery. Villus height, crypt depth, enterocyte proliferation and apoptosis as well as muscular thickness were comparable at first and repeat STEP (p>0.05 for all). Patients who required repeat STEP tended to be younger (p=0.057), with fewer apoptotic crypt cells (p=0.031) at first STEP. Absence of the ileocecal valve was associated with increased intraepithelial leukocyte count and reduced crypt cell proliferation index (p<0.05 for both). No adaptive mucosal hyperplasia or muscular alterations occurred between first and repeat STEP. Persistent inflammation and lacking mucosal growth may contribute to continuing bowel dysfunction in SBS children who require a repeat STEP procedure, especially after removal of the ileocecal valve. Level IV, retrospective study. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might leads to stop the program run due to the huge number of calculating points and to a small paging file size of the MS-Windows virtual memory. In such case, it is recommended to enlarge the paging file size to the appropriate size, or to use a bigger value of integration step.
Brief International Cognitive Assessment for MS (BICAMS): international standards for validation.
Benedict, Ralph H B; Amato, Maria Pia; Boringa, Jan; Brochet, Bruno; Foley, Fred; Fredrikson, Stan; Hamalainen, Paivi; Hartung, Hans; Krupp, Lauren; Penner, Iris; Reder, Anthony T; Langdon, Dawn
2012-07-16
An international expert consensus committee recently recommended a brief battery of tests for cognitive evaluation in multiple sclerosis. The Brief International Cognitive Assessment for MS (BICAMS) battery includes tests of mental processing speed and memory. Recognizing that resources for validation will vary internationally, the committee identified validation priorities, to facilitate international acceptance of BICAMS. Practical matters pertaining to implementation across different languages and countries were discussed. Five steps to achieve optimal psychometric validation were proposed. In Step 1, test stimuli should be standardized for the target culture or language under consideration. In Step 2, examiner instructions must be standardized and translated, including all information from manuals necessary for administration and interpretation. In Step 3, samples of at least 65 healthy persons should be studied for normalization, matched to patients on demographics such as age, gender and education. The objective of Step 4 is test-retest reliability, which can be investigated in a small sample of MS and/or healthy volunteers over 1-3 weeks. Finally, in Step 5, criterion validity should be established by comparing MS and healthy controls. At this time, preliminary studies are underway in a number of countries as we move forward with this international assessment tool for cognition in MS.
Deciding to Decide: How Decisions Are Made and How Some Forces Affect the Process.
McConnell, Charles R
There is a decision-making pattern that applies in all situations, large or small, although in small decisions, the steps are not especially evident. The steps are gathering information, analyzing information and creating alternatives, selecting and implementing an alternative, and following up on implementation. The amount of effort applied in any decision situation should be consistent with the potential consequences of the decision. Essentially, all decisions are subject to certain limitations or constraints, forces, or circumstances that limit one's range of choices. Follow-up on implementation is the phase of decision making most often neglected, yet it is frequently the phase that determines success or failure. Risk and uncertainty are always present in a decision situation, and the application of human judgment is always necessary. In addition, there are often emotional forces at work that can at times unwittingly steer one away from that which is best or most workable under the circumstances and toward a suboptimal result based largely on the desires of the decision maker.
Two-Step Sintering Behavior of Sol-Gel Derived Dense and Submicron-Grained YIG Ceramics
NASA Astrophysics Data System (ADS)
Chen, Ruoyuan; Zhou, Jijun; Zheng, Liang; Zheng, Hui; Zheng, Peng; Ying, Zhihua; Deng, Jiangxia
2018-04-01
In this work, dense and submicron-grain yttrium iron garnet (YIG, Y3Fe5O12) ceramics were fabricated by a two-step sintering (TSS) method using nano-size YIG powder prepared by a citrate sol-gel method. The densification, microstructure, magnetic properties and ferromagnetic resonance (FMR) linewidth of the ceramics were investigated. The sample prepared at T1 = 1300°C, T2 = 1225°C and an 18 h holding time has a density higher than 98% of the theoretical value and exhibits a homogeneous microstructure with fine grain size (0.975 μm). In addition, the saturation magnetization (MS) of this sample reaches 27.18 emu/g. High density and small grain size can also achieve a small FMR linewidth. Consequently, these results show that the sol-gel process combined with the TSS process can effectively suppress grain-boundary migration while maintaining active grain-boundary diffusion to obtain dense and fine-grained YIG ceramics with appropriate magnetic properties.
Pressure-jump small-angle x-ray scattering detected kinetics of staphylococcal nuclease folding.
Woenckhaus, J; Köhling, R; Thiyagarajan, P; Littrell, K C; Seifert, S; Royer, C A; Winter, R
2001-01-01
The kinetics of chain disruption and collapse of staphylococcal nuclease after positive or negative pressure jumps was monitored by real-time small-angle x-ray scattering under pressure. We used this method to probe the overall conformation of the protein by measuring its radius of gyration and pair-distance-distribution function p(r) which are sensitive to the spatial extent and shape of the particle. At all pressures and temperatures tested, the relaxation profiles were well described by a single exponential function. No fast collapse was observed, indicating that the rate limiting step for chain collapse is the same as that for secondary and tertiary structure formation. Whereas refolding at low pressures occurred in a few seconds, at high pressures the relaxation was quite slow, approximately 1 h, due to a large positive activation volume for the rate-limiting step for chain collapse. A large increase in the system volume upon folding implies significant dehydration of the transition state and a high degree of similarity in terms of the packing density between the native and transition states in this system. This study of the time-dependence of the tertiary structure in pressure-induced folding/unfolding reactions demonstrates that novel information about the nature of protein folding transitions and transition states can be obtained from a combination of small-angle x-ray scattering using high intensity synchrotron radiation with the high pressure perturbation technique. PMID:11222312
Transition...One Small Step for a Young Girl, a Giant Leap for an Educational Community
ERIC Educational Resources Information Center
Terry, Shanta
2017-01-01
Can single-gender education foster student success? Can it work in a public setting? These questions have been asked many times over the past few decades. The answers have been inconclusive, with some studies saying that single-gender education truly works and others saying that it does not. The basis of the success or failure of single-gender…
Wakelee, Heather A.; Lee, Ju-Whei; Hanna, Nasser H.; Traynor, Anne M.; Carbone, David P.; Schiller, Joan H.
2012-01-01
Introduction: Sorafenib is a raf kinase and angiogenesis inhibitor with activity in multiple cancers. This phase II study in heavily pretreated non-small cell lung cancer (NSCLC) patients (≥ two prior therapies) utilized a randomized discontinuation design. Methods: Patients received 400 mg of sorafenib orally twice daily for two cycles (two months) (Step 1). Responding patients on Step 1 continued on sorafenib; progressing patients went off study, and patients with stable disease were randomized to placebo or sorafenib (Step 2), with crossover from placebo allowed upon progression. The primary endpoint of this study was the proportion of patients having stable or responding disease two months after randomization. Results: There were 299 patients evaluated for Step 1, with 81 eligible patients randomized on Step 2 who received sorafenib (n=50) or placebo (n=31). The two-month disease control rates following randomization were 54% and 23% for patients initially receiving sorafenib and placebo respectively, p=0.005. The hazard ratio for progression on Step 2 was 0.51 (95% CI 0.30, 0.87, p=0.014) favoring sorafenib. A trend in favor of overall survival with sorafenib was also observed (13.7 versus 9.0 months from time of randomization), HR 0.67 (95% CI 0.40-1.11), p=0.117. A dispensing error occurred which resulted in unblinding of some patients, but not before completion of the 8-week initial Step 2 therapy. Toxicities were manageable and as expected. Conclusions: The results of this randomized discontinuation trial suggest that sorafenib has single-agent activity in a heavily pretreated, enriched patient population with advanced NSCLC. These results support further investigation of sorafenib as a single agent in larger, randomized studies in NSCLC. PMID:22982658
The long-term motion of comet Halley
NASA Technical Reports Server (NTRS)
Yeomans, D. K.; Kiang, T.
1981-01-01
The orbital motion of comet Halley is numerically integrated back to 1404 BC. Starting with an orbit based on the 1759, 1682, and 1607 observations of the comet, the integration was run back in time with full planetary perturbations and nongravitational forces taken into account at each 0.5 day time-step. Small empirical corrections were made to the computed perihelion passage time in 837 and to the osculating orbital eccentricity in 800. In nine cases, the perihelion passage times calculated by Kiang (1971) from Chinese observations have been redetermined, and osculating orbital elements are given at each apparition from 1910 back to 1404 BC.
NASA Technical Reports Server (NTRS)
Eppink, Jenna L.
2017-01-01
Stereo particle image velocimetry measurements were performed downstream of a forward-facing step in a stationary-crossflow dominated flow. Three different step heights were studied with the same leading-edge roughness configuration to determine the effect of the step on the evolution of the stationary-crossflow. Above the critical step height, which is approximately 68% of the boundary-layer thickness at the step, the step caused a significant increase in the growth of the stationary crossflow. For the largest step height studied (68%), premature transition occurred shortly downstream of the step. The stationary crossflow amplitude only reached approximately 7% of U(sub e) in this case, which suggests that transition does not occur via the high-frequency secondary instabilities typically associated with stationary crossflow transition. The next largest step of 60% delta still caused a significant impact on the growth of the stationary crossflow downstream of the step, but the amplitude eventually returned to that of the baseline case, and the transition front remained the same. The smallest step height (56%) only caused a small increase in the stationary crossflow amplitude and no change in the transition front. A final case was studied in which the roughness on the leading edge of the model was enhanced for the lowest step height case to determine the impact of the stationary crossflow amplitude on transition. The stationary crossflow amplitude was increased by approximately four times, which resulted in premature transition for this step height. However, some notable differences were observed in the behavior of the stationary crossflow mode, which indicate that the interaction mechanism which results in the increased growth of the stationary crossflow downstream of the step may be different in this case compared to the larger step heights.
NASA Technical Reports Server (NTRS)
Wyman, D.; Steinman, R. M.
1973-01-01
Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.
A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades
Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd
2017-01-01
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter. PMID:28813566
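The underestimation problem that motivates the generalization can be reproduced with a few lines of Python using the conventional SG filter from SciPy (the proposed generalized filter is not reimplemented here): a small, fast saccade-like step is differentiated, and the smoothed peak velocity comes out noticeably below the true value. The sampling rate, amplitude and window length are hypothetical.

```python
# Illustration of the motivating problem (not the proposed generalized
# filter): a conventional Savitzky-Golay derivative oversmooths a small,
# fast saccade-like step and underestimates its peak velocity.
import numpy as np
from scipy.signal import savgol_filter

fs = 500.0                                             # Hz, hypothetical sampling rate
t = np.arange(0, 0.2, 1.0 / fs)
position = 2.0 / (1.0 + np.exp(-(t - 0.1) * 400.0))    # ~2 deg saccade-like step
true_velocity = np.gradient(position, 1.0 / fs)

sg_velocity = savgol_filter(position, window_length=21, polyorder=3,
                            deriv=1, delta=1.0 / fs)

print("true peak velocity (deg/s):", true_velocity.max())
print("SG-estimated peak velocity:", sg_velocity.max())   # noticeably lower
```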
Small white matter lesion detection in cerebral small vessel disease
NASA Astrophysics Data System (ADS)
Ghafoorian, Mohsen; Karssemeijer, Nico; van Uden, Inge; de Leeuw, Frank E.; Heskes, Tom; Marchiori, Elena; Platel, Bram
2015-03-01
Cerebral small vessel disease (SVD) is a common finding on magnetic resonance images of elderly people. White matter lesions (WML) are important markers not only for small vessel disease, but also for neurodegenerative diseases including multiple sclerosis, Alzheimer's disease and vascular dementia. Volumetric measurements such as the "total lesion load" have been studied and related to these diseases. With respect to SVD we conjecture that small lesions are important, as they have been observed to grow over time and they form the majority of lesions in number. To study these small lesions they need to be annotated, which is a complex and time-consuming task. Existing (semi-)automatic methods have been aimed at volumetric measurements and large lesions, and are not suitable for the detection of small lesions. In this research we established a supervised voxel classification CAD system, optimized and trained to exclusively detect small WMLs. To achieve this, several preprocessing steps were taken, which included a robust standardization of subject intensities to reduce inter-subject intensity variability as much as possible. A number of features that were found to identify small lesions well were calculated, including multimodal intensities, tissue probabilities, several features for accurate location description, a number of second-order derivative features, as well as a multi-scale annular filter for blobness detection. Only small lesions were used to learn the target concept via AdaBoost using random forests as its base classifiers. Finally, the results were evaluated using free-response receiver operating characteristic analysis.
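For the classification stage only, a hedged sketch of boosted random forests on stand-in per-voxel feature vectors is shown below; the feature extraction, intensity standardization and annular blobness filter are not reproduced, and all data are synthetic.

```python
# Hedged sketch of the voxel classification stage: AdaBoost with random
# forest base classifiers on synthetic per-voxel feature vectors.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

rng = np.random.default_rng(1)
n_voxels, n_features = 5000, 12
X = rng.normal(size=(n_voxels, n_features))          # stand-in voxel features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_voxels) > 1.5)

clf = AdaBoostClassifier(RandomForestClassifier(n_estimators=10, max_depth=3),
                         n_estimators=20)
clf.fit(X[:4000], y[:4000])
probs = clf.predict_proba(X[4000:])[:, 1]            # per-voxel lesion likelihood
print("mean predicted lesion probability:", probs.mean())
```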
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
Omelyan, Igor; Kovalenko, Andriy
2015-04-14
We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.
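A very reduced sketch of the extrapolation idea (omitting the non-Eckart-like transformation, basis-set selection and balancing steps of GSFE): the current atomic coordinate is expressed as a least-squares combination of the closest stored coordinates, and the same weights are applied to the stored mean solvation forces. The data and force model below are toy placeholders.

```python
# Heavily simplified force-extrapolation sketch, not the GSFE algorithm:
# weight the closest stored samples by least squares and reuse the weights.
import numpy as np

def extrapolate_force(r_now, r_hist, f_hist, n_best=6):
    """r_now: (3,) current atom position; r_hist: (m,3) stored positions;
    f_hist: (m,3) mean solvation forces converged at previous outer steps."""
    d = np.linalg.norm(r_hist - r_now, axis=1)
    idx = np.argsort(d)[:n_best]                        # closest stored samples
    w, *_ = np.linalg.lstsq(r_hist[idx].T, r_now, rcond=None)
    return f_hist[idx].T @ w                            # same weights applied to forces

rng = np.random.default_rng(2)
r_hist = rng.normal(size=(40, 3))
f_hist = -2.0 * r_hist + 0.01 * rng.normal(size=(40, 3))   # toy "solvation force field"
print(extrapolate_force(np.array([0.1, -0.2, 0.05]), r_hist, f_hist))
```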
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
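The bookkeeping described above can be sketched in a few lines: an expected-count vector over components is propagated through discrete sampling steps using a per-component release probability and a component-to-component transport matrix. All component names and numbers below are placeholders, not mission values.

```python
# Hedged sketch of the Markov-chain bookkeeping: expected VEM counts per
# component propagated through discrete SAH steps. Values are placeholders.
import numpy as np

components = ["coring_bit", "robotic_arm", "sample_tube", "sample"]
expected_vems = np.array([10.0, 5.0, 1.0, 0.0])      # hypothetical initial expected counts

p_release = np.array([0.01, 0.005, 0.001, 0.0])      # per-step release probability
# transport[i, j]: probability that a VEM released from component i reaches component j
transport = np.array([
    [0.0, 0.2, 0.1, 0.05],
    [0.0, 0.0, 0.1, 0.02],
    [0.0, 0.0, 0.0, 0.10],
    [0.0, 0.0, 0.0, 0.00],
])

for _ in range(5):                                   # discrete SAH time steps
    released = expected_vems * p_release             # expected VEMs leaving each component
    expected_vems = expected_vems - released + released @ transport

print(dict(zip(components, expected_vems.round(4))))
print("expected VEMs reaching the sample:", round(float(expected_vems[-1]), 4))
```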
Solvable continuous-time random walk model of the motion of tracer particles through porous media.
Fouxon, Itzhak; Holzner, Markus
2016-08-01
We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015)PLEEE81539-375510.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random and the walker's velocity at consecutive steps of the walk is conserved with finite probability, mimicking that at the turning point there could be no abrupt change of velocity. We provide the Laplace transform of the characteristic function of the walker's position and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, which was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
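A toy Monte Carlo version of this CTRW is easy to write down: each step has a random (channel) length, the velocity is conserved with finite probability or redrawn from a distribution with a tunable small-velocity tail, and the step duration is length over velocity. The distributions below are illustrative, not the experimentally determined ones.

```python
# Toy CTRW: random step length, velocity kept with probability p_keep per
# step (otherwise redrawn), step duration tau = l / v. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def ctrw_position(t_final, p_keep=0.5, alpha=1.0):
    """Walker position at the first time exceeding t_final."""
    x = t = 0.0
    v = rng.power(alpha) + 1e-3          # velocity pdf ~ alpha * v**(alpha-1) on (0, 1]
    while t < t_final:
        l = rng.exponential(1.0)         # random channel (step) length
        if rng.random() > p_keep:        # velocity conserved with probability p_keep
            v = rng.power(alpha) + 1e-3
        x += l                           # step displacement
        t += l / v                       # step duration tau = l / v
    return x

positions = np.array([ctrw_position(100.0) for _ in range(500)])
print("mean displacement:", positions.mean(), "  spread:", positions.std())
```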
Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Chaudhri, Anuj; Lukes, Jennifer R.
2010-02-01
The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than exponential decay at intermediate times, and approaches zero at long times for all five integrators. As friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results at low friction limits. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within range of theoretical predictions and nonequilibrium studies.
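The Green-Kubo post-processing step can be illustrated independently of any DPD integrator: integrate the velocity autocorrelation function of a synthetic, exponentially correlated velocity signal to recover a diffusion coefficient. The Ornstein-Uhlenbeck signal below merely stands in for a DPD particle velocity; all parameters are arbitrary.

```python
# Green-Kubo illustration: D from the integral of the velocity
# autocorrelation of a synthetic Ornstein-Uhlenbeck velocity signal.
import numpy as np

rng = np.random.default_rng(4)
dt, n, gamma, kT_over_m = 0.01, 100000, 2.0, 1.0

v = np.zeros(n)
for i in range(1, n):       # Euler-Maruyama update of the OU velocity
    v[i] = v[i-1] - gamma * v[i-1] * dt + np.sqrt(2.0 * gamma * kT_over_m * dt) * rng.normal()

def vacf(v, max_lag):
    """Velocity autocorrelation <v(0) v(t)> for lags 0..max_lag-1."""
    return np.array([np.mean(v[:v.size - lag] * v[lag:]) for lag in range(max_lag)])

c = vacf(v, 1500)
D = c.sum() * dt            # Green-Kubo: D = integral of <v(0)v(t)> dt (rectangle rule)
print("estimated D:", D, "  theory (kT/m/gamma):", kT_over_m / gamma)
```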
A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann
2003-01-01
A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.
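A hedged 1D toy of split-explicit time marching, standing in for the actual dynamical core: fast wave terms are advanced with several small fractional steps inside each large step, while the slow scalar transport takes the large step. Wave speeds, grid and step counts are arbitrary.

```python
# Illustrative split-explicit sub-cycling: small fractional steps for fast
# waves, one large step for slow scalar transport. Not the actual core.
import numpy as np

n, dx = 200, 1.0
dt_large, n_sub = 1.0, 8                  # small fractional step = dt_large / n_sub
c_fast, u_adv = 10.0, 0.3                 # fast wave speed >> slow advection speed

x = np.arange(n) * dx
h = np.exp(-((x - 100.0) / 10.0) ** 2)    # height-like scalar
u = np.zeros(n)

def ddx(f):
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

dt_small = dt_large / n_sub
for _ in range(50):
    for _ in range(n_sub):                # small fractional steps: fast wave dynamics
        u = u - dt_small * c_fast * ddx(h)
        h = h - dt_small * c_fast * ddx(u)        # uses updated u (symplectic Euler)
    # one large upwind step for the slow scalar transport
    h = h - dt_large * u_adv * (h - np.roll(h, 1)) / dx

print("domain-integrated scalar:", h.sum() * dx)  # conserved by both sub-steps
```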
Financing Your Small Business: A Workbook for Financing Small Business.
ERIC Educational Resources Information Center
Compton, Clark W.
Designed to assist established businesspeople with the development of a loan proposal, this workbook offers information on sources of financing and step-by-step guidance on applying for a loan. After chapter I discusses borrowers' and lenders' attitudes towards money, chapter II offers suggestions for determining financial needs. Chapter III lists…
Huang, Wenyong; Ye, Ronghua; Huang, Shengsong; Wang, Decai; Wang, Lanhua; Liu, Bin; Friedman, David S; He, Mingguang; Liu, Yizhi; Congdon, Nathan G
2013-01-01
The perceived difficulty of steps of manual small incision cataract surgery among trainees in rural China was assessed. Cohort study. Fifty-two trainees at the end of a manual small incision cataract surgery training programme. Participants rated the difficulty of 14 surgical steps using a 5-point scale, 1 (very easy) to 5 (very difficult). Demographic and professional information was recorded for trainees. Mean ratings for surgical steps. Questionnaires were completed by 49 trainees (94.2%, median age 38 years, 8 [16.3%] women). Twenty six (53.1%) had performed ≤50 independent cataract surgeries prior to training. Trainees rated cortical aspiration (mean score ± standard deviation = 3.10 ± 1.14) the most difficult step, followed by wound construction (2.76 ± 1.08), nuclear prolapse into the anterior chamber (2.74 ± 1.23) and lens delivery (2.51 ± 1.08). Draping the surgical field (1.06 ± 0.242), anaesthetic block administration (1.14 ± 0.354) and thermal coagulation (1.18 ± 0.441) were rated easiest. In regression models, the score for cortical aspiration was significantly inversely associated with performing >50 independent manual small incision cataract surgery surgeries during training (P = 0.01), but not with age, gender, years of experience in an eye department or total number of cataract surgeries performed prior to training. Cortical aspiration, wound construction and nuclear prolapse pose the greatest challenge for trainees learning manual small incision cataract surgery, and should receive emphasis during training. Number of cases performed is the strongest predictor of perceived difficulty of key steps. © 2013 The Authors. Clinical and Experimental Ophthalmology © 2013 Royal Australian and New Zealand College of Ophthalmologists.
NASA Astrophysics Data System (ADS)
Schoellhamer, David H.; Manning, Andrew J.; Work, Paul A.
2017-06-01
Erodibility of cohesive sediment in the Sacramento-San Joaquin River Delta (Delta) was investigated with an erosion microcosm. Erosion depths in the Delta and in the microcosm were estimated to be about one floc diameter over a range of shear stresses and times comparable to half of a typical tidal cycle. Using the conventional assumption of horizontally homogeneous bed sediment, data from 27 of 34 microcosm experiments indicate that the erosion rate coefficient increased as eroded mass increased, contrary to theory. We believe that small erosion depths, erosion rate coefficient deviation from theory, and visual observation of horizontally varying biota and texture at the sediment surface indicate that erosion cannot solely be a function of depth but must also vary horizontally. We test this hypothesis by developing a simple numerical model that includes horizontal heterogeneity, use it to develop an artificial time series of suspended-sediment concentration (SSC) in an erosion microcosm, then analyze that time series assuming horizontal homogeneity. A shear vane was used to estimate that the horizontal standard deviation of critical shear stress was about 30% of the mean value at a site in the Delta. The numerical model of the erosion microcosm included a normal distribution of initial critical shear stress, a linear increase in critical shear stress with eroded mass, an exponential decrease of erosion rate coefficient with eroded mass, and a stepped increase in applied shear stress. The maximum SSC for each step increased gradually, thus confounding identification of a single well-defined critical shear stress as encountered with the empirical data. Analysis of the artificial SSC time series with the assumption of a homogeneous bed reproduced the original profile of critical shear stress, but the erosion rate coefficient increased with eroded mass, similar to the empirical data. Thus, the numerical experiment confirms the small-depth erosion hypothesis. A linear model of critical shear stress and eroded mass is proposed to simulate small-depth erosion, assuming that the applied and critical shear stresses quickly reach equilibrium.
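A hedged sketch of the heterogeneous-bed model described above: many bed patches draw normally distributed initial critical shear stresses, the critical stress rises linearly and the erosion-rate coefficient decays exponentially with eroded mass, and the applied shear stress increases in steps. Parameter values are illustrative, not those of the study.

```python
# Hedged sketch of the horizontally heterogeneous erosion model under a
# stepped applied shear stress. All parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_patches, dt = 500, 10.0                            # patches, time step (s)
tau_c0 = rng.normal(0.2, 0.06, n_patches).clip(0.02) # Pa; ~30% relative spread
eroded = np.zeros(n_patches)                         # eroded mass per patch (kg/m^2)
M0, k_M, k_tau = 1e-3, 40.0, 2.0                     # rate coeff., its decay, hardening

ssc_proxy = []
for tau in np.repeat([0.1, 0.2, 0.3, 0.4], 30):      # stepped applied shear stress (Pa)
    M = M0 * np.exp(-k_M * eroded)                   # rate coefficient falls with eroded mass
    tau_c = tau_c0 + k_tau * eroded                  # critical stress rises with eroded mass
    eroded += M * np.clip(tau - tau_c, 0.0, None) * dt
    ssc_proxy.append(eroded.mean())                  # stands in for SSC in the microcosm

ssc_proxy = np.array(ssc_proxy)
print("mean eroded mass at the end of each stress step:",
      ssc_proxy[[29, 59, 89, 119]].round(5))
```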
NASA Technical Reports Server (NTRS)
Getty, Stephanie A.; Brinckerhoff, William B.; Cornish, Timothy; Li, Xiang; Floyd, Melissa; Arevalo, Ricardo Jr.; Cook, Jamie Elsila; Callahan, Michael P.
2013-01-01
Laser desorption/ionization time-of-flight mass spectrometry (LD-TOF-MS) holds promise to be a low-mass, compact in situ analytical capability for future landed missions to planetary surfaces. The ability to analyze a solid sample for both mineralogical and preserved organic content with laser ionization could be compelling as part of a scientific mission pay-load that must be prepared for unanticipated discoveries. Targeted missions for this instrument capability include Mars, Europa, Enceladus, and small icy bodies, such as asteroids and comets.
Habchi, Johnny; Chia, Sean; Limbocker, Ryan; Mannini, Benedetta; Ahn, Minkoo; Perni, Michele; Hansson, Oskar; Arosio, Paolo; Kumita, Janet R.; Challa, Pavan Kumar; Cohen, Samuel I. A.; Dobson, Christopher M.; Knowles, Tuomas P. J.; Vendruscolo, Michele
2017-01-01
The aggregation of the 42-residue form of the amyloid-β peptide (Aβ42) is a pivotal event in Alzheimer’s disease (AD). The use of chemical kinetics has recently enabled highly accurate quantifications of the effects of small molecules on specific microscopic steps in Aβ42 aggregation. Here, we exploit this approach to develop a rational drug discovery strategy against Aβ42 aggregation that uses as a read-out the changes in the nucleation and elongation rate constants caused by candidate small molecules. We thus identify a pool of compounds that target specific microscopic steps in Aβ42 aggregation. We then test further these small molecules in human cerebrospinal fluid and in a Caenorhabditis elegans model of AD. Our results show that this strategy represents a powerful approach to identify systematically small molecule lead compounds, thus offering an appealing opportunity to reduce the attrition problem in drug discovery. PMID:28011763
Job Design and Ethnic Differences in Working Women’s Physical Activity
Grzywacz, Joseph G.; Crain, A. Lauren; Martinson, Brian C.; Quandt, Sara A.
2014-01-01
Objective To document the role job control and schedule control play in shaping women’s physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Methods Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Results Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Conclusions Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time “created” by schedule flexibility for personal health enhancement. PMID:24034681
Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers
NASA Astrophysics Data System (ADS)
Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu
2017-10-01
Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.
Small-scale seismic inversion using surface waves extracted from noise cross correlation.
Gouédard, Pierre; Roux, Philippe; Campillo, Michel
2008-03-01
Green's functions can be retrieved between receivers from the correlation of ambient seismic noise or with an appropriate set of randomly distributed sources. This principle is demonstrated in small-scale geophysics using noise sources generated by human steps during a 10-min walk in the alignment of a 14-m-long accelerometer line array. The time-domain correlation of the records yields two surface wave modes extracted from the Green's function between each pair of accelerometers. A frequency-wave-number Fourier analysis yields each mode contribution and their dispersion curve. These dispersion curves are then inverted to provide the one-dimensional shear velocity of the near surface.
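A toy numerical analogue of this retrieval, with made-up numbers (1 kHz sampling, 200 m/s surface-wave speed, 7 m receiver separation) rather than the field geometry, can be written as:

```python
# Toy illustration (not the field processing of the study): cross-correlating two
# receivers' records of random impulsive noise recovers the inter-receiver travel
# time, i.e. the main arrival of the empirical Green's function.
import numpy as np
from scipy.signal import correlate

fs, c, dx = 1000.0, 200.0, 7.0          # assumed sampling rate, speed, spacing
lag_true = dx / c                        # expected arrival time, 0.035 s
n = int(60 * fs)                         # one minute of synthetic "footstep" noise
rng = np.random.default_rng(0)

noise = np.zeros(n)
noise[rng.integers(0, n, 2000)] = rng.normal(size=2000)     # impulsive sources

shift = int(round(lag_true * fs))
rec_a = noise + 0.01 * rng.normal(size=n)                   # near receiver
rec_b = np.roll(noise, shift) + 0.01 * rng.normal(size=n)   # far receiver (delayed)

xcorr = correlate(rec_b, rec_a, mode="full", method="fft")  # time-domain correlation
lags = np.arange(-n + 1, n) / fs
peak_lag = lags[np.argmax(xcorr)]                           # ~ lag_true
print(f"estimated travel time {peak_lag:.3f} s vs true {lag_true:.3f} s")
```

In the study the same principle is applied mode by mode, with a frequency-wavenumber analysis separating the two surface-wave modes before their dispersion curves are inverted.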
Defense Acquisitions: Assessments of Selected Weapon Programs
2010-03-01
improved availability for small terminals. It is to replace the Ultra High Frequency (UHF) Follow-On (UFO) satellite system currently in operation...of MUOS capabilities is time-critical due to the operational failures of two UFO satellites. The MUOS program has taken several steps to address...failures of two UFO satellites. Based on the current health of on-orbit satellites, UHF communication capabilities are predicted to fall below the
Trend assessment: applications for hydrology and climate research
NASA Astrophysics Data System (ADS)
Kallache, M.; Rust, H. W.; Kropp, J.
2005-02-01
The assessment of trends in climatology and hydrology still is a matter of debate. Capturing typical properties of time series, like trends, is highly relevant for the discussion of potential impacts of global warming or flood occurrences. It provides indicators for the separation of anthropogenic signals and natural forcing factors by distinguishing between deterministic trends and stochastic variability. In this contribution river run-off data from gauges in Southern Germany are analysed regarding their trend behaviour by combining a deterministic trend component and a stochastic model part in a semi-parametric approach. In this way the trade-off between trend and autocorrelation structure can be considered explicitly. A test for a significant trend is introduced via three steps: First, a stochastic fractional ARIMA model, which is able to reproduce short-term as well as long-term correlations, is fitted to the empirical data. In a second step, wavelet analysis is used to separate the variability of small and large time-scales assuming that the trend component is part of the latter. Finally, a comparison of the overall variability to that restricted to small scales results in a test for a trend. The extraction of the large-scale behaviour by wavelet analysis provides a clue concerning the shape of the trend.
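A schematic version of the scale-separation step (omitting the FARIMA fit that the full test relies on) can be sketched with PyWavelets; the wavelet family, decomposition level and the synthetic series are illustrative assumptions:

```python
# Schematic only: wavelet-separate small and large time scales of a run-off-like
# series and compare the small-scale variability with the overall variability.
# The FARIMA modelling of the full test is omitted; 'db4' and level 6 are
# arbitrary illustrative choices.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(2000)
series = 0.002 * t + rng.normal(0.0, 1.0, t.size)    # weak trend + noise

coeffs = pywt.wavedec(series, "db4", level=6)        # [cA6, cD6, ..., cD1]
coeffs_small = [np.zeros_like(coeffs[0])] + coeffs[1:]       # drop the large scales
small_scale = pywt.waverec(coeffs_small, "db4")[: series.size]

ratio = small_scale.var(ddof=1) / series.var(ddof=1)
print(f"small-scale / total variance = {ratio:.2f}")  # markedly below 1 hints at a trend
```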
Demitri, Nevine; Zoubir, Abdelhak M
2017-01-01
Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples, while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load, while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, herewith decreasing the measurement time, and, thus, improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art with sufficient accuracy according to the most recent ISO standards and reduce measurement time significantly compared to state-of-the-art methods.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be organized to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is also investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By limiting the number of vectors to a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.
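The following minimal sketch conveys the idea for a 1-D model problem rather than the original flow solver: each implicit time step of viscous Burgers' equation is advanced with a few Newton updates, each solved by restarted GMRES with a matrix-free Jacobian-vector product and a small Krylov dimension (N = 10 here):

```python
# Minimal sketch of the idea (not the original solver): backward-Euler steps of
# 1-D viscous Burgers' equation, each advanced by a few Newton updates solved
# with restarted GMRES and a finite-difference, matrix-free Jacobian-vector product.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

nx, dx, dt, nu = 200, 1.0 / 200, 2e-3, 1e-3
x = np.linspace(0.0, 1.0, nx)
u = np.sin(2 * np.pi * x)                     # initial condition (periodic domain)

def residual(u_new, u_old):
    """Backward-Euler residual of u_t + u u_x = nu u_xx with periodic stencils."""
    ux = (np.roll(u_new, -1) - np.roll(u_new, 1)) / (2 * dx)
    uxx = (np.roll(u_new, -1) - 2 * u_new + np.roll(u_new, 1)) / dx**2
    return (u_new - u_old) / dt + u_new * ux - nu * uxx

for _ in range(50):                           # time marching
    u_old = u.copy()
    for _ in range(3):                        # a few Newton iterations per step
        r = residual(u, u_old)
        eps = 1e-7
        J = LinearOperator(
            (nx, nx),
            matvec=lambda v: (residual(u + eps * v, u_old) - r) / eps,
        )
        du, info = gmres(J, -r, restart=10, maxiter=50)   # small Krylov space
        u = u + du
```

Keeping the restart length small keeps the per-iteration cost proportional to N, in the spirit of the (N+1)-times-a-noniterative-scheme estimate quoted above.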
Langevin dynamics in inhomogeneous media: Re-examining the Itô-Stratonovich dilemma
NASA Astrophysics Data System (ADS)
Farago, Oded; Grønbech-Jensen, Niels
2014-01-01
The diffusive dynamics of a particle in a medium with space-dependent friction coefficient is studied within the framework of the inertial Langevin equation. In this description, the ambiguous interpretation of the stochastic integral, known as the Itô-Stratonovich dilemma, is avoided since all interpretations converge to the same solution in the limit of small time steps. We use a newly developed method for Langevin simulations to measure the probability distribution of a particle diffusing in a flat potential. Our results reveal that both the Itô and Stratonovich interpretations converge very slowly to the uniform equilibrium distribution for vanishing time step sizes. Three other conventions exhibit significantly improved accuracy: (i) the "isothermal" (Hänggi) convention, (ii) the Stratonovich convention corrected by a drift term, and (iii) a newly proposed convention employing two different effective friction coefficients representing two different averages of the friction function during the time step. We argue that the most physically accurate dynamical description is provided by the third convention, in which the particle experiences a drift originating from the dissipation instead of the fluctuation term. This feature is directly related to the fact that the drift is a result of an inertial effect that cannot be well understood in the Brownian, overdamped limit of the Langevin equation.
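For orientation only, the sketch below integrates the inertial Langevin equation with a space-dependent friction using a plain explicit update in which the friction is evaluated at the start of each step; this corresponds to the naive conventions discussed above and is not the improved two-coefficient scheme proposed in the paper. All parameters are illustrative.

```python
# Baseline sketch only: explicit integration of the inertial Langevin equation
# m dv = -gamma(x) v dt + sqrt(2 kT gamma(x)) dW in a flat potential, with the
# friction evaluated at the start of each step (an Ito-like choice). The paper's
# improved convention uses two different averages of gamma over the step; that
# refinement is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
m, kT, dt, n_steps, n_part = 1.0, 1.0, 1e-3, 50_000, 500

def gamma(x):                       # space-dependent friction coefficient
    return 1.0 + 0.8 * np.sin(2 * np.pi * x)

x = rng.uniform(0.0, 1.0, n_part)   # periodic box [0, 1), flat potential
v = rng.normal(0.0, np.sqrt(kT / m), n_part)

for _ in range(n_steps):
    g = gamma(x)
    noise = rng.normal(size=n_part)
    v += (-g * v * dt + np.sqrt(2.0 * kT * g * dt) * noise) / m
    x = (x + v * dt) % 1.0

# In equilibrium the position histogram should be uniform; with this naive
# convention and a finite dt it converges only slowly, as reported in the paper.
hist, _ = np.histogram(x, bins=20, range=(0.0, 1.0), density=True)
```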
Technology: Digital Photography in an Inner-City Fifth Grade, Part 1
ERIC Educational Resources Information Center
Riner, Phil
2005-01-01
Research tells us we can learn complex tasks most easily if they are taught in "small sequential steps." This column is about the small sequential steps that unlocked the powers of digital photography, of portraiture, and of student creativity. The strategies and ideas described in this article came as a result of working with…
ERIC Educational Resources Information Center
Bass, Kristin M.; Drits-Esser, Dina; Stark, Louisa A.
2016-01-01
The credibility of conclusions made about the effectiveness of educational interventions depends greatly on the quality of the assessments used to measure learning gains. This essay, intended for faculty involved in small-scale projects, courses, or educational research, provides a step-by-step guide to the process of developing, scoring, and…
Regression Analysis of a Disease Onset Distribution Using Diagnosis Data
Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.
2008-01-01
Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.
1980-01-01
A user's guide is provided for a computer code which calculates the laminar and turbulent hypersonic flows about blunt axisymmetric bodies, such as spherically blunted cones, hyperboloids, etc., at zero and small angles of attack. The code is written in STAR FORTRAN language for the CDC-STAR-100 computer. Time-dependent, viscous-shock-layer-type equations are used to describe the flow field. These equations are solved by an explicit, two-step, time asymptotic, finite-difference method. For the turbulent flow, a two-layer, eddy-viscosity model is used. The code provides complete flow-field properties including shock location, surface pressure distribution, surface heating rates, and skin-friction coefficients. This report contains descriptions of the input and output, the listing of the program, and a sample flow-field solution.
Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi
2018-06-01
Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively-measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively-measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12, p < 0.01) and nearest office destinations (e.g., B = -6.45, 95% CI: -11.88, -0.41, p < 0.05) and visibility of workstations when standing (B = -2.35, 95% CI: -3.53, -1.18, p < 0.001). The magnitude of these associations was small. There were no associations between spatial variables and sitting time per work hour. Contrary to our hypothesis, the further participants were from office destinations the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other office buildings to establish whether a specific office typology may yield more promising results.
Small High-Speed Self-Acting Shaft Seals for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Burcham, R. E.; Boynton, J. L.
1977-01-01
Design analysis, fabrication, and experimental evaluation were performed on three self-acting face-type LOX seal designs and one circumferential-type helium seal design. The LOX seals featured Rayleigh step lift pad and spiral groove geometry for lift augmentation. Machined metal bellows and piston ring secondary seal designs were tested. The helium purge seal featured floating rings with Rayleigh step lift pads. The Rayleigh step pad piston ring and the spiral groove LOX seals were successfully tested for approximately 10 hours in liquid oxygen. The helium seal was successfully tested for 24 hours. The shrouded Rayleigh step hydrodynamic lift pad LOX seal is feasible for advanced, small, high-speed oxygen turbopumps.
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis on, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made for the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only at some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensors. Here, the method is briefly described with selected numerical experiments.
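A heavily simplified stand-in for this construction (1-D linear advection, fourth-order central differences, classical RK4, and a post-step filter triggered by a plain second-difference sensor instead of the wavelet sensors used in the study) looks as follows; all numbers are illustrative:

```python
# Schematic only: 4th-order central differences + classical RK4 for 1-D linear
# advection, with a post-step nonlinear filter that adds local 2nd-order
# dissipation only where a simple second-difference sensor flags a steep feature.
import numpy as np

nx, c, cfl = 400, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / c
x = np.arange(nx) * dx
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)      # square pulse with sharp fronts

def rhs(u):
    # 4th-order central difference of -c du/dx on a periodic grid
    return -c * (8 * (np.roll(u, -1) - np.roll(u, 1))
                 - (np.roll(u, -2) - np.roll(u, 2))) / (12 * dx)

def filter_step(u, eps=0.5, threshold=0.1):
    d2 = np.roll(u, -1) - 2 * u + np.roll(u, 1)     # second-difference sensor
    sensor = np.abs(d2) > threshold * (np.abs(u).max() + 1e-12)
    return u + eps * np.where(sensor, d2, 0.0)      # localized dissipation

for _ in range(400):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    u = filter_step(u)                              # filter applied after each step
```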
Small Steps Lead to Quality Assurance and Enhancement in Qatar University
ERIC Educational Resources Information Center
Al Attiyah, Asma; Khalifa, Batoul
2009-01-01
This paper presents a brief overview of Qatar University's history since it was started in 1973. Its primary focus is on the various small, but important, steps taken by the University to address the needs of quality assurance and enhancement. The Qatar University Reform Plan is described in detail. Its aims are to continually improve the quality…
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40, Protection of Environment, Section 141.81: Applicability of corrosion control treatment steps to small, medium-size and large water systems. Environmental Protection Agency (Continued), Water Programs (Continued), National Primary Drinking Water Regulations, Control of Lead and Copper...
Coq Tacticals and PVS Strategies: A Small Step Semantics
NASA Technical Reports Server (NTRS)
Kirchner, Florent
2003-01-01
The need for a small step semantics and more generally for a thorough documentation and understanding of Coq's tacticals and PVS's strategies arises with their growing use and the progressive uncovering of their subtleties. The purpose of the following study is to provide a simple and clear formal framework to describe their detailed semantics, and highlight their differences and similarities.
Equal-mobility bed load transport in a small, step-pool channel in the Ouachita Mountains
Daniel A. Marion; Frank Weirich
2003-01-01
Abstract: Equal-mobility transport (EMT) of bed load is more evident than size-selective transport during near-bankfull flow events in a small, step-pool channel in the Ouachita Mountains of central Arkansas. Bed load transport modes were studied by simulating five separate runoff events with peak discharges between 0.25 and 1.34 m3...
DYCAST: A finite element program for the crash analysis of structures
NASA Technical Reports Server (NTRS)
Pifko, A. B.; Winter, R.; Ogilvie, P.
1987-01-01
DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changing stiffnesses in the structure are accounted for by plasticity and very large deflections. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variations due to structural failures are computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
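To illustrate one of the integrator families listed above, the sketch below applies the Newmark-beta scheme (average-acceleration parameters beta = 1/4, gamma = 1/2) to a linear single-degree-of-freedom oscillator with a short pulse load; this is a generic textbook form, not the DYCAST implementation, and it omits the variable-step error control.

```python
# Newmark-beta (average acceleration) time integration of a linear SDOF system
# m u'' + c u' + k u = p(t); illustrative parameters, fixed time step.
import numpy as np

m, c, k = 1.0, 0.1, 40.0          # mass, damping, stiffness
beta, gamma = 0.25, 0.5
dt, n = 0.01, 1000

def load(t):                       # short external pulse load
    return 1.0 if t < 0.1 else 0.0

u, v = 0.0, 0.0
a = (load(0.0) - c * v - k * u) / m
history = []
for i in range(n):
    t_new = (i + 1) * dt
    # effective stiffness and load (standard Newmark-beta algebra)
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    p_eff = (load(t_new)
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_new = p_eff / k_eff
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new
    history.append(u)              # displacement response over time
```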
Do low step count goals inhibit walking behavior: a randomized controlled study.
Anson, Denis; Madras, Diane
2016-07-01
Confirmation and quantification of observed differences in goal-directed walking behavior. Single-blind, split-half randomized trial. Small rural university, Pennsylvania, United States. A total of 94 able-bodied subjects (self-selected volunteer students, faculty and staff of a small university) were randomly assigned walking goals, and 53 completed the study. Incentivized pedometer-monitored program requiring recording the step-count for 56 days into a custom-made website providing daily feedback. Steps logged per day. During the first half of the study, the 5000 and 10,000 step groups logged significantly different step counts of 7500 and 9000, respectively (P > 0.05). During the second half of the study, the 5000 and 10,000 step groups logged 7000 and 8600 steps, respectively (significance P > 0.05). The group switched from 5000 to 10,000 steps logged 7900 steps for the first half and 9500 steps for the second half (significance P > 0.05). The group switched from 10,000 to 5000 steps logged 9700 steps for the first half and 9000 steps for the second half, which was significant (p > 0.05). Levels of walking behavior are influenced by the goals assigned. Subjects with high goals walk more than those with low goals, even if they do not meet the assigned goal. Reducing goals from a high to low level can reduce walking behavior. © The Author(s) 2015.
Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations
Casulli, V.; Cheng, R.T.
1990-01-01
In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. © 1990.
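A linearized sketch of the semi-implicit free-surface step (constant depth, the Eulerian-Lagrangian treatment of the convective terms omitted) shows the single tridiagonal solve per time step; grid, depth and the large time step are illustrative choices:

```python
# Linearized sketch: substituting the new-time face velocities into the continuity
# equation gives one tridiagonal system for the water level per time step, solved
# here with scipy.linalg.solve_banded. Advection is omitted; closed basin.
import numpy as np
from scipy.linalg import solve_banded

g, H, L = 9.81, 10.0, 10_000.0
nx = 200
dx = L / nx
dt = 20.0                                   # large step: gravity terms are implicit
lam = g * H * (dt / dx) ** 2

xc = (np.arange(nx) + 0.5) * dx
eta = 0.1 * np.exp(-((xc - L / 2) / 500.0) ** 2)   # initial free-surface hump
u = np.zeros(nx + 1)                                # face velocities, walls at ends

ab = np.zeros((3, nx))                              # banded tridiagonal matrix
ab[0, 1:] = -lam                                    # upper diagonal
ab[1, :] = 1.0 + 2.0 * lam
ab[1, 0] = ab[1, -1] = 1.0 + lam                    # boundary cells couple to one side
ab[2, :-1] = -lam                                   # lower diagonal

for _ in range(500):
    rhs = eta - H * dt / dx * (u[1:] - u[:-1])
    eta = solve_banded((1, 1), ab, rhs)
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])   # back-substitute new-time levels
```

Because the gravity-wave terms are treated implicitly, the Courant number based on sqrt(gH) can exceed one (about 4 with these numbers) without instability, which is the efficiency argument made above.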
Adjustment to time of use pricing: Persistence of habits or change
NASA Astrophysics Data System (ADS)
Rebello, Derrick Michael
1999-11-01
Generally the dynamics related to residential electricity consumption under TOU rates have not been analyzed completely. A habit persistence model is proposed to account for the dynamics that may be present as a result of recurring habits or lack of information about the effects of shifting load across TOU periods. In addition, the presence of attrition bias necessitated a two-step estimation approach. The decision to remain in the program was modeled in the first step, while demand for electricity was estimated in the second step. Results show that own-price effects and habit persistence have the most significant effect in the model. The habit effects, while small in absolute terms, are significant. Elasticity estimates show that electricity consumption is inelastic during all periods of the day. Estimates of the long-run elasticities were nearly identical to short-run estimates, showing little or no adjustment across time. Cross-price elasticities indicate a willingness to substitute consumption across periods, implying that TOU goods are weak substitutes. The most significant substitution occurs during the period of 5:00 PM to 9:00 PM, when most individuals are likely to be home and active.
CR-100 synthetic zeolite adsorption characteristics toward Northern Banat groundwater ammonia.
Tomić, Željko; Kukučka, Miroslav; Stojanović, Nikoleta Kukučka; Kukučka, Andrej; Jokić, Aleksandar
2016-10-14
The adsorption characteristics of synthetic zeolite CR-100 in a fixed-bed system using continuous flow of groundwater containing elevated ammonia concentration were examined. A novel mathematical approach was used to calculate the adsorbent mass throughout the mass transfer zone, as well as the zeolite adsorption capacity at every sampling point in time or effluent volume. The investigated adsorption process consisted of three clearly separated steps, as indicated by the sorption kinetics. The first step was characterized by a decrease and then only small changes in effluent ammonia concentration with experiment time and with the quantity of ammonia adsorbed per unit mass of zeolite. The consequences of this phenomenon were visible in the plots of the Freundlich and Langmuir isotherm models, which showed better linear correlation when the data points belonging to the first step were excluded. The Temkin and Dubinin-Radushkevich isotherm models showed the opposite tendency, with better fitting for the overall measurements. According to the obtained isotherm parameters, the investigated process was found to be multilayer physicochemical adsorption, and synthetic zeolite CR-100 was found to be a promising material for removal of ammonia from Northern Banat groundwater, with an ammonia removal efficiency of 90%.
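The isotherm fits mentioned above can be reproduced in outline with nonlinear least squares; the Ce/qe values below are synthetic placeholders, not the reported groundwater measurements:

```python
# Sketch of fitting the Langmuir and Freundlich isotherms with nonlinear least
# squares; Ce (equilibrium ammonia concentration, mg/L) and qe (adsorbed amount,
# mg/g) are placeholder numbers for illustration only.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([0.1, 0.3, 0.6, 1.0, 1.5, 2.2, 3.0])      # placeholder data
qe = np.array([0.8, 1.9, 3.1, 4.2, 5.0, 5.8, 6.3])

def langmuir(C, q_max, K_L):
    return q_max * K_L * C / (1.0 + K_L * C)

def freundlich(C, K_F, n):
    return K_F * C ** (1.0 / n)

p_L, _ = curve_fit(langmuir, Ce, qe, p0=[8.0, 1.0])
p_F, _ = curve_fit(freundlich, Ce, qe, p0=[3.0, 2.0])
print("Langmuir  q_max, K_L :", p_L)
print("Freundlich K_F, n    :", p_F)
# Excluding the points from the first kinetic step before fitting, as done above
# for the linearized plots, is simply a matter of slicing Ce and qe.
```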
When the mean is not enough: Calculating fixation time distributions in birth-death processes.
Ashcroft, Peter; Traulsen, Arne; Galla, Tobias
2015-10-01
Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
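A brute-force companion to the spectral approach described above is to sample absorption times of a one-step birth-death chain directly; the sketch below does this for a neutral Moran process with two absorbing states and compares the mean with the median of the resulting skewed distribution (population size and initial mutant count are illustrative):

```python
# Direct sampling of absorption (fixation or extinction) times in a neutral Moran
# process, a one-step birth-death chain with absorbing states 0 and N.
import numpy as np

rng = np.random.default_rng(7)
N, i0, n_runs = 50, 1, 2000

def absorption_time(i):
    t = 0.0
    while 0 < i < N:
        rate = 2.0 * i * (N - i) / N**2           # total event rate (T+ = T-)
        t += rng.exponential(1.0 / rate)          # waiting time to next event
        i += 1 if rng.random() < 0.5 else -1      # neutral: both moves equally likely
    return t

times = np.array([absorption_time(i0) for _ in range(n_runs)])
print(f"mean = {times.mean():.1f}, median = {np.median(times):.1f}")
# For such broad, skewed distributions the mean lies well above the median, which
# is the paper's motivation for looking beyond the mean absorption time.
```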
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is used to detect and analyze changes in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. Because the tracked object is not always clearly visible, the tracking result can be imprecise; the reasons include low-quality video, system noise, small object size, and other factors. To improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, either to cropped regions of some frames or to whole frames. The second step tracks the object in the resulting super-resolved images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images; in this research a single-frame super-resolution technique is proposed for the tracking approach, a variant that has the advantage of fast computation time. The method used for tracking is Camshift, whose advantage is a simple calculation based on the HSV color histogram that copes with varying object color. The computational complexity and the large memory requirements of implementing super-resolution together with tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely over various backgrounds, with shape changes of the object, and in good lighting conditions.
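A minimal OpenCV sketch of the "enhance then track" pipeline is given below, with plain bicubic upsampling standing in for the single-frame super-resolution step and Camshift used for tracking; the video path, scale factor and initial bounding box are placeholders:

```python
# Minimal OpenCV sketch: upsample each frame (bicubic interpolation as a crude
# stand-in for single-frame super-resolution), then track with HSV-histogram
# CamShift. Path and initial ROI are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("input_video.avi")       # placeholder path
scale = 2                                       # upsampling factor
x, y, w, h = 120, 80, 20, 20                    # placeholder initial ROI (small object)
track_window = (x * scale, y * scale, w * scale, h * scale)

ok, frame = cap.read()
assert ok, "failed to read the first frame"
frame = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
roi = frame[y * scale:(y + h) * scale, x * scale:(x + w) * scale]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    # ret holds the rotated box; track_window is the updated axis-aligned window
cap.release()
```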
A glitch in the millisecond pulsar J0613-0200
NASA Astrophysics Data System (ADS)
McKee, J. W.; Janssen, G. H.; Stappers, B. W.; Lyne, A. G.; Caballero, R. N.; Lentati, L.; Desvignes, G.; Jessner, A.; Jordan, C. A.; Karuppusamy, R.; Kramer, M.; Cognard, I.; Champion, D. J.; Graikou, E.; Lazarus, P.; Osłowski, S.; Perrodin, D.; Shaifullah, G.; Tiburzi, C.; Verbiest, J. P. W.
2016-09-01
We present evidence for a small glitch in the spin evolution of the millisecond pulsar J0613-0200, using the EPTA Data Release 1.0, combined with Jodrell Bank analogue filterbank times of arrival (TOAs) recorded with the Lovell telescope and Effelsberg Pulsar Observing System TOAs. A spin frequency step of 0.82(3) nHz and frequency derivative step of -1.6(39) × 10-19 Hz s-1 are measured at the epoch of MJD 50888(30). After PSR B1821-24A, this is only the second glitch ever observed in a millisecond pulsar, with a fractional size in frequency of Δν/ν = 2.5(1) × 10-12, which is several times smaller than the previous smallest glitch. PSR J0613-0200 is used in gravitational wave searches with pulsar timing arrays, and is to date only the second such pulsar to have experienced a glitch in a combined 886 pulsar-years of observations. We find that accurately modelling the glitch does not impact the timing precision for pulsar timing array applications. We estimate that for the current set of millisecond pulsars included in the International Pulsar Timing Array, there is a probability of ˜50 per cent that another glitch will be observed in a timing array pulsar within 10 years.
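As a quick consistency check of the quoted fractional glitch size, one can divide the measured frequency step by the pulsar's spin frequency; the spin frequency of PSR J0613-0200 (about 326.6 Hz, i.e. a 3.06 ms period) is taken from pulsar catalogues and is an assumption here, since it is not stated above:

```python
# Consistency check of the quoted fractional glitch size. The spin frequency of
# PSR J0613-0200 (~326.6 Hz) is an assumed catalogue value, not given in the abstract.
d_nu = 0.82e-9          # measured frequency step [Hz]
nu = 326.6              # assumed spin frequency [Hz]
print(f"delta_nu / nu = {d_nu / nu:.2e}")   # ~2.5e-12, matching the quoted value
```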
Smart Steps to Sustainability 2.0
Smart Steps to Sustainability provides small business owners and managers with practical advice and tools to implement sustainable and environmentally-preferable business practices that go beyond compliance.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scale after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
Ignition dynamics of a laminar diffusion flame in the field of a vortex embedded in a shear flow
NASA Technical Reports Server (NTRS)
Macaraeg, Michele G.; Jackson, T. L.; Hussaini, M. Y.
1994-01-01
The role of streamwise-spanwise vorticity interactions that occur in turbulent shear flows on flame/vortex interactions is examined by means of asymptotic analysis and numerical simulation in the limit of small Mach number. An idealized model is employed to describe the interaction process. The model consists of a one-step, irreversible Arrhenius reaction between initially unmixed species occupying adjacent half-planes which are then allowed to mix and react in the presence of a streamwise vortex embedded in a shear flow. It is found that the interaction of the streamwise vortex with shear gives rise to small-scale velocity oscillations which increase in magnitude with shear strength. These oscillations give rise to regions of strong temperature gradients via viscous heating, which can lead to multiple ignition points and substantially decrease ignition times. The evolution in time of the temperature and mass-fraction fields is followed, and emphasis is placed on the ignition time and structure as a function of vortex and shear strength.
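A zero-dimensional caricature of the one-step, irreversible Arrhenius chemistry used in this model (with mixing, shear and vortex dynamics stripped out) shows how an ignition time emerges as thermal runaway; the nondimensional Damkoehler number, activation temperature and heat release below are illustrative values:

```python
# Zero-dimensional one-step Arrhenius chemistry:
# dY/dt = -Da * Y * exp(-Ta / T),   dT/dt = +q * Da * Y * exp(-Ta / T).
# Ignition time is read off as the moment of steepest temperature rise.
import numpy as np
from scipy.integrate import solve_ivp

Da, Ta, q = 1.0e3, 10.0, 4.0        # illustrative nondimensional parameters

def rhs(t, s):
    Y, T = s
    w = Da * Y * np.exp(-Ta / T)    # one-step Arrhenius reaction rate
    return [-w, q * w]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-10)
t_grid = np.linspace(0.0, 10.0, 4000)
T = sol.sol(t_grid)[1]
t_ign = t_grid[np.argmax(np.gradient(T, t_grid))]   # time of thermal runaway
print(f"ignition time ~ {t_ign:.2f} (nondimensional)")
```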
A small, linear, piezoelectric ultrasonic cryomotor
NASA Astrophysics Data System (ADS)
Dong, Shuxiang; Yan, Li; Wang, Naigang; Viehland, Dwight; Jiang, Xiaoning; Rehrig, Paul; Hackenberger, Wes
2005-01-01
A small, linear-type, piezoelectric ultrasonic cryomotor has been developed for precision positioning at extremely low temperatures (⩾-200°C). This cryomotor consists of a pair of Pb(Mg1/3Nb2/3)O3-PbTiO3 single crystal stacks, which are piezoelectrically excited into the rotating third-bending mode of the cryomotor stator's center, which in turn drives a contacted slider into linear motion via frictional forces. The performance characteristics achieved by the cryomotor are: (i) a maximum linear speed of >50 mm/s; (ii) a stroke of >10 mm; (iii) a driving force of >0.2 N; (iv) a response time of ˜29 ms; and (v) a step resolution of ˜20 nm.
Role of delay-based reward in the spatial cooperation
NASA Astrophysics Data System (ADS)
Wang, Xu-Wen; Nie, Sen; Jiang, Luo-Luo; Wang, Bing-Hong; Chen, Shi-Ming
2017-01-01
Strategy selection in games, a typical form of decision making, usually brings a noticeable reward for players, and that reward is discounted if its delivery is delayed. The discounted value captures a trade-off: earning sooner with a small reward, or later with a larger delayed reward. Here, we investigate the effects of delayed rewards on cooperation in structured populations. It is found that delayed reward supports the spreading of cooperation in square-lattice, small-world and random networks. In particular, intermediate reward differences between delays yield the highest cooperation level. Interestingly, cooperative individuals with the same delay time steps form clusters to resist the invasion of defectors, and cooperative individuals with the lowest delayed reward survive because they form the largest clusters in the lattice.
Emergence of small-world structure in networks of spiking neurons through STDP plasticity.
Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas
2011-01-01
In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
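The graph-analysis step described above can be sketched as follows; here the weight matrix is simply random (no STDP simulation is performed) and the thresholded graph is treated as undirected for the clustering and path-length calculations, which is a simplification of the directed analysis in the study:

```python
# Sketch of the post-processing step: threshold a weight matrix into a binary
# graph and compute clustering coefficient, characteristic path length and a
# small-world index. The weight matrix is random here, so the index comes out
# near 1; STDP-evolved networks in the study develop nontrivial structure.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, theta = 200, 0.95
W = rng.random((n, n))                     # stand-in for evolved synaptic weights
np.fill_diagonal(W, 0.0)

A = (W > theta).astype(int)                # binary connection matrix
G = nx.from_numpy_array(np.maximum(A, A.T))          # symmetrized, undirected
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Erdos-Renyi reference values for a graph of the same size and density
k = 2 * G.number_of_edges() / G.number_of_nodes()    # mean degree
C_rand = k / G.number_of_nodes()
L_rand = np.log(G.number_of_nodes()) / np.log(k)
sigma = (C / C_rand) / (L / L_rand)                  # small-world index
print(f"C = {C:.3f}, L = {L:.2f}, small-world index = {sigma:.2f}")
```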
Seakeeping with the semi-Lagrangian particle finite element method
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio
2017-07-01
The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
Small Unix data acquisition system
NASA Astrophysics Data System (ADS)
Engberg, D.; Glanzman, T.
1994-02-01
An R&D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R&D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R&D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, we have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
Self-calibration of robot-sensor system
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1990-01-01
The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can be solved more readily and with relatively small computational effort.
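A toy version of the second, variational step is sketched below: given a nominal transform and matched point pairs, the rotation correction is linearized with a small-angle assumption and the corrections are obtained from a linear least-squares solve. The "true" transform, nominal guess and point data are synthetic.

```python
# Variational refinement sketch: q ~ (I + [d_theta]_x) R0 p + t0 + d_t, solved
# for (d_theta, d_t) by linear least squares under a small-angle assumption.
import numpy as np

rng = np.random.default_rng(2)

def rot(axis_angle):
    """Rotation matrix from an axis-angle vector (Rodrigues formula)."""
    th = np.linalg.norm(axis_angle)
    if th < 1e-12:
        return np.eye(3)
    k = axis_angle / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

R_true, t_true = rot(np.array([0.05, -0.03, 0.08])), np.array([0.10, -0.05, 0.20])
P = rng.uniform(-1.0, 1.0, (30, 3))                           # points in the robot frame
Q = P @ R_true.T + t_true + 1e-4 * rng.normal(size=P.shape)   # sensor measurements

R0, t0 = np.eye(3), np.zeros(3)                 # nominal (approximate) solution

# residual r = Q - (R0 P + t0) ~ [-[R0 p]_x  I] (d_theta, d_t)
A = np.vstack([np.hstack([-skew(R0 @ p), np.eye(3)]) for p in P])
b = (Q - (P @ R0.T + t0)).ravel()
x, *_ = np.linalg.lstsq(A, b, rcond=None)
d_theta, d_t = x[:3], x[3:]
R_new, t_new = rot(d_theta) @ R0, t0 + d_t      # refined transform (close to the truth)
```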
Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine.
Greer, Andrew Im; Della-Rosa, Benoit; Khokhar, Ali Z; Gadegaard, Nikolaj
2016-12-01
The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm(2) of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.
Näreoja, Tuomas; Rosenholm, Jessica M; Lamminmäki, Urpo; Hänninen, Pekka E
2017-05-01
Thyrotropin or thyroid-stimulating hormone (TSH) is used as a marker for thyroid function. More precise and more sensitive immunoassays are needed to facilitate continuous monitoring of thyroid dysfunctions and to assess the efficacy of the selected therapy and dosage of medication. Moreover, most thyroid diseases are autoimmune diseases making TSH assays very prone to immunoassay interferences due to autoantibodies in the sample matrix. We have developed a super-sensitive TSH immunoassay utilizing nanoparticle labels with a detection limit of 60 nU L-1 in preprocessed serum samples by reducing nonspecific binding. The developed preprocessing step by affinity purification removed interfering compounds and improved the recovery of spiked TSH from serum. The sensitivity enhancement was achieved by stabilization of the protein corona of the nanoparticle bioconjugates and a spot-coated configuration of the active solid-phase that reduced sedimentation of the nanoparticle bioconjugates and their contact time with antibody-coated solid phase, thus making use of the higher association rate of specific binding due to high avidity nanoparticle bioconjugates. Graphical abstract: We were able to decrease the lowest limit of detection and increase sensitivity of TSH immunoassay using Eu(III)-nanoparticles. The improvement was achieved by decreasing binding time of nanoparticle bioconjugates by small capture area and fast circular rotation. Also, we applied a step to stabilize protein corona of the nanoparticles and a serum-preprocessing step with a structurally related antibody.
NASA Astrophysics Data System (ADS)
Ebinger, C. J.; Tiberi, C.; Fowler, M. R.; Hunegnaw, A.
2001-12-01
The southern Afar depression, Africa, is virtually the only area worldwide where the transition from continental rifting to seafloor spreading is exposed onshore. During mid-Miocene to Pleistocene time the rift valley was segmented along its length by long normal faults; since Pleistocene time, faulting and magmatism have jumped to a narrow ca. 60 km-long volcanic mound marked by small faults. These magmatic segments are structurally similar to slow-spreading mid-ocean ridges, yet the rift is floored by continental crust. As part of the Ethiopia Afar Geoscientific Lithospheric Experiment (EAGLE), we examine new and existing Bouguer gravity anomaly data from the rift to study the modification of the lithosphere by extensional and magmatic processes. New and existing Bouguer gravity anomaly data also show an along-axis segmentation of elongate relative positive anomalies that coincide with the magmatic segments. These anomalies are superposed on a regionally eastward increasing field as one approaches true seafloor spreading in the Gulf of Aden, and crustal thickness decreases. Quite remarkably, the magmatic segment boundaries, where data coverage is good, are marked by 15-25 mGal steps. The amplitude of the along-axis steps, as well as their across-axis characteristics, indicate that magmatic intrusion and ca. 2 km relief at the crust-mantle interface contribute to the steps. We use inverse and forward models of gravity data constrained by existing seismic and petrological data to evaluate models for the along-axis steps. EAGLE seismic data will be acquired across and along the magmatic segments to improve our understanding of breakup processes.
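A back-of-envelope check of how much ~2 km of crust-mantle relief could contribute uses the infinite Bouguer slab formula; the density contrast below is an assumed typical crust-mantle value, not a number derived from the EAGLE data:

```python
# Infinite Bouguer slab estimate, delta_g = 2 * pi * G * delta_rho * h, for the
# gravity effect of ~2 km of relief at the crust-mantle interface. The density
# contrast of 350 kg/m^3 is an assumed typical value.
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
delta_rho = 350.0        # assumed crust-mantle density contrast [kg m^-3]
h = 2000.0               # Moho relief [m]

delta_g = 2 * math.pi * G * delta_rho * h      # [m s^-2]
print(f"{delta_g / 1e-5:.1f} mGal")            # 1 mGal = 1e-5 m/s^2; ~29 mGal here
```

The resulting value of roughly 29 mGal is of the same order as the observed 15-25 mGal along-axis steps, consistent with Moho relief and magmatic intrusion jointly producing the anomalies.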
Variational PDE Models in Image Processing
2002-07-31
[Fragmentary OCR text from the report body: the recoverable passage describes treating the image sequence as a continuous "movie" sampled with some small time step, with the estimated optical flows (i.e., velocity fields) given at each moment; the surrounding equations and the interleaved reference entries (Chan and Vese on active contour and segmentation models using geometric PDEs for medical imaging; Bertalmio, Sapiro, Caselles and Ballester on image inpainting; Birkhoff and De Boor) are garbled beyond recovery.]
Rehabilitation Associate Training for Employed Staff. Task Analysis (RA-2).
ERIC Educational Resources Information Center
Davis, Michael J.; Jensen, Mary
This learning module, which is intended for use in in-service training for vocational rehabilitation counselors, deals with writing a task analysis. Step-by-step guidelines are provided for breaking down a task into small teachable steps by analyzing the task in terms of the way in which it will be performed once learned (method), the steps to be…
49 CFR 26.39 - Fostering small business participation.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 49, Transportation, § 26.39, Fostering small business participation (Requirements for DBE Programs for Federally-Assisted Contracting)... competition by small business concerns, taking all reasonable steps to eliminate obstacles to their...
49 CFR 26.39 - Fostering small business participation.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 49, Transportation, § 26.39, Fostering small business participation (Requirements for DBE Programs for Federally-Assisted Contracting)... competition by small business concerns, taking all reasonable steps to eliminate obstacles to their...
Forward-facing steps induced transition in a subsonic boundary layer
NASA Astrophysics Data System (ADS)
Zh, Hui; Fu, Song
2017-10-01
A forward-facing step (FFS) immersed in a subsonic boundary layer is studied through a high-order flux reconstruction (FR) method to highlight the flow transition induced by the step. The step height is a third of the local boundary-layer thickness. The Reynolds number based on the step height is 720. Inlet disturbances are introduced giving rise to streamwise vortices upstream of the step. It is observed that these small-scale streamwise structures interact with the step and hairpin vortices are quickly developed after the step leading to flow transition in the boundary layer.
Controlling superconductivity in La2-xSrxCuO4+δ by ozone and vacuum annealing
Leng, Xiang; Bozovic, Ivan
2014-11-21
In this study we performed a series of ozone and vacuum annealing experiments on epitaxial La2-xSrxCuO4+δ thin films. The transition temperature after each annealing step has been measured by the mutual inductance technique. The relationship between the effective doping and the vacuum annealing time has been studied. Short-time ozone annealing at 470 °C oxidizes an underdoped film all the way to the overdoped regime. The subsequent vacuum annealing at 350 °C to 380 °C slowly brings the sample across the optimal doping point back to the undoped, non-superconducting state. Several ozone and vacuum annealing cycles have been done on the same sample and the effects were found to be repeatable and reversible. Vacuum annealing of ozone-loaded LSCO films is a very controllable process, allowing one to tune the doping level of LSCO in small steps across the superconducting dome, which can be used for fundamental physics studies.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. A standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
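To make the "linearly implicit" idea concrete, here is a minimal sketch of one θ-step for a 1D American put under the assumption that the added continuous term is a penalty-type source evaluated at the previous time level, so that only a linear system is solved and no Newton iteration is needed. The grid, the parameters, and the penalty form are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def bs_operator(S, sigma, r):
        """Tridiagonal finite-difference matrix for the Black-Scholes operator
        L u = 0.5*sigma^2*S^2*u_SS + r*S*u_S - r*u on an interior grid (central differences)."""
        n = len(S)
        dS = S[1] - S[0]
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            a = 0.5 * sigma**2 * S[i]**2 / dS**2
            b = r * S[i] / (2.0 * dS)
            A[i, i - 1] = a - b
            A[i, i] = -2.0 * a - r
            A[i, i + 1] = a + b
        return A

    def theta_step(u, payoff, A, dt, theta=0.5, lam=200.0):
        """One linearly implicit theta-step: the penalty lam*max(payoff-u, 0) enforcing
        u >= payoff is lagged to the old time level, so only one linear solve is needed.
        lam is chosen so that dt*lam is O(1), since the penalty is treated explicitly."""
        n = len(u)
        I = np.eye(n)
        q = lam * np.maximum(payoff - u, 0.0)            # lagged penalty source (illustrative form)
        rhs = (I + (1.0 - theta) * dt * A) @ u + dt * q
        u_new = np.linalg.solve(I - theta * dt * A, rhs)
        u_new[0], u_new[-1] = payoff[0], 0.0             # put boundary values
        return u_new

    # toy usage: American put with strike 100, marching backwards in time-to-expiry
    S = np.linspace(0.0, 300.0, 301)
    payoff = np.maximum(100.0 - S, 0.0)
    A = bs_operator(S, sigma=0.3, r=0.05)
    u = payoff.copy()
    for _ in range(200):
        u = theta_step(u, payoff, A, dt=1.0 / 200)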
Compressible, multiphase semi-implicit method with moment of fluid interface representation
Jemison, Matthew; Sussman, Mark; Arienti, Marco
2014-09-16
A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows including stiff materials, enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large "impedance mismatch."
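The asymptotic-preserving property can be illustrated with a much simpler scheme than the paper's moment-of-fluid/semi-Lagrangian method: in a 1D, single-phase, periodic setting, a semi-implicit acoustic step solves a Helmholtz-type equation for pressure whose 1/c^2 term vanishes as the sound speed grows, leaving exactly the incompressible projection Poisson equation. The sketch below assumes constant density and is purely illustrative.

    import numpy as np

    def semi_implicit_pressure_update(u_star, p_old, rho, c, dt, dx):
        """1D periodic semi-implicit update: solve
           p_new/(rho*c^2*dt^2) - d/dx((1/rho) dp_new/dx) = p_old/(rho*c^2*dt^2) - div(u_star)/dt,
        then correct the velocity. As c -> infinity the 1/c^2 terms drop out and the solve
        reduces to the standard incompressible pressure projection."""
        n = len(u_star)
        div_u = (np.roll(u_star, -1) - np.roll(u_star, 1)) / (2.0 * dx)

        eps = 1.0 / (rho * c**2 * dt**2)                 # compressibility term, vanishes as c -> inf
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = eps + 2.0 / (rho * dx**2)
            A[i, (i - 1) % n] = -1.0 / (rho * dx**2)
            A[i, (i + 1) % n] = -1.0 / (rho * dx**2)

        rhs = eps * p_old - div_u / dt
        p_new = np.linalg.solve(A, rhs)

        grad_p = (np.roll(p_new, -1) - np.roll(p_new, 1)) / (2.0 * dx)
        u_new = u_star - dt * grad_p / rho               # velocity correction
        return u_new, p_new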
Pushing particles in extreme fields
NASA Astrophysics Data System (ADS)
Gordon, Daniel F.; Hafizi, Bahman; Palastro, John
2017-03-01
The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
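For reference, the conventional Boris push that the paper improves upon splits the momentum update into a half electric impulse, a magnetic rotation, and a second half impulse. The sketch below is the textbook scheme in normalized relativistic form, not the new method introduced in the paper; in extreme fields, its third-order-in-field splitting error is what forces the small time steps discussed above.

    import numpy as np

    def boris_push(u, E, B, dt, q=1.0, m=1.0, c=1.0):
        """Standard Boris update of the normalized momentum u = gamma*v/c over one time step.
        Half electric kick -> magnetic rotation (using the half-step gamma) -> half kick."""
        qmdt2 = q * dt / (2.0 * m * c)
        u_minus = u + qmdt2 * E                    # first half of the electric impulse
        gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
        t = qmdt2 * B / gamma                      # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        u_prime = u_minus + np.cross(u_minus, t)   # rotation, step 1
        u_plus = u_minus + np.cross(u_prime, s)    # rotation, step 2 (|u| preserved by B)
        return u_plus + qmdt2 * E                  # second half of the electric impulse

    # usage: push one particle through crossed (normalized) fields
    u = np.array([0.1, 0.0, 0.0])
    E = np.array([0.0, 1.0, 0.0])
    B = np.array([0.0, 0.0, 1.0])
    for _ in range(100):
        u = boris_push(u, E, B, dt=0.01)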
An Investigation into the Relation between the Technique of Movement and Overload in Step Aerobics
Wysocka, Katarzyna
2017-01-01
The aim of this research was to determine the features of a step workout technique which may be related to motor system overloading in step aerobics. Subjects participating in the research were instructors (n = 15) and students (n = 15) without any prior experience in step aerobics. Kinematic and kinetic data were collected with the BTS SMART system, comprising 6 calibrated video cameras and two Kistler force plates. The subjects' task was to perform basic steps. The following variables were analyzed: vertical, anteroposterior, and mediolateral ground reaction forces; foot flexion and abduction/adduction angles; knee joint flexion angle; and trunk flexion angle in the sagittal plane. The foot adduction angle recorded for the instructors was significantly smaller than that of the students. The knee joint angle while stepping up was significantly higher for the instructors than for the students. Our research confirmed that foot dorsal flexion and adduction performed while stepping up increased the load on the ankle joint. Both small and large angles of knee flexion while stepping up and down resulted in knee joint injuries. A small trunk flexion angle throughout the step workout cycle disengaged the dorsal muscles, which then no longer damped the load placed on the spine. PMID:28348501
Pu, Jinji; Guo, Jianrong; Fan, Zaifeng
2014-01-01
Small RNAs, including microRNAs (miRNAs) and small interfering RNAs (siRNAs), are important regulators of plant development and gene expression. The acquisition of high-quality small RNAs is the first step in studying their expression and function, yet a small RNA extraction method for recalcitrant plant tissues with various secondary metabolites is not well established, especially for tropical and subtropical plant species rich in polysaccharides and polyphenols. Here, we developed a simple and efficient method for high-quality small RNA extraction from recalcitrant plant species. Prior to RNA isolation, a preliminary step with a CTAB-PVPP buffer system could efficiently remove compounds and secondary metabolites interfering with RNAs from homogenized lysates. Then, total RNAs were extracted with Trizol reagent, followed by a differential precipitation of high-molecular-weight (HMW) RNAs using polyethylene glycol (PEG) 8000. Finally, small RNAs could be easily recovered from the supernatant by ethanol precipitation without extra elimination steps. The small RNAs isolated from papaya showed high quality, as indicated by a clear background on gel and a distinct northern blotting signal with the miR159a probe, compared with other published protocols. Additionally, the small RNAs extracted from papaya were successfully used for validation of both predicted miRNAs and the putative conserved tasiARFs. Furthermore, the extraction method described here was also tested with several other subtropical and tropical plant tissues. The purity of the isolated small RNAs was sufficient for applications such as end-point stem-loop RT-PCR and northern blotting analysis. The simple and feasible extraction method reported here is expected to have excellent potential for isolation of small RNAs from recalcitrant plant tissues rich in polyphenols and polysaccharides. PMID:24787387
Kogi, Kazutaka
2006-01-01
Participatory programmes for occupational risk reduction are gaining importance particularly in small workplaces in both industrially developing and developed countries. To discuss the types of effective support, participatory steps commonly seen in our "work improvement-Asia" network are reviewed. The review covered training programmes for small enterprises, farmers, home workers and trade union members. Participatory steps commonly focusing on low-cost good practices locally achieved have led to concrete improvements in multiple technical areas including materials handling, workstation ergonomics, physical environment and work organization. These steps take advantage of positive features of small workplaces in two distinct ways. First, local key persons are ready to accept local good practices conveyed through personal, informal approaches. Second, workers and farmers are capable of understanding technical problems affecting routine work and taking flexible actions leading to solving them. This process is facilitated by the use of locally adjusted training tools such as local good examples, action checklists and group work methods. It is suggested that participatory occupational health programmes can work in small workplaces when they utilize low-cost good practices in a flexible manner. Networking of these positive experiences is essential.
NASA Technical Reports Server (NTRS)
Lawson, Larry
2003-01-01
It was critical for our team to find a radically different way of doing business. Deciding to build the airframe out of composites was the first step, refining processes from the boat building industry was second, and the final step was choosing a supplier. Lockheed Martin built the first prototypes at our Skunk Works facility in Palmdale, California. These units were hand-built and used early prototypical tooling. They looked great but were not affordable. We had to focus on minimizing touch labor and cycle time and reducing material costs. We needed a company to produce the composite quilts we would use to avoid hand lay-ups. The company we found surprised a lot of people. We partnered with a small company outside of Boston whose primary business was making baseball bats and golf club shafts.
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from the significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is to ensure economic and efficient utilization of DERs while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure that feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
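The moving-horizon (receding-horizon) idea can be sketched as a small linear program that is re-solved at every control step over a short look-ahead window, with only the first decision applied. The formulation below, one dispatchable generator plus one battery dispatched against a net-load forecast, is a generic illustration with made-up parameter names, not the coordination strategy of the paper.

    import numpy as np
    from scipy.optimize import linprog

    def dispatch_step(net_load_forecast, soc, cap, p_batt_max, p_gen_max, fuel_cost, dt=0.25):
        """Solve one look-ahead window. Decision variables: x = [g_0..g_{H-1}, b_0..b_{H-1}],
        g = generator output, b = battery discharge (negative = charging).
        Returns the first-step setpoints (receding horizon)."""
        H = len(net_load_forecast)
        c = np.concatenate([fuel_cost * np.ones(H), np.zeros(H)])       # minimize fuel use

        # power balance each step: g_t + b_t >= net_load_t
        A_bal = np.hstack([-np.eye(H), -np.eye(H)])
        b_bal = -np.asarray(net_load_forecast)

        # battery energy limits: 0 <= soc - dt*cumsum(b) <= cap
        L = np.tril(np.ones((H, H))) * dt                               # cumulative discharge energy
        A_soc = np.vstack([np.hstack([np.zeros((H, H)),  L]),
                           np.hstack([np.zeros((H, H)), -L])])
        b_soc = np.concatenate([soc * np.ones(H), (cap - soc) * np.ones(H)])

        bounds = [(0.0, p_gen_max)] * H + [(-p_batt_max, p_batt_max)] * H
        res = linprog(c, A_ub=np.vstack([A_bal, A_soc]), b_ub=np.concatenate([b_bal, b_soc]),
                      bounds=bounds, method="highs")
        return res.x[0], res.x[H]                                       # g_0, b_0

    # receding-horizon loop: re-forecast, solve, apply only the first step, update the state
    soc = 5.0
    for k in range(4):
        forecast = 2.0 + np.random.rand(8)        # hypothetical net-load forecast, kW
        g0, b0 = dispatch_step(forecast, soc, cap=10.0, p_batt_max=3.0, p_gen_max=5.0, fuel_cost=1.0)
        soc -= 0.25 * b0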
Real‐time monitoring and control of the load phase of a protein A capture step
Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura
2016-01-01
ABSTRACT The load phase in preparative Protein A capture steps is commonly not controlled in real‐time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin‐life time studies. While this results in a reduced productivity in batch mode, the bottleneck of suitable real‐time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the HCCF was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. The information was applied to automatically terminate the load phase, when a product breakthrough of 1.5 mg/mL was reached. In a second part of the study, the sensitivity of the method was further increased by only considering small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited a RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase, when a product breakthrough of 0.15 mg/mL was achieved. The proposed method has hence potential for the real‐time monitoring and control of capture steps at large scale production. This might enhance the resin capacity utilization, eliminate time‐consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
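A minimal version of the monitoring-and-control logic can be written with an off-the-shelf PLS implementation: calibrate a regression from UV/Vis spectra to mAb concentration on breakthrough-curve data, then predict on streaming spectra and stop the load when a breakthrough threshold is crossed. The variable names, the number of latent variables, and the threshold below are placeholders, not the values used in the study.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def calibrate(X_cal, y_cal, n_components=5):
        """X_cal: effluent UV/Vis spectra (samples x wavelengths),
        y_cal: offline mAb concentrations (mg/mL) for those samples."""
        model = PLSRegression(n_components=n_components)
        model.fit(X_cal, y_cal)
        return model

    def monitor_load(model, spectra_stream, breakthrough_limit=1.5):
        """Predict the mAb concentration for each incoming spectrum and report
        the index at which the load phase should be terminated."""
        for i, spectrum in enumerate(spectra_stream):
            c_hat = float(model.predict(spectrum.reshape(1, -1)).ravel()[0])
            if c_hat >= breakthrough_limit:        # product breakthrough reached
                return i, c_hat
        return None, None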
Basu, Amar S
2013-05-21
Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbour spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics show that highest accuracy and precision is obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real time analysis.
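The core of the detection pipeline described above (background subtraction, thresholding, small-object removal, morphological close and fill, and frame-to-frame nearest-neighbour linking) can be sketched in a few lines with numpy/scipy; the actual DMV software adds edge detection, shape discrimination, and the 20 per-droplet parameters. The thresholds and sizes below are arbitrary placeholders.

    import numpy as np
    from scipy import ndimage

    def detect_droplets(frame, background, thresh=25.0, min_area=30):
        """Return centroids and areas of droplet-like blobs in one video frame."""
        diff = np.abs(frame.astype(float) - background)                       # background subtraction
        mask = diff > thresh                                                  # intensity threshold
        mask = ndimage.binary_closing(ndimage.binary_fill_holes(mask))        # morphological close + fill
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.where(sizes >= min_area)[0] + 1                             # small object removal
        centroids = ndimage.center_of_mass(mask, labels, keep)
        return np.array(centroids), sizes[keep - 1]

    def link_frames(prev_centroids, centroids, max_dist=20.0):
        """Nearest-neighbour linking of droplets between consecutive frames."""
        links = []
        for i, c in enumerate(centroids):
            d = np.linalg.norm(prev_centroids - c, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                links.append((j, i))                                          # droplet j -> droplet i
        return links

    # the background image can be estimated as the per-pixel median over many frames:
    # background = np.median(np.stack(frames, axis=0), axis=0)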
A transient response analysis of the space shuttle vehicle during liftoff
NASA Technical Reports Server (NTRS)
Brunty, J. A.
1990-01-01
A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, which increased the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
Pop, Laura A; Pileczki, Valentina; Cojocneanu-Petric, Roxana M; Petrut, Bogdan; Braicu, Cornelia; Jurj, Ancuta M; Buiga, Rares; Achimas-Cadariu, Patriciu; Berindan-Neagoe, Ioana
2016-01-01
Background Sample processing is a crucial step for all types of genomic studies. A major challenge for researchers is to understand and predict how RNA quality affects the identification of transcriptional differences (by introducing either false-positive or false-negative errors). Nanotechnologies help improve the quality and quantity control for gene expression studies. Patients and methods The study was performed on 14 tumor and matched normal pairs of tissue from patients with bladder urothelial carcinomas. We assessed the RNA quantity by using the NanoDrop spectrophotometer and the quality by nano-microfluidic capillary electrophoresis technology provided by Agilent 2100 Bioanalyzer. We evaluated the amplification status of three housekeeping genes and one small nuclear RNA gene using the ViiA 7 platform, with specific primers. Results Every step of the sample handling protocol, which begins with sample harvest and ends with the data analysis, is of utmost importance due to the fact that it is time consuming, labor intensive, and highly expensive. High temperature of the surgical procedure does not affect the small nucleic acid sequences in comparison with the mRNA. Conclusion Gene expression is clearly affected by the RNA quality, but less affected in the case of small nuclear RNAs. We proved that the high-temperature, highly invasive transurethral resection of bladder tumor procedure damages the tissue and affects the integrity of the RNA from biological specimens. PMID:27330317
Warren, Barbour S; Maley, Mary; Sugarwala, Laura J; Wells, Martin T; Devine, Carol M
2010-01-01
Small Steps Are Easier Together (SmStep) was a locally-instituted, ecologically based intervention to increase walking by women. Participants were recruited from 10 worksites in rural New York State in collaboration with worksite leaders and Cooperative Extension educators. Worksite leaders were oriented and chose site specific strategies. Participants used pedometers and personalized daily and weekly step goals. Participants reported steps on web logs and received weekly e-mail reports over 10 weeks in the spring of 2008. Of 188 enrollees, 114 (61%) reported steps. Weekly goals were met by 53% of reporters. Intention to treat analysis revealed a mean increase of 1503 daily steps. Movement to a higher step zone over their baseline zone was found for: 52% of the sedentary (n=80); 29% of the low active (n=65); 13% of the somewhat active (n=28); and 18% of the active participants (n=10). This placed 36% of enrollees at the somewhat active or higher zones (23% at baseline, p<0.005). Workers increased walking steps through a goal-based intervention in rural worksites. The SmStep intervention provides a model for a group-based, locally determined, ecological strategy to increase worksite walking supported by local community educators and remote messaging using email and a web site. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe
2015-01-01
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from geant 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt and dose-averaged LET, LETd) using geant 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant 4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in geant 4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the geant 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive. PMID:26520716
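The two LET averages discussed above are simple weighted means over the simulated tracking steps, and the difference between them is easy to see in code: the track average weights each step's stopping power by its path length, while the dose average weights it by the energy deposited in the step. The snippet below assumes per-step energy deposits and step lengths have already been extracted from a Monte Carlo run; it is a generic post-processing illustration, not part of the geant 4 toolkit.

    import numpy as np

    def track_and_dose_averaged_let(dE, dx):
        """dE: energy deposited per tracking step (e.g. keV); dx: step length (e.g. um).
        Returns (LETt, LETd) in the corresponding ratio units (e.g. keV/um)."""
        dE = np.asarray(dE, dtype=float)
        dx = np.asarray(dx, dtype=float)
        let_per_step = dE / dx
        let_t = np.sum(let_per_step * dx) / np.sum(dx)   # track (fluence) average
        let_d = np.sum(let_per_step * dE) / np.sum(dE)   # dose average
        return let_t, let_d

    # toy example: equal-length steps with one large energy-deposition fluctuation
    lt, ld = track_and_dose_averaged_let(dE=[1.0, 1.2, 0.9, 8.0], dx=[1.0, 1.0, 1.0, 1.0])
    # LETd is pulled up by the large deposit, which is why it is the more step-size-sensitive quantity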
Old foes, new understandings: nuclear entry of small non-enveloped DNA viruses.
Fay, Nikta; Panté, Nelly
2015-06-01
The nuclear import of viral genomes is an important step of the infectious cycle for viruses that replicate in the nucleus of their host cells. Although most viruses use the cellular nuclear import machinery or some components of this machinery, others have developed sophisticated ways to reach the nucleus. Some of these have been known for some time; however, recent studies have changed our understanding of how some non-enveloped DNA viruses access the nucleus. For example, parvoviruses enter the nucleus through small disruptions of the nuclear membranes and nuclear lamina, and adenovirus tugs at the nuclear pore complex, using kinesin-1, to disassemble their capsids and deliver viral proteins and genomes into the nucleus. Here we review recent findings of the nuclear import strategies of three small non-enveloped DNA viruses, including adenovirus, parvovirus, and the polyomavirus simian virus 40. Copyright © 2015 Elsevier B.V. All rights reserved.
Miniaturized GPS Tags Identify Non-breeding Territories of a Small Breeding Migratory Songbird.
Hallworth, Michael T; Marra, Peter P
2015-06-09
For the first time, we use a small archival global positioning system (GPS) tag to identify and characterize non-breeding territories, quantify migratory connectivity, and identify population boundaries of Ovenbirds (Seiurus aurocapilla), a small migratory songbird, captured at two widely separated breeding locations. We recovered 15 (31%) GPS tags with data and located the non-breeding territories of breeding Ovenbirds from Maryland and New Hampshire, USA (0.50 ± 0.15 ha, mean ± SE). All non-breeding territories had similar environmental attributes despite being distributed across parts of Florida, Cuba and Hispaniola. New Hampshire and Maryland breeding populations had non-overlapping non-breeding population boundaries that encompassed 114,803 and 169,233 km(2), respectively. Archival GPS tags provided unprecedented pinpoint locations and associated environmental information of tropical non-breeding territories. This technology is an important step forward in understanding seasonal interactions and ultimately population dynamics of populations throughout the annual cycle.
Future aircraft networks and schedules
NASA Astrophysics Data System (ADS)
Shu, Yan
2011-07-01
Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model is constructed that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results for these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.
Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states
NASA Astrophysics Data System (ADS)
Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.
1999-02-01
The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff and system size have been studied and shown to be small compared to the computational precision except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
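As a reminder of what the Green-Kubo formalism computes in practice, the self-diffusion coefficient is the time integral of the velocity autocorrelation function, D = (1/3) * integral of <v(0).v(t)> dt. The sketch below evaluates that integral from a stored velocity trajectory; it is a generic post-processing illustration, not the code used in the paper, and the correlation length and time step are placeholders.

    import numpy as np

    def green_kubo_diffusion(velocities, dt, n_corr):
        """velocities: array of shape (n_steps, n_atoms, 3); dt: MD time step;
        n_corr: number of correlation lags to accumulate.
        Returns D = (1/3) * integral of the velocity autocorrelation function."""
        n_steps = velocities.shape[0]
        vacf = np.zeros(n_corr)
        for lag in range(n_corr):
            v0 = velocities[: n_steps - lag]
            vt = velocities[lag:]
            vacf[lag] = np.mean(np.sum(v0 * vt, axis=2))   # <v(0).v(t)>, averaged over atoms and time origins
        return np.trapz(vacf, dx=dt) / 3.0

    # with synthetic Ornstein-Uhlenbeck-like velocities, the VACF decays exponentially and the
    # integral converges once n_corr*dt exceeds the longest relaxation time, echoing the point
    # made in the abstract about relaxation times governing statistical precision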
Individual-based modelling of population growth and diffusion in discrete time.
Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone
2017-01-01
Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates in dependence of IBM model parameter settings.
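A stripped-down version of such an individual-based step, with simultaneous, independent binomial birth, death, and movement on a grid and the death probability carrying the logistic density dependence, looks roughly like the following. Parameter names and the exact form of the density dependence are illustrative assumptions, not the published model.

    import numpy as np

    rng = np.random.default_rng(0)

    def ibm_step(n, birth_p, death_p, move_p, carrying_capacity):
        """n: integer array of local population counts on a 1D grid of demes.
        Applies one discrete time step of birth, density-dependent death, and
        nearest-neighbour movement, each drawn independently."""
        births = rng.binomial(n, birth_p)
        # logistic density dependence: per-capita death probability grows with local crowding
        local_death_p = np.clip(death_p + (1.0 - death_p) * n / carrying_capacity, 0.0, 1.0)
        deaths = rng.binomial(n, local_death_p)
        n = n + births - deaths

        movers = rng.binomial(np.maximum(n, 0), move_p)
        left = rng.binomial(movers, 0.5)                  # each mover goes left or right with equal probability
        right = movers - left
        n = n - movers + np.roll(left, -1) + np.roll(right, 1)
        return np.maximum(n, 0)

    # a stochastic dispersal front emerges when a single occupied deme is iterated:
    pop = np.zeros(200, dtype=int); pop[100] = 10
    for _ in range(500):
        pop = ibm_step(pop, birth_p=0.1, death_p=0.05, move_p=0.2, carrying_capacity=50)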
A small step for science, a big one for commerce.
Birkett, Liam
2005-01-01
The excellent work that is being performed in medical science advances is to be admired and applauded. In each case the quest is for perfection and to bring the task in hand to its final solution. Along the way there are milestones being passed that may be overlooked, as to their particular merits, because the eyes are focused all the time on the ultimate goal. The conference highlights so many areas of interest and endeavour, some of which parallel, duplicate, overlap and/or complement others.
2001-09-01
...coefficient and propulsive efficiency showed that these parameters are virtually the same for both TE conditions (cT ≈ 0.40 and η ≈ 0.21). As a conclusion... difference in the way the two codes work, they yielded virtually the same solution. This shows that, for a reasonably small time step, whether the boundary...
Reinhart, F.; Huber, A.; Thiele, R.; Unden, G.
2010-01-01
The sensor kinase NreB from Staphylococcus carnosus contains an O2-sensitive [4Fe-4S]2+ cluster which is converted by O2 to a [2Fe-2S]2+ cluster, followed by complete degradation and formation of Fe-S-less apo-NreB. NreB·[2Fe-2S]2+ and apoNreB are devoid of kinase activity. NreB contains four Cys residues which ligate the Fe-S clusters. The accessibility of the Cys residues to alkylating agents was tested and used to differentiate Fe-S-containing and Fe-S-less NreB. In a two-step labeling procedure, accessible Cys residues in the native protein were first labeled by iodoacetate. In the second step, Cys residues not labeled in the first step were alkylated with the fluorescent monobromobimane (mBBr) after denaturing of the protein. In purified (aerobic) apoNreB, most (96%) of the Cys residues were alkylated in the first step, but in anaerobic (Fe-S-containing) NreB only a small portion (23%) were alkylated. In anaerobic bacteria, a very small portion of the Cys residues of NreB (9%) were accessible to alkylation in the native state, whereas most (89%) of the Cys residues from aerobic bacteria were accessible. The change in accessibility allowed determination of the half-time (6 min) for the conversion of NreB·[4Fe-4S]2+ to apoNreB after the addition of air in vitro. Overall, in anaerobic bacteria most of the NreB exists as NreB·[4Fe-4S]2+, whereas in aerobic bacteria the (Fe-S-less) apoNreB is predominant and represents the physiological form. The number of accessible Cys residues was also determined by iodoacetate alkylation followed by mass spectrometry of Cys-containing peptides. The pattern of mass increases confirmed the results from the two-step labeling experiments. PMID:19854899
Differential exocytosis from human endothelial cells evoked by high intracellular Ca2+ concentration
Zupančič, G; Ogden, D; Magnus, C J; Wheeler-Jones, C; Carter, T D
2002-01-01
Endothelial cells secrete a range of procoagulant, anticoagulant and inflammatory proteins by exocytosis to regulate blood clotting and local immune responses. The mechanisms regulating vesicular exocytosis were studied in human umbilical vein endothelial cells (HUVEC) with high-resolution membrane capacitance (Cm) measurements. The total whole-cell Cm and the amplitudes and times of discrete femtofarad (fF)-sized Cm steps due to exocytosis and endocytosis were monitored simultaneously. Intracellular calcium concentration [Ca2+]i was elevated by intracellular photolysis of calcium-DM-nitrophen to evoke secretion and monitored with the low-affinity Ca2+ indicator furaptra. Sustained elevation of [Ca2+]i to > 20 μM evoked large, slow increases in Cm of up to 5 pF in 1-2 min. Exocytotic and endocytotic steps of amplitude 0.5-110 fF were resolved, and accounted on average for ≈33 % of the total Cm change. A prominent component of Cm steps of 2.5-9.0 fF was seen and could be attributed to exocytosis of von-Willebrand-factor-containing Weibel-Palade bodies (WPb), based on the near-identical distributions of capacitance step amplitudes and calculated estimates of WPb capacitance from morphometry, and on the absence of 2.5-9.0 fF Cm steps in cells deficient in WPb. WPb secretion was delayed on average by 23 s after [Ca2+]i elevation, whereas total Cm increased immediately due to the secretion of small, non-WPb granules. The results show that following a large increase of [Ca2+]i, corresponding to strong stimulation, small vesicular components are immediately available for secretion, whereas the large WPb undergo exocytosis only after a delay. The presence of events of magnitude 9-110 fF also provides evidence of compound secretion of WPb due to prior fusion of individual granules. PMID:12411520
Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L
2017-08-01
Quantitative assessment of the micro-structure of materials is of key importance in many fields including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, the acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds including examples of natural and synthetic polymers, and ceramics were analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of the pixel size and rotation step. The results showed that the acquisition parameters could statistically significantly affect the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important: the differences can be as high as 24% in mean porosity on average, and up to 19.5 h of characterization time and 166 GB of data storage per sample, even for a sample of relatively small volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. Herein, a clear picture of the effects of the pixel size and rotation step on the results is provided, which can be notably useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.
NASA Astrophysics Data System (ADS)
Biennier, Ludovic; Bourgalais, Jeremy; Benidar, Abdessamad; Le Picard, Sebastien
2016-06-01
Hydrocarbons formed in Titan's cold atmosphere, starting with ethane C2H6, ethylene C2H4, acetylene C2H2, propane C3H8,... up to benzene C6H6, play some role in aerosol production, cloud processes, rain generation and the formation of Titan's lakes. We have started to study in the laboratory the kinetics of the first steps of condensation of these hydrocarbons. Rate coefficients are very sensitive to the description of the potential interaction surfaces of the molecules involved. Combined theoretical and experimental studies at the molecular level of the homogeneous nucleation of various small molecules should greatly improve our fundamental understanding. This knowledge will serve as a model for studying more complex nucleation processes actually taking place in planetary atmospheres. Here we present the first experimental kinetic study of the dimerization of two small hydrocarbons: ethane C2H6 and propane C3H8. We have performed experiments to identify the temperature and partial-density ranges over which small hydrocarbon clusters form in saturated uniform supersonic flows. Using our unique reactor based on Laval nozzle expansions, the kinetics of cluster formation have also been investigated down to 23 K. The chemical species present in the reactor are probed by a time-of-flight mass spectrometer equipped with an electron gun for soft ionization of the neutral reagents and products. This work aims at putting some constraints on the role of small hydrocarbon condensation in the formation of haze particles in the dense atmosphere of Titan.
Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
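The flavour of the method (take a small-step concrete semantics whose state-space is infinite, replace the unbounded part with a finite abstraction, then run the abstract transition relation to a fixpoint) can be illustrated on a much smaller example than the paper's CPS lambda calculus. The toy below abstracts an integer counter to the signs {-, 0, +} and collects all reachable abstract states with a worklist; it only conveys the shape of the construction, not the store-passing refactoring or the Galois-connection machinery developed in the paper.

    # toy program over one integer counter: a concrete state is (pc, n) with n unbounded
    PROG = [("inc",), ("dec",), ("jnz", 0), ("halt",)]

    # abstract transfer functions on the sign domain {-, 0, +}
    INC = {"-": {"-", "0"}, "0": {"+"}, "+": {"+"}}
    DEC = {"+": {"+", "0"}, "0": {"-"}, "-": {"-"}}

    def abs_step(state):
        """Abstract small-step transition: returns the SET of possible successor states."""
        pc, a = state
        op = PROG[pc]
        if op[0] == "halt":
            return set()
        if op[0] == "inc":
            return {(pc + 1, b) for b in INC[a]}
        if op[0] == "dec":
            return {(pc + 1, b) for b in DEC[a]}
        if op[0] == "jnz":                       # jump if the counter is (possibly) nonzero
            return {(op[1], a)} if a != "0" else {(pc + 1, a)}

    def analyze(init=(0, "0")):
        """Worklist fixpoint: the finite abstract state-space guarantees termination,
        and every concrete execution is covered by some abstract path (soundness)."""
        seen, work = {init}, [init]
        while work:
            for nxt in abs_step(work.pop()):
                if nxt not in seen:
                    seen.add(nxt)
                    work.append(nxt)
        return seen

    print(sorted(analyze()))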
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dustin Popp; Zander Mausolff; Sedat Goluoglu
We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components, a rapidly varying amplitude equation and a slowly varying shape equation, and each is solved separately on a different time scale. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so an accurate time-dependent solution is obtained without having to repeatedly perform the expensive Monte Carlo shape calculation. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry, and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
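In the Improved Quasi-Static factorization, the frequently solved amplitude function obeys point-kinetics-like equations whose coefficients (reactivity, effective delayed-neutron fractions, generation time) are updated from the infrequent shape solutions. The fragment below integrates only that amplitude system with a backward Euler step; the numerical values are generic illustrative inputs, not TREAT or TDKENO data.

    import numpy as np

    def amplitude_march(rho, beta_i, lam_i, Lambda, dt, n_steps, n0=1.0):
        """Integrate dn/dt = ((rho - beta)/Lambda) n + sum_i lam_i c_i,
                  dc_i/dt = (beta_i/Lambda) n - lam_i c_i
        with backward Euler, i.e. one small linear solve per (small) amplitude time step."""
        beta_i = np.asarray(beta_i); lam_i = np.asarray(lam_i)
        beta = beta_i.sum()
        m = len(beta_i)
        A = np.zeros((m + 1, m + 1))
        A[0, 0] = (rho - beta) / Lambda
        A[0, 1:] = lam_i
        A[1:, 0] = beta_i / Lambda
        A[1:, 1:] = -np.diag(lam_i)

        y = np.concatenate([[n0], beta_i / (Lambda * lam_i) * n0])   # equilibrium precursor concentrations
        history = [y[0]]
        I = np.eye(m + 1)
        for _ in range(n_steps):
            y = np.linalg.solve(I - dt * A, y)                       # backward Euler step
            history.append(y[0])
        return np.array(history)

    # illustrative six-group delayed-neutron data and a +0.5$ step reactivity insertion
    beta_i = np.array([2.3e-4, 1.5e-3, 1.4e-3, 2.8e-3, 9.6e-4, 1.9e-4])
    lam_i = np.array([0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87])
    n = amplitude_march(rho=0.5 * beta_i.sum(), beta_i=beta_i, lam_i=lam_i,
                        Lambda=5e-5, dt=1e-3, n_steps=2000)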
Real-Time Imaging System for the OpenPET
NASA Astrophysics Data System (ADS)
Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga
2012-02-01
The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real-time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of the list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated the performance in terms of the real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate and a frame rate of 2 frames per second was achieved with average delay time of 2.1 s.
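The count-limiting and intensity-compensation logic described above is conceptually simple: reconstruct each display frame from however many list-mode events the reconstruction can process within the frame period, then rescale the image by the fraction of events actually used. The stub below shows only that bookkeeping, with a placeholder reconstruction function; it is not the OpenPET/DRAMA implementation.

    import numpy as np

    def reconstruct_frame(events, reconstruct, count_budget):
        """events: list-mode events acquired for one display frame.
        reconstruct: callable mapping a list of events to an image (placeholder here).
        count_budget: maximum number of events the reconstruction can process per frame."""
        total = len(events)
        used = min(total, count_budget)
        image = reconstruct(events[:used])           # reconstruct from the limited subset
        if used > 0:
            image = image * (total / used)           # compensate intensity for the discarded events
        return image

    # placeholder reconstruction: a naive histogram backprojection onto a 2D grid
    def naive_reconstruct(events, shape=(64, 64)):
        img = np.zeros(shape)
        for x, y in events:                          # hypothetical event format: pre-binned (x, y) indices
            img[x, y] += 1.0
        return img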
[An EMD based time-frequency distribution and its application in EEG analysis].
Li, Xiaobing; Chu, Meng; Qiu, Tianshuang; Bao, Haiping
2007-10-01
Hilbert-Huang transform (HHT) is a new time-frequency analytic method to analyze the nonlinear and the non-stationary signals. The key step of this method is the empirical mode decomposition (EMD), with which any complicated signal can be decomposed into a finite and small number of intrinsic mode functions (IMF). In this paper, a new EMD based method for suppressing the cross-term of Wigner-Ville distribution (WVD) is developed and is applied to analyze the epileptic EEG signals. The simulation data and analysis results show that the new method suppresses the cross-term of the WVD effectively with an excellent resolution.
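The cross-term suppression rests on a simple identity: computing the WVD of each IMF separately and then summing discards the cross-WVD terms between IMFs while keeping each auto-term. A rough sketch, assuming an EMD implementation such as the third-party PyEMD package is available and using a plain discrete WVD, is given below; it is not the authors' implementation and omits windowing and analytic-signal details.

    import numpy as np
    from PyEMD import EMD          # assumed third-party package providing empirical mode decomposition

    def wvd(x):
        """Plain discrete Wigner-Ville distribution of a (preferably analytic) signal."""
        x = np.asarray(x, dtype=complex)
        n = len(x)
        W = np.zeros((n, n))
        for t in range(n):
            taumax = min(t, n - 1 - t)
            tau = np.arange(-taumax, taumax + 1)
            r = np.zeros(n, dtype=complex)
            r[tau % n] = x[t + tau] * np.conj(x[t - tau])   # instantaneous autocorrelation
            W[:, t] = np.real(np.fft.fft(r))
        return W

    def emd_wvd(signal):
        """Cross-term-reduced distribution: sum of the WVDs of the individual IMFs."""
        imfs = EMD().emd(np.asarray(signal, dtype=float))
        return sum(wvd(imf) for imf in imfs)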
An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.
Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin
2015-08-01
This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system.
Serial femtosecond X-ray diffraction of enveloped virus microcrystals
Lawrence, Robert M.; Conrad, Chelsie E.; Zatsepin, Nadia A.; ...
2015-08-20
Serial femtosecond crystallography (SFX) using X-ray free-electron lasers has produced high-resolution, room temperature, time-resolved protein structures. We report preliminary SFX of Sindbis virus, an enveloped icosahedral RNA virus with ~700 Å diameter. Microcrystals delivered in viscous agarose medium diffracted to ~40 Å resolution. Small-angle diffuse X-ray scattering overlaid Bragg peaks and analysis suggests this results from molecular transforms of individual particles. Viral proteins undergo structural changes during entry and infection, which could, in principle, be studied with SFX. This is a pertinent step toward determining room temperature structures from virus microcrystals that may enable time-resolved studies of enveloped viruses.
Transient state kinetic investigation of ferritin iron release
NASA Astrophysics Data System (ADS)
Ciasca, G.; Papi, M.; Chiarpotto, M.; Rodio, M.; Campi, G.; Rossi, C.; De Sole, P.; Bianconi, A.
2012-02-01
Increased iron concentration in tissues appears to be a factor in the genesis and development of inflammatory and degenerative diseases. By means of real-time small angle x-ray scattering measurements, we studied the kinetics of iron release from the ferritin inorganic core as a function of time and distance from the iron core centre. Accordingly, the iron release process follows a three step model: (i) a defect nucleation in the outer part of the mineral core, (ii) the diffusion of the reducing agent towards the inner part of the core, and (iii) the erosion of the core from the inner to the outer part.
Familial resemblance and shared latent familial variance in recurrent fall risk in older women
Cauley, Jane A.; Roth, Stephen M.; Kammerer, Candace; Stone, Katie; Hillier, Teresa A.; Ensrud, Kristine E.; Hochberg, Marc; Nevitt, Michael C.; Zmuda, Joseph M.
2010-01-01
Background: A possible familial component to fracture risk may be mediated through a genetic liability to fall recurrently. Methods: Our analysis sample included 186 female sibling-ships (n = 401) of mean age 71.9 yr (SD = 5.0). Using variance component models, we estimated residual upper-limit heritabilities in fall-risk mobility phenotypes (e.g., chair-stand time, rapid step-ups, and usual-paced walking speed) and in recurrent falls. We also estimated familial and environmental (unmeasured) correlations between pairs of fall-risk mobility phenotypes. All models were adjusted for age, height, body mass index, and medical and environmental factors. Results: Residual upper-limit heritabilities were all moderate (P < 0.05), ranging from 0.27 for usual-paced walking speed to 0.58 for recurrent falls. A strong familial correlation between usual-paced walking speed and rapid step-ups of 0.65 (P < 0.01) was identified. Familial correlations between usual-paced walking speed and chair-stand time (−0.02) and between chair-stand time and rapid step-ups (−0.27) were both nonsignificant (P > 0.05). Environmental correlations ranged from 0.35 to 0.58 (absolute values), P < 0.05 for all. Conclusions: There exists moderate familial resemblance in fall-risk mobility phenotypes and recurrent falls among older female siblings, which we expect is primarily genetic given that adult siblings live separate lives. All fall-risk mobility phenotypes may be coinfluenced at least to a small degree by shared latent familial or environmental factors; however, up to approximately one-half of the covariation between usual-paced walking speed and rapid step-ups may be due to a common set of genes. PMID:20167680
Cuddy, Monica M; Winward, Marcia L; Johnston, Mary M; Lipner, Rebecca S; Clauser, Brian E
2016-01-01
To add to the small body of validity research addressing whether scores from performance assessments of clinical skills are related to performance in supervised patient settings, the authors examined relationships between United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) data gathering and data interpretation scores and subsequent performance in history taking and physical examination in internal medicine residency training. The sample included 6,306 examinees from 238 internal medicine residency programs who completed Step 2 CS for the first time in 2005 and whose performance ratings from their first year of residency training were available. Hierarchical linear modeling techniques were used to examine the relationships among Step 2 CS data gathering and data interpretation scores and history-taking and physical examination ratings. Step 2 CS data interpretation scores were positively related to both history-taking and physical examination ratings. Step 2 CS data gathering scores were not related to either history-taking or physical examination ratings after other USMLE scores were taken into account. Step 2 CS data interpretation scores provide useful information for predicting subsequent performance in history taking and physical examination in supervised practice and thus provide validity evidence for their intended use as an indication of readiness to enter supervised practice. The results show that there is less evidence to support the usefulness of Step 2 CS data gathering scores. This study provides important information for practitioners interested in Step 2 CS specifically or in performance assessments of medical students' clinical skills more generally.
Huang, Ze-Ning; Huang, Chang-Ming; Zheng, Chao-Hui; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Lu, Jun; Chen, Qi-Yue; Cao, Long-long; Lin, Mi; Tu, Ru-Hong
2016-01-01
Abstract To investigate the learning curve of the application of Huang 3-step maneuver, which was summarized and proposed by our center for the treatment of advanced upper gastric cancer. From April 2012 to March 2013, 130 consecutive patients who underwent a laparoscopic spleen-preserving splenic hilar lymphadenectomy (LSPL) by a single surgeon who performed Huang 3-step maneuver were retrospectively analyzed. The learning curve was analyzed based on the moving average (MA) method and the cumulative sum method (CUSUM). Surgical outcomes, short-term outcomes, and follow-up results before and after learning curve were contrastively analyzed. A stepwise multivariate logistic regression was used for a multivariable analysis to determine the factors that affect the operative time using Huang 3-step maneuver. Based on the CUSUM, the learning curve for Huang 3-step maneuver was divided into phase 1 (cases 1–40) and phase 2 (cases 41–130). The dissection time (DT) (P < 0.001), blood loss (BL) (P < 0.001), and number of vessels injured in phase 2 were significantly less than those in phase 1. There were no significant differences in the clinicopathological characteristics, short-term outcomes, or major postoperative complications between the learning curve phases. Univariate and multivariate analyses revealed that body mass index (BMI), short gastric vessels (SGVs), splenic hilar artery (SpA) type, and learning curve phase were significantly associated with DT. In the entire group, 124 patients were followed for a median time of 23.0 months (range, 3–30 months). There was no significant difference in the survival curve between phases. AUGC patients with a BMI less than 25 kg/m2, a small number of SGVs, and a concentrated type of SpA are ideal candidates for surgeons who are in phase 1 of the learning curve. PMID:27043698
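For readers unfamiliar with the statistic, a CUSUM learning curve of operative (or dissection) time is simply the running sum of each case's deviation from a reference value, plotted against case number; the change of slope marks the break between learning phases. A generic sketch follows, using hypothetical data rather than the study's values or exact formulation.

    import numpy as np

    def cusum_curve(times, reference=None):
        """times: per-case operative or dissection times in chronological order.
        Returns the CUSUM series S_k = sum_{i<=k} (t_i - reference); the case at which the
        curve peaks (slope changes sign) is a natural split between learning phases."""
        times = np.asarray(times, dtype=float)
        if reference is None:
            reference = times.mean()          # a common, simple choice of reference value
        s = np.cumsum(times - reference)
        breakpoint_case = int(np.argmax(s)) + 1
        return s, breakpoint_case

    # hypothetical data: longer dissection times early in the series, shorter later
    s, split = cusum_curve([60, 58, 55, 57, 52, 48, 45, 44, 43, 42])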
Heat Fluxes and Evaporation Measurements by Multi-Function Heat Pulse Probe: a Laboratory Experiment
NASA Astrophysics Data System (ADS)
Sharma, V.; Ciocca, F.; Hopmans, J. W.; Kamai, T.; Lunati, I.; Parlange, M. B.
2012-04-01
Multi-Functional Heat Pulse Probes (MFHPP) are multi-needle probes, developed in recent years, that measure temperature, thermal properties such as thermal diffusivity and volumetric heat capacity (from which soil moisture is directly retrieved), and electrical conductivity (through a Wenner array). They therefore allow the simultaneous measurement of coupled heat, water and solute transport in porous media. The use of a single instrument to estimate different quantities in the same volume and at almost the same time significantly reduces the need to interpolate different measurement types in space and time, increasing the ability to study the interdependencies characterizing the coupled transport processes, especially of water and heat, and of water and solute. A three-step laboratory experiment was carried out at EPFL to investigate the effectiveness and reliability of the MFHPP responses in a loamy soil from Conthey, Switzerland. In the first step, specific calibration curves of volumetric heat capacity and thermal conductivity as functions of known volumetric water content are obtained by placing the MFHPP in small samplers filled with the soil, homogeneously packed at different degrees of saturation. The results are compared with literature values. In the second step, the ability of the MFHPP to measure heat fluxes is tested within a homemade, thermally insulated calibration box, and the results are matched with those from two self-calibrating heat flux plates (from Hukseflux) placed in the same box. In the last step, the MFHPP are used to estimate the cumulative subsurface evaporation inside a small column (30 cm height by 8 cm inner diameter), placed on a scale, filled with the same loamy soil (homogeneously packed and then saturated) and equipped with a vertical array of four MFHPP inserted close to the surface. The subsurface evaporation is calculated from the difference between the net sensible heat and the net heat storage in the volume scanned by the probes, and the values obtained are matched with the overall evaporation estimated from the weight loss recorded by the scale. A numerical model that solves the coupled heat-moisture diffusion equations is used to interpolate the measurements obtained in the second and third steps.
Shanahan, Erin; Irvine, Kathryn M.; Roberts, Dave; Litt, Andrea R.; Legg, Kristin; Daley, Rob; Chambers, Nina
2014-01-01
Whitebark pine (Pinus albicaulis) is a foundation and keystone species in upper subalpine environments of the northern Rocky Mountains that strongly influences the biodiversity and productivity of high-elevation ecosystems (Tomback et al. 2001, Ellison et al. 2005). Throughout its historic range, whitebark pine has decreased significantly as a major component of high-elevation forests. As a result, it is critical to understand the challenges to whitebark pine—not only at the tree and stand level, but also as these factors influence the distribution of whitebark pine across the Greater Yellowstone Ecosystem (GYE). In 2003, the National Park Service (NPS) Greater Yellowstone Inventory & Monitoring Network identified whitebark pine as one of twelve significant natural resource indicators or vital signs to monitor (Jean et al. 2005, Fancy et al. 2009) and initiated a long-term, collaborative monitoring program. Partners in this effort include the U.S. Geological Survey, U.S. Forest Service, and Montana State University, with representatives from each comprising the Greater Yellowstone Whitebark Pine Monitoring Working Group. The objectives of the monitoring program are to (1) assess trends in the proportion of live whitebark pine trees (>1.4 m tall) infected with white pine blister rust (blister rust); (2) document blister rust infection severity by the occurrence and location of persisting and new infections; (3) determine mortality of whitebark pine trees and describe potential factors contributing to the death of trees; and (4) assess the multiple components of the recruitment of understory whitebark pine into the reproductive population. In this report we summarize the past eight years (2004-2011) of whitebark pine status and trend monitoring in the GYE. Our study area encompasses six national forests (NF), two national parks (NP), as well as state and private lands in portions of Wyoming, Montana, and Idaho; this area is collectively described as the GYE here and in other studies. The sampling design is a probabilistic, two-stage cluster design with stands of whitebark pine as the primary units and 10 × 50 m belt transects as the secondary units. Primary sampling units (stands) were selected randomly from a sample frame of approximately 10,770 mapped pure and mixed whitebark pine stands ≥2.0 hectares in the GYE (Dixon 1997, Landenburger 2012). From 2004 through 2007 (monitoring transect establishment or initial time-step), we established 176 permanent belt transects (secondary sampling units=176) in 150 whitebark pine stands and permanently marked approximately 4,740 individual trees >1.4 m tall to monitor long-term changes in blister rust infection and survival rates. Between 2008 and 2011 (revisit time-step), these same 176 transects were surveyed and again all previously tagged trees were observed for changes in blister rust infection and survival status. Objective 1. Using a combined ratio estimator, we estimated the proportion of live trees infected in the GYE in the initial time-step (2004-2007) to be 0.22 (0.031 SE). Following the completion of all surveys in the revisit time-step (2008-2011), we estimated the proportion of live trees infected with white pine blister rust as 0.23 (0.028 SE; Table 2). We detected no significant change in the proportion of trees infected in the GYE between the two time-steps. Objective 2. We documented blister rust canker locations as occurring in the canopy or bole.
We compared changes in canker position between the initial time-step (2004-2007) and the revisit time-step (2008-2011) in order to assess changes in infection severity. This analysis included the 3,795 trees tagged during the initial time-step that were located and documented as alive at the end of the revisit time-step. At the end of the revisit time-step, we found 1,217 trees infected with blister rust. This includes the 287 newly tagged trees in the revisit time step of which 14 had documented infections. Of these 1,217 trees, 780 trees were infected with blister rust in both time steps. Trees with only canopy cankers made up approximately 43% (519 trees) of the total number of trees infected with blister rust at the end of the revisit time-step, while trees with only bole cankers comprised 20% (252 trees), and those with both canopy and bole cankers included 37% (446 trees) of the infected sample. A bole infection is considered to be more consequential than a canopy canker, as it compromises not only the overall longevity of the tree, but its functional capacity for reproductive output as well (Kendall and Arno 1990, Campbell and Antos 2000, McDonald and Hoff 2001, Schwandt and Kegley 2004). In addition to infection location, we also documented infection transition between the canopy and bole. Of the 780 live trees that were infected with blister rust in both time-steps, approximately 31% (242) maintained canopy cankers and 36% (281) retained bole infections at the end of the revisit time-step. Infection transition from canopy to bole occurred in 30% (234) of the revisit time-step trees while 3% (23) transitioned from bole to canopy infections during this period. Objective 3. To determine whitebark pine mortality, we resurveyed all belt transects to reassess the life status of permanently tagged trees >1.4 m tall. We compared the total number of live tagged trees recorded during monitoring transect establishment to the total number of resurveyed dead tagged trees recorded during the revisit time-step and identified all potential mortality-influencing conditions (blister rust, mountain pine beetle, fire and other). By the end of the revisit time-step, we observed a total of 975 dead tagged whitebark pine trees; using a ratio estimator, this represents a loss of approximately 20% (SE=4.35%) of the original live tagged tree population (GYWPMWG 2012). Objective 4. To investigate the proportion of live, reproducing tagged trees, we divided the total number of positively identified cone-bearing trees by the total number of live trees in the tagged tree sample at the end of the revisit time-step. To approximate the average density of recruitment trees per stand, trees ≤1.4 m tall were summed by stand (within the 500 m² transect area) and divided by the total number of stands. Reproducing trees made up approximately 24% (996 trees) of the total live tagged population at the end of the revisit time-step. Differentiating between whitebark pine and limber pine seedlings or saplings is problematic given the absence of cones or cone scars. Therefore, understory summaries as presented in this report may include individuals of both species when they are sympatric in a stand. The average density of small trees ≤1.4 m tall was 53 understory trees per 500 m². Raw counts of these understory individuals ranged from 0-635 small trees per belt transect. In addition, a total of 287 trees were added to the tagged tree population by the end of 2011. 
These newly tagged trees were individuals that, upon subsequent revisits, had grown to >1.4 m tall and were therefore added to the sample. Throughout the past decade in the GYE, monitoring has helped document shifts in whitebark pine forests; whitebark pine stands have been impacted by insect, pathogen, wildland fire, and other disturbance events. Blister rust infection is ubiquitous throughout the ecosystem, and infection proportions are variable across the region. While we have documented mortality of whitebark pine, we have also recorded considerable recruitment. We provide this first step-trend report as a quantifiable baseline for understanding the state of whitebark pine in the GYE. Many aspects of whitebark pine health are highly variable across the range of its distribution in the GYE. Through sustained implementation of the monitoring program, we will continue efforts to document and quantify whitebark pine forest dynamics as they arise under periodic upsurges in insect and pathogen activity, fire episodes, and climatic events in the GYE. Since its inception, this monitoring program remains one of the only sustained, long-term efforts in the GYE whose singular purpose is to track the health and status of this prominent keystone species.
Wiehn, Matthias S; Fürniss, Daniel; Bräse, Stefan
2009-01-01
Three small biaryl compound libraries, featuring a novel fluorinating cleavage strategy for the preparation of a difluoromethyl group, were assembled on solid supports. The average reaction yield per step was up to 96% in a synthetic sequence of five to six steps. Key features were Suzuki coupling reactions, transesterification with potassium cyanide, and amidation with trimethylaluminum on solid supports.
Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F
2010-07-01
Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
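The operator-splitting idea behind this time integration (advance the stiff reactive term as uncoupled pointwise ODEs, then solve one linear system for the diffusive update) can be sketched on a simple 1-D reaction-diffusion problem. The macroelement construction, static condensation, and adaptive substepping described in the abstract are not reproduced; the kinetics, grid, and substep count below are illustrative assumptions.

```python
import numpy as np

def split_step(u, dt, laplacian, diffusion, reaction, n_sub=1):
    """One semi-implicit operator-splitting step for u_t = D*Lap(u) + f(u).

    The stiff reaction term is advanced with several small explicit substeps
    (a stand-in for the paper's adaptive ODE integration); the linear
    diffusion term is then advanced implicitly, requiring one linear solve.
    """
    # 1) reaction substeps (pointwise ODEs, solved uncoupled)
    for _ in range(n_sub):
        u = u + (dt / n_sub) * reaction(u)
    # 2) implicit diffusion: (I - dt*D*L) u_new = u
    n = u.size
    A = np.eye(n) - dt * diffusion * laplacian
    return np.linalg.solve(A, u)

# 1-D example with a cubic, excitable (FitzHugh-Nagumo-like) reaction term.
n, dx, D = 100, 0.1, 1.0
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx ** 2
u = np.zeros(n); u[:10] = 1.0                     # initial stimulus
f = lambda v: 10.0 * v * (1 - v) * (v - 0.1)      # stiff cubic kinetics
for _ in range(200):
    u = split_step(u, dt=0.01, laplacian=L, diffusion=D, reaction=f, n_sub=5)
```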
Next Step for Main Street Credit Availability Act of 2009
Sen. Snowe, Olympia J. [R-ME]
2009-08-06
Senate - 08/06/2009: Read twice and referred to the Committee on Small Business and Entrepreneurship. Status: Introduced.
Framework for Creating a Smart Growth Economic Development Strategy
This step-by-step guide can help small and mid-sized cities, particularly those that have limited population growth, areas of disinvestment, and/or a struggling economy, build a place-based economic development strategy.
CASINO: A Small System Simulator
ERIC Educational Resources Information Center
Christensen, Borge
1978-01-01
This article is a tutorial on writing a simulator--the example used is a casino. The nontechnical, step by step approach is designed to enable even non-programmers to understand the design of such a simulation. (Author)
Sen. Landrieu, Mary L. [D-LA]
2012-05-17
Senate - 05/17/2012: Read twice and referred to the Committee on Small Business and Entrepreneurship. Status: Introduced.
Small Business Management. Going-Into-Business Modules for Adult and/or Post Secondary Instruction.
ERIC Educational Resources Information Center
Rice, Fred; And Others
Fifteen modules on small business management are provided in this curriculum guide developed for postsecondary vocational instructors. Module titles are as follows: decision-making steps; financing a small business; location of a small business; record systems; the balance sheet and profit and loss statement; purchasing; marketing; sales; cash…
Method of Simulating Flow-Through Area of a Pressure Regulator
NASA Technical Reports Server (NTRS)
Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)
2011-01-01
The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
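A minimal sketch of the relaxation update described above: each time step, the flow-through area is nudged toward a projected area by a user-defined rate control parameter. The secant-style projection and all names below are illustrative assumptions; the invention's actual non-linear projection function and the flow-network pressure solver are not reproduced here.

```python
def update_area(a_curr, a_prev, p_curr, p_prev, p_target, rate_k):
    """One relaxation update of the regulator flow-through area.

    The projected area is a simple secant-style extrapolation toward the
    target pressure (an illustrative stand-in for the non-linear projection
    described in the abstract).
    """
    if abs(p_curr - p_prev) < 1e-12:
        a_proj = a_curr                      # flat pressure history: keep the area
    else:
        a_proj = a_prev + (a_curr - a_prev) * (p_target - p_prev) / (p_curr - p_prev)
    # New area = current area + rate-controlled fraction of the correction.
    return a_curr + rate_k * (a_proj - a_curr)

# Iterate until the downstream pressure reaches the target.
# (solve_network_pressure is a hypothetical stand-in for the network solver.)
# a_prev = a_curr = a0
# p_prev = p_curr = solve_network_pressure(a_curr)
# while abs(p_curr - p_target) > tol:
#     a_prev, a_curr = a_curr, update_area(a_curr, a_prev, p_curr, p_prev,
#                                          p_target, rate_k=0.5)
#     p_prev, p_curr = p_curr, solve_network_pressure(a_curr)
```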
Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki
2004-04-01
We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.
Optimal Padding for the Two-Dimensional Fast Fourier Transform
NASA Technical Reports Server (NTRS)
Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.
2011-01-01
One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations), and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times, and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes. This is because various computer architectures process commands differently. The test grid was 512 × 512. Using a 540 × 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 × 256 grid worked best. A Core2Duo computer preferred either a 1040 × 1040 (15 percent faster) or a 1008 × 1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
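A common way to realize the "small prime factors" idea is to pad each dimension up to the next size whose prime factors do not exceed a small bound. The sketch below is that generic heuristic, not the cache-aware, benchmark-driven optimization described above; the factor bound of 7 is an assumption.

```python
import numpy as np

def largest_prime_factor(n):
    """Largest prime factor of n (n >= 1)."""
    p, best = 2, 1
    while p * p <= n:
        while n % p == 0:
            best, n = p, n // p
        p += 1
    return max(best, n)

def next_friendly_size(n, max_factor=7):
    """Smallest size >= n whose prime factors are all <= max_factor."""
    while largest_prime_factor(n) > max_factor:
        n += 1
    return n

# Pad a 2-D grid to "friendly" dimensions before a two-dimensional FFT.
grid = np.random.rand(512, 512)
ny, nx = (next_friendly_size(s) for s in grid.shape)
padded = np.zeros((ny, nx))
padded[:grid.shape[0], :grid.shape[1]] = grid
spectrum = np.fft.fft2(padded)
```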
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
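To see why a fixed daily time step can destabilize an explicit scheme for a completely mixed reach, consider a single CSTR with first-order decay: the classical fourth-order Runge-Kutta update diverges at a daily step when the residence time and decay rate are fast, while the exact solution stays bounded. The parameter values below are illustrative assumptions, not taken from the paper or from SWAT.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Single CSTR with first-order decay: dC/dt = (C_in - C)/tau - k*C
tau, k, c_in = 0.2, 5.0, 10.0          # illustrative: days, 1/day, mg/L
f = lambda t, c: (c_in - c) / tau - k * c

def exact(c0, t):
    a = 1.0 / tau + k
    c_ss = c_in / (tau * a)
    return c_ss + (c0 - c_ss) * np.exp(-a * t)

for dt in (1.0, 0.1, 0.01):            # a daily step vs. smaller sub-steps
    c, t = 0.0, 0.0
    while t < 1.0 - 1e-9:
        c = rk4_step(f, t, c, dt)
        t += dt
    print(f"dt={dt:5.2f}  RK4={c: .4f}  exact={exact(0.0, t): .4f}")
```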
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is its ability to represent the effects of high-frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series expansion of the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free-vibration response of multi-degree-of-freedom models of cantilever beams.
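A minimal sketch of advancing an undamped multi-degree-of-freedom model with the matrix exponential of the first-order state matrix over a finite time increment. It uses SciPy's Pade-based expm rather than the truncated series expansion discussed in the abstract, and the mass and stiffness values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Free vibration of a small multi-DOF model, advanced with the matrix exponential.
# x'' = -M^{-1} K x  ->  first-order form z' = A z, with z = [x, v].
M = np.diag([1.0, 1.0, 1.0])                        # illustrative masses
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]]) * 100.0             # illustrative stiffness
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), np.zeros((n, n))]])

dt = 0.05                                           # finite time increment
Phi = expm(A * dt)                                  # state-transition matrix for dt
z = np.concatenate([np.array([0.0, 0.0, 0.01]), np.zeros(n)])  # initial tip offset
history = [z]
for _ in range(200):
    z = Phi @ z                                     # one step: all frequencies retained
    history.append(z)
```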
Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao
2010-10-01
There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after the process had been stabilized. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
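A minimal sketch of one ingredient of such a phase I analysis: individuals-chart control limits estimated from the average moving range, with out-of-control points iteratively removed until the process is stable. The normalization of operative times and the ARL(0) calculation from the paper's five steps are not reproduced, and the constants and helper names are standard SPC conventions rather than the authors' code.

```python
import numpy as np

def individuals_chart_limits(x):
    """Control limits for an individuals (I-MR) chart.

    Returns (center, lower, upper) using the average moving range and the
    d2 constant for subgroups of size 2.
    """
    x = np.asarray(x, float)
    mr = np.abs(np.diff(x))
    sigma_hat = mr.mean() / 1.128
    center = x.mean()
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

def stabilize(x, max_iter=10):
    """Iteratively drop out-of-control points to obtain a stabilized process."""
    x = np.asarray(x, float)
    for _ in range(max_iter):
        c, lo, hi = individuals_chart_limits(x)
        in_control = (x >= lo) & (x <= hi)
        if in_control.all():
            break
        x = x[in_control]
    return x, (c, lo, hi)

# Illustrative operative times (minutes) for one surgeon, with two outliers.
times = np.array([62, 58, 65, 60, 59, 120, 61, 63, 57, 95, 60, 62], float)
stable_times, limits = stabilize(times)
print(limits)
```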
Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium
NASA Astrophysics Data System (ADS)
González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César
2018-01-01
This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are in the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
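The θ-method family referenced above (explicit for θ = 0, Crank-Nicolson for θ = 1/2, fully implicit for θ = 1) can be illustrated on a plain 1-D diffusion equation. The telegraphic dual-porosity formulation, non-uniform grid, and non-uniform time stepping of the paper are not reproduced; the grid, coefficient, and boundary treatment below are illustrative assumptions.

```python
import numpy as np

def theta_step(u, dt, dx, D, theta):
    """One theta-method step for u_t = D u_xx on a uniform 1-D grid.

    theta = 0   -> explicit Euler
    theta = 1/2 -> Crank-Nicolson
    theta = 1   -> fully implicit
    End values are held fixed (a simple Dirichlet treatment) for clarity.
    """
    n = u.size
    r = D * dt / dx ** 2
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = r, -2 * r, r
    A = np.eye(n) - theta * L
    b = (np.eye(n) + (1 - theta) * L) @ u
    return np.linalg.solve(A, b)

# Compare the three schemes on the same grid and time step.
x = np.linspace(0, 1, 51)
u0 = np.exp(-200 * (x - 0.5) ** 2)
for theta in (0.0, 0.5, 1.0):
    u = u0.copy()
    for _ in range(100):
        u = theta_step(u, dt=2e-4, dx=x[1] - x[0], D=1.0, theta=theta)
```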
Methodology for finding and evaluating safe landing sites on small bodies
NASA Astrophysics Data System (ADS)
Rodgers, Douglas J.; Ernst, Carolyn M.; Barnouin, Olivier S.; Murchie, Scott L.; Chabot, Nancy L.
2016-12-01
Here we develop and demonstrate a three-step strategy for finding a safe landing ellipse for a legged spacecraft on a small body such as an asteroid or planetary satellite. The first step, acquisition of a high-resolution terrain model of a candidate landing region, is simulated using existing statistics on block abundances measured at Phobos, Eros, and Itokawa. The synthetic terrain model is generated by randomly placing hemisphere-shaped blocks with the empirically determined size-frequency distribution. The resulting terrain is much rockier than typical lunar or Martian landing sites. The second step, locating a landing ellipse with minimal hazards, is demonstrated for an assumed approach to landing that uses Autonomous Landing and Hazard Avoidance Technology. The final step, determination of the probability distribution for orientation of the landed spacecraft, is demonstrated for cases of differing regional slope. The strategy described here serves both as a prototype for finding a landing site during a flight mission and as a set of tools for evaluating the design of small-body landers. We show that for bodies with Eros-like block distributions, there may be >99% probability of landing stably at a low tilt without blocks impinging on spacecraft structures so as to pose a survival hazard.
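The first step, building a synthetic terrain from an empirical block size-frequency distribution, can be sketched by inverse-transform sampling of a truncated power law and scattering the blocks over a region. The slope, diameter range, block count, and region size below are illustrative assumptions, not the Phobos, Eros, or Itokawa statistics used in the paper.

```python
import numpy as np

def sample_block_diameters(n_blocks, d_min, d_max, slope, rng):
    """Draw block diameters from a truncated power-law size-frequency
    distribution, with cumulative count N(>D) proportional to D**(-slope)."""
    u = rng.random(n_blocks)
    a, b = d_min ** (-slope), d_max ** (-slope)
    return (a - u * (a - b)) ** (-1.0 / slope)

rng = np.random.default_rng(42)
region = 100.0                                  # square region side, metres (assumed)
diam = sample_block_diameters(500, 0.2, 5.0, slope=2.5, rng=rng)
x, y = rng.random(2 * 500).reshape(2, 500) * region
# Each (x, y, diam/2) triple defines one hemispherical block of the terrain model.
```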
Supersonic burning in separated flow regions
NASA Technical Reports Server (NTRS)
Zumwalt, G. W.
1982-01-01
The trough vortex phenomenon is used for combustion of hydrogen in a supersonic air stream. This was done at small sizes suitable for igniters in supersonic combustion ramjets, so long as the boundary layer displacement thickness is less than 25% of the trough step height. A simple electric spark, properly positioned, ignites the hydrogen in the trough corner. The resulting flame is self-sustaining and reignitable. Hydrogen can be injected at the base wall or immediately upstream of the trough. The hydrogen is introduced at low velocity to permit it to be drawn into the corner vortex system and thus experience a long residence time in the combustion region. The igniters can be placed on a skewed back step at angles of at least up to 30 deg without significantly affecting igniter performance. Certain metals (platinum, copper) act catalytically to improve ignition.
Abdulrazzaq, Bilal I.; Ibrahim, Omar J.; Kawahito, Shoji; Sidek, Roslina M.; Shafie, Suhaidi; Yunus, Nurul Amziah Md.; Lee, Lini; Halin, Izhal Abdul
2016-01-01
A Delay-Locked Loop (DLL) with a modified charge pump circuit is proposed for generating high-resolution linear delay steps with sub-picosecond jitter performance and adjustable delay range. The small-signal model of the modified charge pump circuit is analyzed to bring forth the relationship between the DLL’s internal control voltage and output time delay. Circuit post-layout simulation shows that a 0.97 ps delay step within a 69 ps delay range with 0.26 ps Root-Mean Square (RMS) jitter performance is achievable using a standard 0.13 µm Complementary Metal-Oxide Semiconductor (CMOS) process. The post-layout simulation results show that the power consumption of the proposed DLL architecture’s circuit is 0.1 mW when the DLL is operated at 2 GHz. PMID:27690040
Conveying population education through games.
1987-01-01
Games are extremely useful for conveying population education messages because they are entertaining, because they involve the players in the learning situation, and because, by compressing space and time, they enable the players to perceive the effects of future events on their own lives. One teaching game, called "Futures Wheel," enables the players to move step by step from an abstract real-world situation to its impact on their own lives. Another game, called "Card Game on Family Welfare," is played by 4 players using cards illustrating such things as preparation for marriage, planned families, small families, responsible parenthood, and women's roles. 24 of the 25 cards dealt to the players have matching pictures on a base sheet. The player who loses, i.e., cannot find a match for his last card, is holding the card which displays an unhappy big family.
NASA Astrophysics Data System (ADS)
Galiatsatos, P. G.; Tennyson, J.
2012-11-01
The most time-consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner-region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared-memory machines (SMM), distributed-memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and only a small part of the matrix spectrum is needed to obtain converged results.
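The core idea (a very sparse real symmetric matrix whose lowest eigenpairs are extracted iteratively) can be illustrated with ARPACK through SciPy's serial eigsh interface. The MPI/OpenMP parallelization, PARPACK, and the custom coordinate storage format of the paper are not shown, and the random matrix below is only a stand-in for the inner-region Hamiltonian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Random, very sparse, real symmetric test matrix (well over 98% zeros).
n = 5000
rng = np.random.default_rng(1)
off = sp.random(n, n, density=0.002, random_state=1)
H = sp.diags(rng.normal(size=n)) + off + off.T      # symmetric by construction

# ARPACK (wrapped by eigsh) returns only the small part of the spectrum that
# is actually needed, here the 10 lowest eigenvalues, without densifying H.
vals, vecs = eigsh(H.tocsr(), k=10, which='SA')
print(vals)
```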
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID]; Kotter, Dale Kent [Shelley, ID]; Rohrbaugh, David Thomas [Idaho Falls, ID]; Spencer, David Frazer [Idaho Falls, ID]
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Molecular simulation of small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2012-11-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3 to 10^-4 have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they allow a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
Fast parallel algorithms that compute transitive closure of a fuzzy relation
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.
1993-01-01
The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a ≠ b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in fewer than O(n^2) steps. So, Dunn's algorithm is in this sense optimal. For small n, this is acceptable. However, for large n (e.g., for large databases), it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, the transitive closure can be computed in time O((log_2 n)^2).
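For reference, the straightforward sequential computation of the max-min transitive closure (by repeated max-min composition until a fixed point) looks as follows. This is the baseline approach, not Dunn's O(n^2) algorithm or the parallel O((log_2 n)^2) scheme of the paper, and the small example relation is made up.

```python
import numpy as np

def maxmin_compose(a, b):
    """Max-min composition of two fuzzy relations given as matrices:
    (a o b)[i, j] = max_k min(a[i, k], b[k, j]).
    Builds an n x n x n intermediate, so it is only meant for small n."""
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

def transitive_closure(r):
    """Max-min transitive closure: iterate R <- max(R, R o R) to a fixed point."""
    r = np.asarray(r, float)
    while True:
        r2 = np.maximum(r, maxmin_compose(r, r))
        if np.array_equal(r2, r):
            return r
        r = r2

s = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
print(transitive_closure(s))
```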
One-step selection of Vaccinia virus-binding DNA aptamers by MonoLEX
Nitsche, Andreas; Kurth, Andreas; Dunkhorst, Anna; Pänke, Oliver; Sielaff, Hendrik; Junge, Wolfgang; Muth, Doreen; Scheller, Frieder; Stöcklein, Walter; Dahmen, Claudia; Pauli, Georg; Kage, Andreas
2007-01-01
Background As a new class of therapeutic and diagnostic reagents, more than fifteen years ago RNA and DNA aptamers were identified as binding molecules to numerous small compounds, proteins and rarely even to complete pathogen particles. Most aptamers were isolated from complex libraries of synthetic nucleic acids by a process termed SELEX based on several selection and amplification steps. Here we report the application of a new one-step selection method (MonoLEX) to acquire high-affinity DNA aptamers binding Vaccinia virus used as a model organism for complex target structures. Results The selection against complete Vaccinia virus particles resulted in a 64-base DNA aptamer specifically binding to orthopoxviruses as validated by dot blot analysis, Surface Plasmon Resonance, Fluorescence Correlation Spectroscopy and real-time PCR, following an aptamer blotting assay. The same oligonucleotide showed the ability to inhibit in vitro infection of Vaccinia virus and other orthopoxviruses in a concentration-dependent manner. Conclusion The MonoLEX method is a straightforward procedure as demonstrated here for the identification of a high-affinity DNA aptamer binding Vaccinia virus. MonoLEX comprises a single affinity chromatography step, followed by subsequent physical segmentation of the affinity resin and a single final PCR amplification step of bound aptamers. Therefore, this procedure improves the selection of high affinity aptamers by reducing the competition between aptamers of different affinities during the PCR step, indicating an advantage for the single-round MonoLEX method. PMID:17697378
Ouertani, Rachid; Hamdi, Abderrahmen; Amri, Chohdi; Khalifa, Marouan; Ezzaouia, Hatem
2014-01-01
In this work, we use a two-step metal-assisted chemical etching method to produce films of silicon nanowires shaped in micrograins from metallurgical-grade polycrystalline silicon powder. The first step is an electroless plating process in which the powder was dipped for a few minutes in an aqueous solution of silver nitrate and hydrofluoric acid to permit Ag plating of the Si micrograins. During the second step, corresponding to silicon dissolution, we add a small quantity of hydrogen peroxide to the plating solution and leave the samples to etch for three different durations (30, 60, and 90 min). We attempt to elucidate the mechanisms leading to the formation of the silver clusters and silicon nanowires obtained at the end of the silver plating step and the silver-assisted silicon dissolution step, respectively. Scanning electron microscopy (SEM) micrographs revealed that the processed Si micrograins were covered with densely packed films of self-organized silicon nanowires. Some of these nanowires stand vertically, and others tilt toward the silicon micrograin facets. The thickness of the nanowire films increases from 0.2 to 10 μm with increasing etching time. Based on SEM characterizations, laser scattering estimations, X-ray diffraction (XRD) patterns, and Raman spectroscopy, we present a correlative study dealing with the effect of the silver-assisted etching process on the morphological and structural properties of the processed silicon nanowire films. PMID:25349554
STEPS at CSUN: Increasing Retention of Engineering and Physical Science Majors
NASA Astrophysics Data System (ADS)
Pedone, V. A.; Cadavid, A. C.; Horn, W.
2012-12-01
STEPS at CSUN seeks to increase the retention rate of first-time freshmen in engineering, math, and physical science (STEM) majors from ~55% to 65%. About 40% of STEM first-time freshmen start in College Algebra because they do not take or do not pass the Mathematics Placement Test (MPT). This lengthens time to graduation, which contributes to dissatisfaction with the major. STEPS at CSUN has made substantial changes to the administration of the MPT. Initial data show increases in the number of students who take the test and who place out of College Algebra, as well as increases in overall scores. STEPS at CSUN also funded the development of supplemental labs for Trigonometry and Calculus I and II, in partnership with similar labs created by the Math Department for College Algebra and Precalculus. These labs are open to all students, but are mandatory for at-risk students who have low scores on the MPT, low grades in the prerequisite course, or who failed the class the first time. Initial results are promising. Comparison of the grades of 46 Fall 2010 "at-risk" students without the lab to those of 36 Fall 2011 students who enrolled in the supplementary lab shows that D-F grades decreased by 10% and A-B grades increased by 27%. A final retention strategy is aimed at students in the early stages of their majors. At CSUN the greatest loss of STEM majors occurs between sophomore-level and junior-level coursework because course difficulty increases and aspirations to potential careers weaken. The Summer Interdisciplinary Team Experience (SITE) is an intensive 3-week-long summer program that engages small teams of students from diverse STEM majors in faculty-mentored, team-based problem solving. This experience simulates professional work and creates strong bonds between students and between students and faculty mentors. The first two cohorts of students who have participated in SITE indicate that this experience has positively impacted their motivation to complete their STEM degree.
Numerical simulation of the kinetic effects in the solar wind
NASA Astrophysics Data System (ADS)
Sokolov, I.; Toth, G.; Gombosi, T. I.
2017-12-01
Global numerical simulations of the solar wind are usually based on the ideal or resistive magnetohydrodynamics (MHD) equations. Within the framework of MHD, the electric field is assumed to vanish in the co-moving frame of reference (ideal MHD) or to obey a simple and non-physical scalar Ohm's law (resistive MHD). Maxwellian distribution functions are assumed; the electron and ion temperatures may differ. Non-dispersive MHD waves can be present in this numerical model. The averaged equations for MHD turbulence may be included, as well as the energy and momentum exchange between the turbulent and regular motion. With an explicit numerical scheme, the time step is controlled by the MHD wave propagation time across the numerical cell (the CFL condition). A more refined approach includes the Hall effect via the generalized Ohm's law. The Lorentz force acting on the light electrons is assumed to vanish, which gives an expression for the local electric field in terms of the total electric current, the ion current, the electron pressure gradient, and the magnetic field. The waves (whistlers, ion-cyclotron waves, etc.) acquire dispersion, and the short-wavelength perturbations propagate with elevated speed, thus tightening the CFL condition. If the grid size is small enough to resolve the ion skin-depth scale, then the time step is much shorter than the ion gyration period. The next natural step is to use a hybrid code to resolve ion kinetic effects. The hybrid numerical scheme employs the same generalized Ohm's law as Hall MHD and suffers from the same constraint on the time step while solving the evolution of the electromagnetic field. The important distinction, however, is that by solving the particle motion for ions we can achieve a more detailed description of the kinetic effects without a significant degradation in computational efficiency, because the time step is sufficient to resolve the particle gyration. We present the first numerical results from the coupled BATS-R-US+ALTOR code as applied to kinetic simulations of the solar wind.
Global phenomena from local rules: Peer-to-peer networks and crystal steps
NASA Astrophysics Data System (ADS)
Finkbiner, Amy
Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real-world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts, without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term for the motion that does not appear in continuum models, and we determine an explicit dependence on step number.
NASA Astrophysics Data System (ADS)
Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group
2003-04-01
This contribution reports preliminary results of the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured within the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the individual processing steps, and the program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of the starting velocity field, for which the calculated arrival times are modelled by the finite-difference method. The next step is the minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalency problem was reduced by including a priori information in the starting velocity field. The a priori information consists of the depth to the pre-Tertiary basement, an estimate of the overlying sedimentary velocity from well-logging and/or other seismic velocity data, etc. After checking the reciprocal times, the picks were corrected. The final result of this processing is a reliable set of travel-time curves consistent with the reciprocal times. Picking of the travel-time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out using the PROMAX program system. The tomographic inversion was carried out with a so-called 3D/2D procedure that takes 3D wave propagation into account: a corridor along the profile, containing the outlying shot points and geophone points, was defined, and 3D processing was carried out within this corridor. The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
Validation of thigh-based accelerometer estimates of postural allocation in 5-12 year-olds.
van Loo, Christiana M T; Okely, Anthony D; Batterham, Marijka J; Hinkley, Trina; Ekelund, Ulf; Brage, Søren; Reilly, John J; Jones, Rachel A; Janssen, Xanne; Cliff, Dylan P
2017-03-01
To validate activPAL3™ (AP3) for classifying postural allocation, estimating time spent in postures and examining the number of breaks in sedentary behaviour (SB) in 5-12 year-olds. Laboratory-based validation study. Fifty-seven children completed 15 sedentary, light- and moderate-to-vigorous intensity activities. Direct observation (DO) was used as the criterion measure. The accuracy of AP3 was examined using a confusion matrix, equivalence testing, Bland-Altman procedures and a paired t-test for 5-8y and 9-12y. Sensitivity of AP3 was 86.8%, 82.5% and 85.3% for sitting/lying, standing, and stepping, respectively, in 5-8y and 95.3%, 81.5% and 85.1%, respectively, in 9-12y. Time estimates of AP3 were equivalent to DO for sitting/lying in 9-12y and stepping in all ages, but not for sitting/lying in 5-12y and standing in all ages. Underestimation of sitting/lying time was smaller in 9-12y (1.4%, limits of agreement [LoA]: -13.8 to 11.1%) compared to 5-8y (12.6%, LoA: -39.8 to 14.7%). Underestimation for stepping time was small (5-8y: 6.5%, LoA: -18.3 to 5.3%; 9-12y: 7.6%, LoA: -16.8 to 1.6%). Considerable overestimation was found for standing (5-8y: 36.8%, LoA: -16.3 to 89.8%; 9-12y: 19.3%, LoA: -1.6 to 36.9%). SB breaks were significantly overestimated (5-8y: 53.2%, 9-12y: 28.3%, p<0.001). AP3 showed acceptable accuracy for classifying postures, however estimates of time spent standing were consistently overestimated and individual error was considerable. Estimates of sitting/lying were more accurate for 9-12y. Stepping time was accurately estimated for all ages. SB breaks were significantly overestimated, although the absolute difference was larger in 5-8y. Surveillance applications of AP3 would be acceptable, however, individual level applications might be less accurate. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Sensitivity of the High-Resolution WAM Model with Respect to Time Step
NASA Astrophysics Data System (ADS)
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) are a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with characteristic horizontal scales of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has cliffs up to 50 m high that are frequently covered by tall forests. The area also contains numerous banks with water depths of only a couple of metres that may essentially modify wave properties nearby owing to topographical effects. These features suggest that a high-resolution wave model should be applied for the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the RMS difference of the significant wave heights calculated with the different time steps did not exceed 10 cm and typically was of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean significant wave height over the whole Baltic Sea was 2.4 m (1-minute time step) and 2.04 m (3-minute time step), respectively. The most probable reason for this difference is that the WAM model with a relatively large time step poorly describes wave-field evolution in the Aland area, with its extremely ragged bottom topography and coastline. Earlier studies have reported that the WAM model frequently underestimates wave heights in the northern Baltic Proper by 20-30% in the case of strong northerly storms (Tuomi et al., Report Series of the Finnish Institute of Marine Research, 1999). The results described here suggest that part of this underestimation may be removed through a proper choice of the time step.
NASA Astrophysics Data System (ADS)
Nichols, Leannah M.
Manufacturing a six-inch-diameter ingot of commercially pure titanium, which can then be shipped to be melted and shaped into other useful components, can take up to six months. The applications of this corrosion-resistant, lightweight, strong metal are endless, yet so is the manufacturing processing time. At a cost of around $80 per pound for certain grades of titanium powder, the everyday consumer cannot afford to use titanium in the many ways it is beneficial, simply because the number of processing steps needed to manufacture it consumes too much time, energy, and labor. In this research, the number of steps from raw powder to final part is proposed to be reduced from 4-8 to only 2, using a new technology that may even improve the properties of the titanium while reducing the number of manufacturing steps. The two-step procedure involves selecting a cylindrical or rectangular die and punch to compress a small amount of commercially pure titanium into a compact strong enough for transport to the friction stir welder to be consolidated. Friction stir welding, invented in 1991 in the United Kingdom, uses a tool similar to a drill bit that approaches a sample and gradually plunges into the material at a rotation rate between 100 and 2,100 RPM. In the second step, the friction stir welder is used to process the titanium powder, held in a tight holder, and consolidate it into a harder titanium form. The resulting samples are cut to expose the cross section and then ground, polished, and cleaned to be observed and tested using scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and a Vickers microhardness tester. The results showed that the thicker the sample, the harder the resulting consolidated material, peaking at 2 to 3 times the hardness of the original solid commercially pure titanium, with a peak value of 435.9 and an overall average of 251.13 on the Vickers scale. The combined SEM and EDS results showed that mixing of the sample-holder material, the titanium, and the tool material was minimal, which demonstrates the feasibility of this approach. This study should be continued to lessen the labor, energy, and cost of titanium production and thereby allow titanium to be used more efficiently in many applications across many industries.
An L-stable method for solving stiff hydrodynamics
NASA Astrophysics Data System (ADS)
Li, Shengtai
2017-07-01
We develop a new method for simulating the coupled dynamics of gas and multi-species dust grains. The dust grains are treated as pressureless fluids and their coupling with the gas is through stiff drag terms. If an explicit method is used, the numerical time step is limited by the stopping time of the dust particles, which can become extremely small for small grains. The previous semi-implicit method [1] uses the second-order trapezoidal rule (TR) for the stiff drag terms and works only for moderately small dust grain sizes. This is because the TR method is only A-stable, not L-stable. In this work, we use the TR-BDF2 method [2] for the stiff terms in the coupled hydrodynamic equations. The L-stability of TR-BDF2 proves essential in treating a number of dust species. The combination of the TR-BDF2 method with an explicit discretization of the other hydrodynamic terms can solve a wide variety of stiff hydrodynamics equations accurately and efficiently. We have implemented our method in our LA-COMPASS (Los Alamos Computational Astrophysics Suite) package. We have applied the code to simulate dusty proto-planetary disks and obtained very good agreement with astronomical observations.
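A minimal sketch of the scheme named above, applied to a single stiff linear drag term with the gas velocity held fixed within the step, is given below. It illustrates the L-stable behaviour of TR-BDF2 (no oscillation even when the time step greatly exceeds the stopping time); it is not the LA-COMPASS implementation, and all parameter values are assumptions.

```python
# Hedged sketch of one TR-BDF2 step for dv/dt = -(v - u)/t_stop (u fixed per step).
import numpy as np

GAMMA = 2.0 - np.sqrt(2.0)  # standard TR-BDF2 sub-step fraction

def trbdf2_drag_step(v, u, t_stop, dt):
    lam = -1.0 / t_stop                      # stiff eigenvalue of the drag term
    f = lambda y: lam * (y - u)
    # Stage 1: trapezoidal rule over gamma*dt (implicit; closed form for a linear term).
    a = 0.5 * GAMMA * dt
    v_g = (v + a * (f(v) - lam * u)) / (1.0 - a * lam)
    # Stage 2: BDF2 using v and v_g (again closed form for a linear problem).
    c1 = 1.0 / (GAMMA * (2.0 - GAMMA))
    c2 = (1.0 - GAMMA) ** 2 / (GAMMA * (2.0 - GAMMA))
    c3 = (1.0 - GAMMA) / (2.0 - GAMMA)
    rhs = c1 * v_g - c2 * v - c3 * dt * lam * u
    return rhs / (1.0 - c3 * dt * lam)

# Even with dt >> t_stop the dust velocity relaxes to the gas velocity u without
# oscillation, which is the L-stability property the abstract relies on:
v = 10.0
for _ in range(5):
    v = trbdf2_drag_step(v, u=1.0, t_stop=1e-4, dt=1.0)
print(v)  # ~1.0
```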
NASA Astrophysics Data System (ADS)
Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja
2016-04-01
Distributed hydrological models are used to best advantage when their outputs are compared not only to the outlet discharge but also to internal observed variables, so that they can be used as powerful hypothesis-testing tools. In this paper, the value of distributed sensor networks for evaluating a distributed model and its underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA model (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture (9 locations) data are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years, from January 1st 2007 to December 31st 2010, with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints on the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows it to be compared with other studies found in the literature. In the next steps (steps 3 to 6), the focus was placed on more specific hydrological processes. In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied the correlation between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see if the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model represents the soil water storage dynamics and stream intermittency satisfactorily. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement, Institut National Polytechnique de Grenoble, 269 pp (in French).
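Step 2 of the protocol above relies on "classical performance criteria" for the outlet discharge. One common choice is the Nash-Sutcliffe efficiency, sketched below with invented discharge values; the paper does not specify that this exact criterion was used, so treat it as an illustrative assumption.

```python
# Hedged example of one classical discharge performance criterion (Nash-Sutcliffe).
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Toy hourly discharge series (m^3/s), purely illustrative:
q_obs = [0.10, 0.12, 0.45, 0.80, 0.60, 0.30, 0.20]
q_sim = [0.11, 0.15, 0.40, 0.70, 0.65, 0.35, 0.22]
print(round(nash_sutcliffe(q_obs, q_sim), 3))  # 1.0 would be a perfect fit
```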
Parsons, Sean P; Huizinga, Jan D
2018-06-03
What is the central question of this study? What is the nature of slow-wave-driven contraction frequency gradients in the small intestine? What is the main finding and its importance? Frequency plateaus are composed of discrete waves of increased interval, each wave associated with a contraction dislocation. Smooth frequency gradients are generated by localised neural modulation of wave frequency, leading to functionally important wave turbulence. Both patterns are emergent properties of a network of coupled oscillators, the interstitial cells of Cajal. A gut-wide network of interstitial cells of Cajal (ICC) generates electrical oscillations (slow waves) that orchestrate waves of muscle contraction. In the small intestine there is a gradient in slow wave frequency from high at the duodenum to low at the terminal ileum. Time-averaged measurements of frequency have suggested either a smooth or stepped (plateaued) gradient. We measured individual contraction intervals from diameter maps of the mouse small intestine to create interval maps (IMaps). IMaps showed that each frequency plateau was composed of discrete waves of increased interval. Each interval wave originated at a terminating contraction wave, a "dislocation", at the plateau's proximal boundary. In a model chain of coupled phase oscillators, interval wave frequency increased as coupling decreased or as the natural frequency gradient or noise increased. Injuring the intestine at a proximal point to destroy coupling suppressed distal steps, which then reappeared with gap-junction block by carbenoxolone. This lent further support to our previous hypothesis that lines of dislocations were fixed by points of low coupling strength. Dislocations induced by electrical field pulses in the intestine, and by an equivalent phase shift in the model, were associated with interval waves. When the enteric nervous system was active, IMaps showed a chaotic, turbulent pattern of interval change with no frequency steps or plateaus. This probably resulted from local, stochastic release of neurotransmitters. Plateaus, dislocations, interval waves and wave turbulence arise from a dynamic interplay between natural frequency and coupling in the ICC network. This article is protected by copyright. All rights reserved.
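The "model chain of coupled phase oscillators" invoked above can be illustrated with a minimal sketch: a one-dimensional chain with a proximal-to-distal natural frequency gradient and nearest-neighbour sinusoidal coupling. All parameter values are assumptions chosen for illustration, not values fitted to ICC data.

```python
# Hedged sketch of a 1-D chain of coupled phase oscillators with a frequency gradient.
import numpy as np

n, dt, t_end = 100, 0.01, 300.0
omega = np.linspace(2.0, 1.0, n) * 2 * np.pi   # natural frequencies, high -> low
K = 4.0                                        # coupling strength (rad/s)
theta = 2 * np.pi * np.random.rand(n)

steps = int(t_end / dt)
unwrapped = np.zeros(n)
for _ in range(steps):
    coupling = np.zeros(n)
    coupling[:-1] += K * np.sin(theta[1:] - theta[:-1])   # pull from distal neighbour
    coupling[1:]  += K * np.sin(theta[:-1] - theta[1:])   # pull from proximal neighbour
    dtheta = (omega + coupling) * dt                      # explicit Euler step
    theta += dtheta
    unwrapped += dtheta

# Long-time average frequency along the chain: with strong enough coupling this
# shows plateaus (groups locked to a common frequency) separated by steps.
mean_freq = unwrapped / t_end / (2 * np.pi)
print(np.round(mean_freq[::10], 3))
```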
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... Committee on Small and Minority Business (ITAC-11) AGENCY: Office of the United States Trade Representative... Fees. --Access to and use of U.S. Small Business Administration State Trade and Export Promotion (STEP) Grants by San Diego-area business. --Congressional perspective on trade barriers for small and minority...
On the large eddy simulation of turbulent flows in complex geometry
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1993-01-01
Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
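The non-commutation issue identified above can be stated compactly. The following is a generic sketch of the definitions only, not the paper's full analysis of the commutation error.

```latex
% Filtering with a position-dependent width \delta(x) and the resulting
% commutation error (generic form):
\[
  \bar{u}(x) \;=\; \frac{1}{\delta(x)} \int G\!\left(\frac{x-\xi}{\delta(x)}\right) u(\xi)\, d\xi ,
  \qquad
  \mathcal{C}[u] \;\equiv\; \frac{\partial \bar{u}}{\partial x}
   \;-\; \overline{\frac{\partial u}{\partial x}} \;\neq\; 0
   \quad \text{unless } \frac{d\delta}{dx} = 0 .
\]
% The extra term C[u] appears when the variable-width filter is applied to the
% Navier-Stokes equations, and vanishes only for a uniform filter width.
```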
Molecular dynamics simulations in hybrid particle-continuum schemes: Pitfalls and caveats
NASA Astrophysics Data System (ADS)
Stalter, S.; Yelash, L.; Emamy, N.; Statt, A.; Hanke, M.; Lukáčová-Medvid'ová, M.; Virnau, P.
2018-03-01
Heterogeneous multiscale methods (HMM) combine molecular accuracy of particle-based simulations with the computational efficiency of continuum descriptions to model flow in soft matter liquids. In these schemes, molecular simulations typically pose a computational bottleneck, which we investigate in detail in this study. We find that it is preferable to simulate many small systems as opposed to a few large systems, and that a choice of a simple isokinetic thermostat is typically sufficient while thermostats such as Lowe-Andersen allow for simulations at elevated viscosity. We discuss suitable choices for time steps and finite-size effects which arise in the limit of very small simulation boxes. We also argue that if colloidal systems are considered as opposed to atomistic systems, the gap between microscopic and macroscopic simulations regarding time and length scales is significantly smaller. We propose a novel reduced-order technique for the coupling to the macroscopic solver, which allows us to approximate a non-linear stress-strain relation efficiently and thus further reduce computational effort of microscopic simulations.
Espie, Colin A
2009-12-01
There is a large body of evidence that Cognitive Behavioral Therapy for insomnia (CBT) is an effective treatment for persistent insomnia. However, despite two decades of research it is still not readily available, and there are no immediate signs that this situation is about to change. This paper proposes that a service delivery model, based on "stepped care" principles, would enable this relatively scarce healthcare expertise to be applied in a cost-effective way to achieve optimal development of CBT services and best clinical care. The research evidence on methods of delivering CBT, and the associated clinical leadership roles, is reviewed. On this basis, self-administered CBT is posited as the "entry level" treatment for stepped care, with manualized, small group, CBT delivered by nurses, at the next level. Overall, a hierarchy comprising five levels of CBT stepped care is suggested. Allocation to a particular level should reflect assessed need, which in turn represents increased resource requirement in terms of time, cost and expertise. Stepped care models must also be capable of "referring" people upstream where there is an incomplete therapeutic response to a lower level intervention. Ultimately, the challenge is for CBT to be delivered competently and effectively in diversified formats on a whole population basis. That is, it needs to become "scalable". This will require a robust approach to clinical governance.
NASA Astrophysics Data System (ADS)
Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu
2003-01-01
This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara
2000-01-01
We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.
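Step (2) refers to a continuous-time analog recurrent neural network for the cluster mean time-courses. One commonly used gene-circuit form of such a network is sketched below; the symbols are generic and the exact parameterisation used in the work may differ.

```latex
% Hedged sketch of a continuous-time analog recurrent ("gene circuit") rate equation
% for cluster mean expression levels v_i(t):
\[
  \tau_i \frac{d v_i}{d t} \;=\;
    g\!\left( \sum_j T_{ij}\, v_j + h_i \right) \;-\; \lambda_i\, v_i ,
\]
% where T_{ij} is the cluster-level connection matrix fitted by simulated annealing
% with weight decay, h_i a bias or external input, g a sigmoidal activation,
% and \lambda_i a decay rate.
```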
A high-fidelity weather time series generator using the Markov Chain process on a piecewise level
NASA Astrophysics Data System (ADS)
Hersvik, K.; Endrerud, O.-E. V.
2017-12-01
A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
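A minimal sketch of the segment-joining idea described above is given below, assuming the weather record has already been discretised into states (for example, workable versus non-workable sea states). Function names, segment-length bounds, and the toy record are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: join random-length pieces of the observed series, with the Markov
# step conditioning each new segment's start state on the previous segment's end state.
import random

def generate_series(observed_states, n_out, min_len=6, max_len=48, seed=0):
    rng = random.Random(seed)
    # Index observed positions by the state found there.
    starts_by_state = {}
    for i, s in enumerate(observed_states):
        starts_by_state.setdefault(s, []).append(i)

    out = []
    pos = rng.randrange(len(observed_states))
    while len(out) < n_out:
        seg_len = rng.randint(min_len, max_len)
        segment = observed_states[pos:pos + seg_len]
        if not segment:
            pos = rng.randrange(len(observed_states))
            continue
        out.extend(segment)
        # Markov step: jump to a random position in the record whose state matches
        # the state we just ended in, preserving state-to-state statistics.
        pos = rng.choice(starts_by_state[segment[-1]])
    return out[:n_out]

# Toy record: 0 = workable weather window, 1 = too rough for offshore work.
observed = [0] * 30 + [1] * 10 + [0] * 20 + [1] * 25 + [0] * 15
print(generate_series(observed, n_out=60))
```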
Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert
2016-01-01
State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those accustomed to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
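The movement process underlying the SSM described above, a random walk weighted by an Ornstein-Uhlenbeck process, can be sketched with a simple Euler-Maruyama simulation. The parameter values below are assumptions for illustration, not the fitted estimates from the study.

```python
# Hedged sketch: OU-weighted random walk that keeps a simulated track inside a
# home range centred on mu (2-D Euler-Maruyama integration).
import numpy as np

def simulate_ou_track(n_steps, dt=60.0, mu=(0.0, 0.0), beta=1e-3, sigma=0.05, seed=1):
    """Integrate dX = -beta*(X - mu)*dt + sigma*dW in two dimensions."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    x = np.zeros((n_steps, 2))
    x[0] = mu
    for t in range(1, n_steps):
        drift = -beta * (x[t - 1] - mu) * dt
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal(2)
        x[t] = x[t - 1] + drift + diffusion
    return x

track = simulate_ou_track(n_steps=1440)   # e.g. one day of 1-min time steps
print(track.std(axis=0))                  # bounded spread ~ home-range scale
```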
Computational Aerodynamic Modeling of Small Quadcopter Vehicles
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.
2017-01-01
High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.
Quantum Simulation of Tunneling in Small Systems
Sornborger, Andrew T.
2012-01-01
A number of quantum algorithms have been performed on small quantum computers; these include Shor's prime factorization algorithm, error correction, Grover's search algorithm and a number of analog and digital quantum simulations. Because of the number of gates and qubits necessary, however, digital quantum particle simulations remain untested. A contributing factor to the system size required is the number of ancillary qubits needed to implement matrix exponentials of the potential operator. Here, we show that a set of tunneling problems may be investigated with no ancillary qubits and a cost of one single-qubit operator per time step for the potential evolution, eliminating at least half of the quantum gates required for the algorithm and more than that in the general case. Such simulations are within reach of current quantum computer architectures. PMID:22916333
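The time-step structure that such digital particle simulations rely on is the split-operator (Trotter) decomposition sketched below in generic form. The paper's specific contribution concerns how the diagonal potential phase is realised in a circuit without ancilla qubits; the expression here is only the standard decomposition it builds on.

```latex
% Generic second-order Trotter / split-operator step for H = T + V:
\[
  e^{-i H \Delta t}
  \;\approx\; e^{-i V \Delta t/2}\, e^{-i T \Delta t}\, e^{-i V \Delta t/2}
  \;+\; \mathcal{O}(\Delta t^{3}) ,
\]
% where V is diagonal in the position basis, so its exponential contributes only
% position-dependent phases at each time step.
```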
Regulation Costs to Small Business Act
Sen. Rubio, Marco [R-FL]
2013-03-12
Senate - 03/12/2013 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Women's Small Business Procurement Parity Act
Sen. Shaheen, Jeanne [D-NH]
2014-06-17
Senate - 06/17/2014 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Innovation Act of 2013
Sen. Baldwin, Tammy [D-WI]
2013-07-11
Senate - 07/11/2013 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Transparency in Small Business Assistance Act
Sen. Heller, Dean [R-NV]
2013-05-14
Senate - 05/14/2013 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Goaling Act of 2012
Sen. Cardin, Benjamin L. [D-MD]
2012-05-22
Senate - 05/22/2012 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Fairness Act of 2011
Sen. Enzi, Michael B. [R-WY]
2011-05-26
Senate - 05/26/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Fairness Act of 2013
Sen. Enzi, Michael B. [R-WY]
2013-06-19
Senate - 06/19/2013 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
NASA Astrophysics Data System (ADS)
Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi
2016-11-01
This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with convergence rate no less than a certain constant, exhibiting maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need for an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Owing to its independence from an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.
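A commonly used form of the "prescribed performance" constraint described above is sketched below. The symbols are generic and the paper's exact performance functions may differ; this is only the standard envelope construction the description maps onto.

```latex
% Each tracking error e(t) is required to evolve inside a decaying envelope:
\[
  -\,\underline{\delta}\,\rho(t) \;<\; e(t) \;<\; \bar{\delta}\,\rho(t),
  \qquad
  \rho(t) \;=\; (\rho_0 - \rho_\infty)\, e^{-\ell t} + \rho_\infty ,
\]
% so \rho_\infty bounds the steady-state error (an arbitrarily small residual set),
% \ell lower-bounds the convergence rate, and \rho_0, \underline{\delta}, \bar{\delta}
% limit the maximum overshoot.
```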
Small Business Export Opportunity Development Act of 2009
Sen. Snowe, Olympia J. [R-ME]
2009-06-08
Senate - 06/08/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Common Application Act of 2012
Sen. Hagan, Kay R. [D-NC]
2012-05-16
Senate - 05/16/2012 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Additional Temporary Extension Act of 2011
Sen. Landrieu, Mary L. [D-LA]
2011-05-25
Senate - 05/25/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Venture Capital Act of 2009
Sen. Kerry, John F. [D-MA]
2009-10-21
Senate - 10/21/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
SCORE for Small Business Act of 2012
Sen. Landrieu, Mary L. [D-LA]
2012-08-02
Senate - 08/02/2012 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Increasing Small Business Lending Act of 2011
Sen. Kerry, John F. [D-MA]
2011-11-08
Senate - 11/08/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Women's Small Business Ownership Act of 2014
Sen. Cantwell, Maria [D-WA]
2014-07-30
Senate - 07/30/2014 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Veterans Small Business Enhancement Act of 2014
Sen. Durbin, Richard J. [D-IL]
2014-09-11
Senate - 09/11/2014 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Community Partner Relief Act of 2010
Sen. Landrieu, Mary L. [D-LA]
2010-03-25
Senate - 03/25/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
SCORE for Small Business Act of 2014
Sen. Landrieu, Mary L. [D-LA]
2014-02-10
Senate - 02/10/2014 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Emergency Loan Relief Act of 2009
Sen. Brown, Sherrod [D-OH]
2009-10-20
Senate - 10/20/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Contracting Programs Parity Act of 2009
Sen. Snowe, Olympia J. [R-ME]
2009-07-21
Senate - 07/21/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Native Small Business Conformity Act of 2013
Sen. Schatz, Brian [D-HI]
2013-10-29
Senate - 10/29/2013 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Disaster Assistance Act of 2011
Sen. Casey, Robert P., Jr. [D-PA]
2011-10-13
Senate - 10/13/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Environmental Stewardship Assistance Act of 2010
Sen. Wyden, Ron [D-OR]
2010-04-29
Senate - 04/29/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Women's Small Business Ownership Act of 2012
Sen. Snowe, Olympia J. [R-ME]
2012-05-17
Senate - 05/17/2012 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business Investment and Innovation Act of 2010
Sen. Landrieu, Mary L. [D-LA]
2010-11-18
Senate - 11/18/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Small Business International Trade Enhancements Act of 2009
Sen. Landrieu, Mary L. [D-LA]
2009-06-08
Senate - 06/08/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Is My Small School Really Just a Business.
ERIC Educational Resources Information Center
Ross, Lee A.; Chance, W. G.
1984-01-01
Discusses steps necessary to running small schools on a more business-like basis, including quality control, product (student) evaluation, production process monitoring, establishment of standards, improved recruitment, market research, and sales promotion. (MH)
Small Business Programs Parity Act of 2010
Sen. Landrieu, Mary L. [D-LA]
2010-03-26
Senate - 03/26/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. (All Actions) Tracker: This bill has the status Introduced.
Real-time 3D motion tracking for small animal brain PET
NASA Astrophysics Data System (ADS)
Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.
2008-05-01
High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.
Well-balanced compressible cut-cell simulation of atmospheric flow.
Klein, R; Bates, K R; Nikiforakis, N
2009-11-28
Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.
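One standard way to write the dimensionally split update mentioned above is sketched below in generic operator form; the paper's well-balanced cut-cell operators and source treatment are more involved than this.

```latex
% Generic dimensional splitting of a 2-D finite-volume update:
\[
  U^{\,n+1} \;=\; \mathcal{L}_y^{\Delta t}\,\mathcal{L}_x^{\Delta t}\, U^{\,n},
\]
% where \mathcal{L}_x^{\Delta t} and \mathcal{L}_y^{\Delta t} are one-dimensional
% solution operators applied independently in each coordinate direction over the
% same time step (alternating the order, or using a Strang arrangement, recovers
% second-order accuracy in time).
```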
Splicing-related genes are alternatively spliced upon changes in ambient temperatures in plants
Bucher, Johan; Lammers, Michiel; Busscher-Lange, Jacqueline; Bonnema, Guusje; Rodenburg, Nicole; Proveniers, Marcel C. G.; Angenent, Gerco C.
2017-01-01
Plants adjust their development and architecture to small variations in ambient temperature. At a time when temperatures are rising world-wide, the mechanism by which plants are able to sense temperature fluctuations and adapt to them is becoming of special interest. By performing RNA-sequencing on two Arabidopsis accessions and one Brassica species exposed to temperature alterations, we showed that alternative splicing is an important mechanism in ambient temperature sensing and adaptation. We found that amongst the differentially alternatively spliced genes, splicing-related genes are enriched, suggesting that the splicing machinery itself is targeted for alternative splicing when temperature changes. Moreover, we showed that many different components of the splicing machinery are targeted for ambient-temperature-regulated alternative splicing. Mutant analysis of a splicing-related gene that was differentially spliced in two of the genotypes showed an altered flowering time response to different temperatures. We propose a two-step mechanism where temperature directly influences alternative splicing of the splicing machinery genes, followed by a second step where the altered splicing machinery affects splicing of downstream genes involved in the adaptation to altered temperatures. PMID:28257507
The development of a purification procedure for saxitoxin-induced protein.
Smith, D S; Kitts, D D; Fenske, B; Owen, T G; Shyng, S
1995-02-01
A simple, economical procedure for purifying saxitoxin-induced protein (SIP) from crude extracts of the small shore crab, Hemigrapsus oregonensis, was developed. (NH4)2SO4 precipitation, chymotrypsin digestion, heat treatment, gel filtration and ion-exchange chromatography procedures were evaluated for purifying SIP. An enzyme immunoassay was used to determine the SIP yield and relative purity at each step of three procedures, thus permitting an assessment of the conditions required for maximum recovery. Response surface analysis was used in an attempt to determine the optimum temperature and exposure time for the heat treatment. A 20 min incubation at 65 degrees C was confirmed by electrophoretic analysis to be the best combination of time and temperature for achieving both an acceptable yield and purity of SIP. SIP in desalted concentrate was shown to be resistant to chymotrypsin proteolysis; however, this enzyme had deleterious effects on SIP purification at later stages of the procedure. The omission of the chymotrypsin digestion, and the inclusion of gel-filtration chromatography in the final clean-up step, resulted in purification of SIP comparable with that achieved by affinity chromatography.
NASA Astrophysics Data System (ADS)
Mandour Eldeeb, Mohamed
The backward-facing steps nozzle (BFSN) is a newly developed nozzle with a flow-adjustable exit area. It consists of two parts: the first is a base nozzle with a small area ratio, and the second is a nozzle extension whose surface consists of backward-facing steps. The number and heights of the steps are carefully chosen to produce controlled flow separation at the step edges, which adjusts the nozzle exit area at all altitudes (pressure ratios). The BFSN performance parameters are assessed numerically, in terms of thrust and side loads, against the dual-bell nozzle (DBN) with the same pressure ratios and cross-sectional areas. Cold flow inside the planar BFSN and planar DBN is simulated using a three-dimensional turbulent Navier-Stokes solver at different pressure ratios. The pressure distributions over the upper and lower nozzle walls show a symmetrical flow separation location inside the BFSN and an asymmetrical flow separation location inside the DBN in the same vertical plane. The side loads are calculated by integrating the pressure over the nozzle walls at different pressure ratios for both nozzles. Time-dependent solutions for the DBN and the BFSN are obtained by solving the two-dimensional turbulent flow equations. The side loads over the upper and lower nozzle walls are plotted against flow time. The BFSN side-load history shows small, weakly fluctuating side loads compared with the DBN, which shows large values with strong fluctuations. Hot-flow 3-D numerical solutions inside the axisymmetric BFSN and DBN are obtained at different pressure ratios and compared to assess the BFSN performance against the DBN. Pressure distributions over the nozzle walls at different circumferential angles are plotted for both nozzles. The results show that the flow separation location is axisymmetric inside the BFSN, with symmetrical pressure distributions over the nozzle circumference at different pressure ratios, while the DBN results show asymmetrical flow separation locations over the nozzle circumference at all pressure ratios. The results show that the side loads in the BFSN are 0.01%-0.6% of those in the DBN at the same pressure ratio. For further confirmation of the axisymmetric nature of the flow in the BFSN, 2-D axisymmetric solutions are obtained at the same pressure ratios and boundary conditions. The flow parameters at the nozzle exit are calculated from the 3-D and 2-D solutions and compared with each other; the maximum difference between the 3-D and 2-D solutions is less than 1%. Parametric studies are carried out with the number of backward-facing steps varied from two to forty. The results show that as the number of backward-facing steps increases, the nozzle performance in terms of thrust approaches that of the DBN. The BFSN with two and six steps is simulated for pressure ratios ranging from 148 to 1500 and compared with the DBN and a conventional bell nozzle. An expandable-BFSN study is carried out on the two-step BFSN, in which nozzle operation is divided into three modes according to operating altitude (pressure ratio). The backward-facing steps concept is also applied to a full-scale conventional bell nozzle by adding two backward-facing steps at the end of the nozzle to increase its expansion area, resulting in a 1.8% increase in performance in terms of thrust coefficient at high altitudes.
A Bibliographic Guide to Small and Minority Business Management.
ERIC Educational Resources Information Center
Popovich, Charles J.
1979-01-01
Lists publications that help the small and/or minority entrepreneur cope with practical problems and provides access to background and government resources as well as relevant bibliographies and directories. Publications follow the steps one would take establishing a small business: starting, financing, managing, and other requirements (record…
Self-Organization of Vocabularies under Different Interaction Orders.
Vera, Javier
2017-01-01
Traditionally, the formation of vocabularies has been studied by agent-based models (primarily, the naming game) in which random pairs of agents negotiate word-meaning associations at each discrete time step. This article proposes a first approximation to a novel question: To what extent is the negotiation of word-meaning associations influenced by the order in which agents interact? Automata networks provide the adequate mathematical framework to explore this question. Computer simulations suggest that on two-dimensional lattices the typical features of the formation of word-meaning associations are recovered under random schemes that update small fractions of the population at the same time; by contrast, if larger subsets of the population are updated, a periodic behavior may appear.
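A minimal naming-game sketch on a two-dimensional lattice, with only a fraction of the population updated per time step, illustrates the "interaction order" question raised above. This is an illustration only, not the article's automata-network formulation, and all parameter values are assumptions.

```python
# Hedged, minimal naming game on an L x L torus with partial synchronous updates.
import random

def step(vocab, L, fraction, rng):
    """Update a random fraction of agents; each negotiates with a random neighbour."""
    agents = rng.sample(range(L * L), max(1, int(fraction * L * L)))
    for a in agents:
        x, y = divmod(a, L)
        nx, ny = rng.choice([((x + 1) % L, y), ((x - 1) % L, y),
                             (x, (y + 1) % L), (x, (y - 1) % L)])
        speaker, hearer = vocab[x][y], vocab[nx][ny]
        if not speaker:
            speaker.add(rng.random())          # invent a new word (unique float)
        word = rng.choice(sorted(speaker))
        if word in hearer:                      # success: both collapse to the word
            speaker.clear(); speaker.add(word)
            hearer.clear(); hearer.add(word)
        else:                                   # failure: hearer learns the word
            hearer.add(word)

rng = random.Random(0)
L, fraction = 20, 0.1
vocab = [[set() for _ in range(L)] for _ in range(L)]
for t in range(20000):
    step(vocab, L, fraction, rng)
distinct = {w for row in vocab for v in row for w in v}
print(len(distinct))   # tends toward a single shared word for small update fractions
```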
Basic steps in establishing effective small group teaching sessions in medical schools.
Meo, Sultan Ayoub
2013-07-01
Small-group teaching and learning has achieved an admirable position in medical education and has become more popular as a means of encouraging students in their studies and enhancing the process of deep learning. The main characteristics of small-group teaching are active involvement of the learners in the entire learning cycle and a well-defined task orientation, with achievable, specific aims and objectives in a given time period. The essential components in the development of ideal small-group teaching and learning sessions are preliminary considerations at the departmental and institutional level, including educational strategies, group composition, physical environment, existing resources, diagnosis of needs, formulation of objectives and a suitable teaching outline. Small-group teaching increases student interest, teamwork ability, and retention of knowledge and skills; enhances the transfer of concepts to novel issues; and improves self-directed learning. It develops self-motivation, encourages investigation of issues, and allows students to test their thinking and engage in higher-order activities. It also facilitates an adult style of learning and acceptance of personal responsibility for one's own progress. Moreover, it enhances student-faculty and peer-peer interaction, improves communication skills and provides opportunities to share responsibility and clarify points of bafflement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Fada; Peeler, Christopher; Taleei, Reza
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET_t and dose-averaged LET, LET_d) using GEANT 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT 4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in GEANT 4. The incorrect LET_d values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET_t in the dose plateau region and LET_d around the Bragg peak. For a large step limit, i.e., 500 μm, LET_d is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET_d and LET_t becomes positive.
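Step-based estimators consistent with the two averages discussed above are sketched below in standard form; the study's exact scoring details are in the paper itself.

```latex
% For energy deposits \epsilon_i over step lengths \ell_i, the standard estimators are
\[
  \mathrm{LET}_t \;=\; \frac{\sum_i \epsilon_i}{\sum_i \ell_i},
  \qquad
  \mathrm{LET}_d \;=\; \frac{\sum_i \epsilon_i^{\,2}/\ell_i}{\sum_i \epsilon_i},
\]
% so LET_d weights each step's stopping power \epsilon_i/\ell_i by its dose
% contribution \epsilon_i, which is why it is far more sensitive to how the
% energy deposited per step fluctuates at small step limits.
```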
NASA Astrophysics Data System (ADS)
Yokoyama, Hiroshi; Tsukamoto, Yuichi; Kato, Chisachi; Iida, Akiyoshi
2007-10-01
Self-sustained oscillations with acoustic feedback take place in a flow over a two-dimensional two-step configuration: a small forward-backward facing step, which we hereafter call a bump, and a relatively large backward-facing step (backstep). These oscillations can radiate intense tonal sound and fatigue nearby components of industrial products. We clarify the mechanism of these oscillations by directly solving the compressible Navier-Stokes equations. The results show that vortices are shed from the leading edge of the bump and acoustic waves are radiated when these vortices pass the trailing edge of the backstep. The radiated compression waves shed new vortices by stretching the vortex formed by the flow separation at the leading edge of the bump, thereby forming a feedback loop. We propose a formula based on a detailed investigation of the phase relationship between the vortices and the acoustic waves for predicting the frequencies of the tonal sound. The frequencies predicted by this formula are in good agreement with those measured in the experiments we performed.
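Feedback loops of this vortex-acoustic type are often summarised by a Rossiter-type relation of the general form below; the constants are empirical and the paper derives its own phase condition, so this is only a hedged sketch of the family of formulas involved.

```latex
% Generic Rossiter-type frequency relation for a vortex-acoustic feedback loop:
\[
  f_n \;=\; \frac{U_\infty}{L}\,\frac{n - \alpha}{\,1/\kappa + M\,},
  \qquad n = 1, 2, \dots,
\]
% where L is the feedback length (here, roughly bump leading edge to backstep
% trailing edge), U_\infty the free-stream speed, M the Mach number,
% \kappa = U_c/U_\infty the vortex convection-speed ratio, and \alpha a phase-lag
% constant.
```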
Sellers, Ceri; Dall, Philippa; Grant, Margaret; Stansfield, Ben
2016-01-01
Characterisation of free-living physical activity requires the use of validated and reliable monitors. This study reports an evaluation of the validity and reliability of the activPAL3 monitor for the detection of posture and stepping in both adults and young people. Twenty adults (median 27.6y; IQR 22.6y) and 8 young people (12.0y; IQR 4.1y) performed standardised activities and activities of daily living (ADL) incorporating sedentary, upright and stepping activity. Agreement, specificity and positive predictive value were calculated between activPAL3 outcomes and the gold-standard of video observation. Inter-device reliability was calculated between 4 monitors. Sedentary and upright times for standardised activities were within ±5% of video observation as was step count (excluding jogging) for both adults and young people. Jogging step detection accuracy reduced with increasing cadence >150 steps min(-1). For ADLs, sensitivity to stepping was very low for adults (40.4%) but higher for young people (76.1%). Inter-device reliability was either good (ICC(1,1)>0.75) or excellent (ICC(1,1)>0.90) for all outcomes. An excellent level of detection of standardised postures was demonstrated by the activPAL3. Postures such as seat-perching, kneeling and crouching were misclassified when compared to video observation. The activPAL3 appeared to accurately detect 'purposeful' stepping during ADL, but detection of smaller stepping movements was poor. Small variations in outcomes between monitors indicated that differences in monitor placement or hardware may affect outcomes. In general, the detection of posture and purposeful stepping with the activPAL3 was excellent indicating that it is a suitable monitor for characterising free-living posture and purposeful stepping activity in healthy adults and young people. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy; Bates, Paul; Horritt, Matthew
2010-05-01
This abstract describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four analytical and numerical test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to benchmark results from the optimally stable diffusive storage cell code of Hunter et al. (2005). For the most complex test, involving the fine spatial resolution simulation of flow in a topographically complex urban area, the Root Mean Squared Difference between the new formulation and the model of Hunter et al. is ~1 cm. However, unlike diffusive storage cell codes, where the stable time step scales with (1/Δx)^2, the new equation set developed here represents shallow water wave propagation, so the stability is controlled by the Courant-Friedrichs-Lewy condition and the stable time step instead scales with 1/Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models and results in commensurate reductions in model run times. The maximum speed-up achieved over a diffusive storage cell model was 1120x in these tests, although the actual value seen will depend on model resolution and water depth and surface gradient. Solutions using the new equation set are shown to be relatively grid-independent for the conditions considered, given the numerical diffusion likely at coarse model resolution. In addition, the inertial formulation appears to have an intuitively correct sensitivity to friction; however, small instabilities and increased errors in predicted depth were noted when Manning's n = 0.01. These small instabilities are likely to be a result of the numerical scheme employed, in which friction acts to stabilise the solution, although this scheme is still widely used in practice. The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
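A hedged sketch of the explicit inertial update associated with this class of formulation is given below; the notation is generic and the exact discretisation and boundary treatment are described in the paper.

```latex
% Explicit inertial update with semi-implicit friction and a CFL-type time step:
\[
  q^{\,t+\Delta t} \;=\;
  \frac{ q^{\,t} \;-\; g\, h_f\, \Delta t\, \dfrac{\partial (h+z)}{\partial x} }
       { 1 \;+\; g\, h_f\, \Delta t\, n^2\, |q^{\,t}| \,/\, h_f^{10/3} },
  \qquad
  \Delta t \;\le\; \alpha \, \frac{\Delta x}{\sqrt{g\, h_{\max}}},
\]
% where q is the unit-width discharge, h_f the flow depth between cells, z the bed
% elevation, n Manning's coefficient (treated semi-implicitly, which is what keeps
% the update stable as friction grows), and \alpha < 1 a Courant-type safety factor.
```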
A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation
Smith, Peter E.
2006-01-01
A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
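The three-time-level predictor-corrector referred to above can be written generically as below; the model applies it to the layer-integrated transport equations, and this sketch omits the implicit vertical-diffusion and surface-elevation parts.

```latex
% Leapfrog predictor followed by a trapezoidal corrector:
\[
  \tilde{\phi}^{\,n+1} \;=\; \phi^{\,n-1} + 2\,\Delta t\, F(\phi^{\,n})
  \qquad \text{(leapfrog predictor)},
\]
\[
  \phi^{\,n+1} \;=\; \phi^{\,n} + \frac{\Delta t}{2}
  \left[ F(\phi^{\,n}) + F(\tilde{\phi}^{\,n+1}) \right]
  \qquad \text{(trapezoidal corrector)},
\]
% which damps the computational mode of the leapfrog step while remaining
% second-order accurate in time.
```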
An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.
Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas
2018-01-01
The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real-time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
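For orientation, the simplest ingredient of such calculations is the familiar design effect for a parallel cluster trial, sketched below. The formulae in the paper additionally account for cluster autocorrelation and individual autocorrelation over time, so this baseline is only a starting point, and the numbers used are illustrative assumptions.

```python
# Hedged sketch: basic cluster-randomisation design effect and sample-size inflation
# (baseline only; not the paper's stepped-wedge formulae).
def design_effect(cluster_size, icc):
    """Variance inflation relative to individual randomisation."""
    return 1.0 + (cluster_size - 1) * icc

def clusters_needed(n_individual, cluster_size, icc):
    """Clusters per arm needed to match an individually randomised sample size."""
    n_clustered = n_individual * design_effect(cluster_size, icc)
    return -(-n_clustered // cluster_size)   # ceiling division

# Illustrative numbers: 128 participants per arm needed under individual
# randomisation, 20 participants per cluster, intracluster correlation 0.05.
print(design_effect(20, 0.05))          # 1.95
print(clusters_needed(128, 20, 0.05))   # about 13 clusters per arm
```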
Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina
2017-05-01
Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into: forward step training (FT); lateral plus forward step training (FLT); or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.
Relativistic Positioning Systems and Gravitational Perturbations
NASA Astrophysics Data System (ADS)
Gomboc, Andreja; Kostić, Uroš; Horvat, Martin; Carloni, Sante; Delva, Pacôme
2013-11-01
In order to deliver a high accuracy relativistic positioning system, several gravitational perturbations need to be taken into account. We therefore consider a system of satellites, such as the Galileo system, in a space-time described by a background Schwarzschild metric and small gravitational perturbations due to the Earth’s rotation, multipoles and tides, and the gravity of the Moon, the Sun, and planets. We present the status of this work currently carried out in the ESA Slovenian PECS project Relativistic Global Navigation System, give the explicit expressions for the perturbed metric, and briefly outline further steps.
2004-04-15
Marshall Space Flight Center's researchers have conducted suborbital experiments with ZBLAN, an optical material capable of transmitting 100 times more signal and information than silica fibers. The next step is to process ZBLAN in a microgravity environment to stop the formation of crystallites, small crystals caused by chemical imbalances. Scientists want to find a way to make ZBLAN an amorphous (without an internal shape) material. Producing a material such as this will have far-reaching implications on advanced communications, medical and manufacturing technologies using lasers, and a host of other products well into the 21st century.
2006-11-01
...a small problem domain can require millions of solution variables solved repeatedly for tens of thousands of time steps. Finally, the... ...in terms of vector and scalar potentials, A and ψ, respectively: E = −(∂A/∂t + ∇ψ) = E_rot + E_irr (5). Since the curl of a gradient is always zero, ∇ψ...
NASA Technical Reports Server (NTRS)
Hazelton, Lyman R., Jr.
1990-01-01
Some of the logical components of a rule based planning and scheduling system are described. The researcher points out a deficiency in the conventional truth maintenance approach to this class of problems and suggests a new mechanism which overcomes the problem. This extension of the idea of justification truth maintenance may seem at first to be a small philosophical step. However, it embodies a process of basic human reasoning which is so common and automatic as to escape conscious detection without careful introspection. It is vital to any successful implementation of a rule based planning reasoner.
Undocumented migration to Venezuela.
Van Roy, R
1984-01-01
"In 1980 Venezuela took...steps to regularize the undocumented migrant population. While the number responding to the amnesty was small relative to expectations, the majority of illegals appeared to have regularized their status. For the first time it was possible to assess objectively the characteristics of the undocumented population. Moreover, the problem of illegal migrants seems to have been temporarily solved, a result of both the amnesty and the country's declining economic activity." Topics covered in the present article include the nationality, geographic distribution, sex and age distribution, educational status, and occupations of undocumented migrants. excerpt
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
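The first stage of the approach, converting a numeric time series into a sequence of labelled time intervals (temporal abstractions), can be illustrated with the toy sketch below. It is a simplified reconstruction for illustration only, not the authors' implementation; the thresholds, labels and sample data are hypothetical.

```python
def value_abstraction(series, low=90.0, high=140.0):
    """Convert (time, value) samples into maximal intervals labelled
    'low'/'normal'/'high' -- a toy version of temporal abstraction."""
    def label(v):
        return "low" if v < low else ("high" if v > high else "normal")

    intervals = []
    for t, v in series:
        lab = label(v)
        if intervals and intervals[-1][2] == lab:
            # same abstraction as the previous sample: extend the interval
            intervals[-1] = (intervals[-1][0], t, lab)
        else:
            intervals.append((t, t, lab))
    return intervals

# e.g. lab-value-like measurements sampled at irregular times
print(value_abstraction([(0, 85), (2, 95), (5, 150), (7, 155), (9, 100)]))
# [(0, 0, 'low'), (2, 2, 'normal'), (5, 7, 'high'), (9, 9, 'normal')]
```

Pattern mining would then operate on these intervals with temporal operators (e.g. before, co-occurs), extending candidate patterns backward in time from the most recent intervals.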
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low-turbulence Reynolds numbers, in our case R sub lambda = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first-time derivative at each time step. Fourth-order accurate space-differencing is used.
Small Body Hopper Mobility Concepts
NASA Technical Reports Server (NTRS)
Howe, A. Scott; Gernhardt, Michael L.; Lee, Dave E.; Crues, E. Zack; Dexter, Dan E.; Abercromby, Andrew F. J.; Chappell, Steve P.; Nguyen, Hung T.
2015-01-01
A propellant-saving hopper mobility system was studied that could help facilitate the exploration of small bodies such as Phobos for long-duration human missions. The NASA Evolvable Mars Campaign (EMC) has proposed a mission to the moons of Mars as a transitional step for eventual Mars surface exploration. While a Mars transit habitat would be parked in High-Mars Orbit (HMO), crew members would visit the surface of Phobos multiple times for up to 14 days duration (up to 50 days at a time with logistics support). This paper describes a small body surface mobility concept that is capable of transporting a small, two-person Pressurized Exploration Vehicle (PEV) cabin to various sites of interest in the low-gravity environment. Using stored kinetic energy between bounces, a propellant-saving hopper mobility system can release the energy to vector the vehicle away from the surface in a specified direction. Alternatively, the stored energy can be retained for later use while the vehicle is stationary in respect to the surface. The hopper actuation was modeled using a variety of launch velocities, and the hopper mobility was evaluated using NASA Exploration Systems Simulations (NExSyS) for transit between surface sites of interest. A hopper system with linear electromagnetic motors and mechanical spring actuators coupled with Control Moment Gyroscope (CMG) for attitude control will use renewable electrical power, resulting in a significant propellant savings.
Niu, Dan; Zhao, Gang; Liu, Xiaoli; Zhou, Ping; Cao, Yunxia
2016-03-01
High-survival-rate cryopreservation of endothelial cells plays a critical role in vascular tissue engineering, while optimization of osmotic injuries is the first step toward successful cryopreservation. We designed a low-cost, easy-to-use, microfluidics-based microperfusion chamber to investigate the osmotic responses of human umbilical vein endothelial cells (HUVECs) at different temperatures, and then optimized the protocols for using cryoprotective agents (CPAs) to minimize osmotic injuries and improve processes before freezing and after thawing. The fundamental cryobiological parameters were measured using the microperfusion chamber, and then, the optimized protocols using these parameters were confirmed by survival evaluation and cell proliferation experiments. It was revealed for the first time that HUVECs have an unusually small permeability coefficient for Me2SO. Even at the concentrations well established for slow freezing of cells (1.5 M), one-step removal of CPAs for HUVECs might result in inevitable osmotic injuries, indicating that multiple-step removal is essential. Further experiments revealed that multistep removal of 1.5 M Me2SO at 25°C was the best protocol investigated, in good agreement with theory. These results should prove invaluable for optimization of cryopreservation protocols of HUVECs.
An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion
Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.
2017-01-01
In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is formulated as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735
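The core of the second step is that, for small angles, the rotation matrix can be linearized so that rigid alignment becomes a quadratic problem. The sketch below shows only that unconstrained core as a least-squares solve under stated assumptions (known point correspondences, small rotation); the collision constraints and iterative minimum-distance mapping of the paper are omitted, and all names are illustrative.

```python
import numpy as np

def small_angle_rigid_fit(P, Q):
    """Estimate a small rotation vector w = (rx, ry, rz) and translation t
    minimizing ||(I + skew(w)) P + t - Q||^2, i.e. R ~ I + skew(w).
    P, Q: (N, 3) arrays of corresponding points."""
    N = P.shape[0]
    A = np.zeros((3 * N, 6))
    b = (Q - P).reshape(-1)
    for i, p in enumerate(P):
        # derivative of skew(w) @ p with respect to w is -skew(p)
        A[3*i:3*i+3, :3] = -np.array([[0, -p[2], p[1]],
                                      [p[2], 0, -p[0]],
                                      [-p[1], p[0], 0]])
        A[3*i:3*i+3, 3:] = np.eye(3)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    R = np.eye(3) + np.array([[0, -w[2], w[1]],
                              [w[2], 0, -w[0]],
                              [-w[1], w[0], 0]])
    return R, t
```

Adding linear collision (non-penetration) constraints to this objective is what turns it into the quadratic program referred to in the abstract.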
Pater, Mackenzie L; Rosenblatt, Noah J; Grabiner, Mark D
2015-01-01
Tripping during locomotion, the leading cause of falls in older adults, generally occurs without prior warning and often while performing a secondary task. Prior warning can alter the state of physiological preparedness and beneficially influence the response to the perturbation. Previous studies have examined how altering the initial "preparedness" for an upcoming perturbation can affect kinematic responses following small disturbances that did not require a stepping response to restore dynamic stability. The purpose of this study was to examine how expectation affected fall outcome and recovery response kinematics following a large, treadmill-delivered perturbation simulating a trip and requiring at least one recovery step to avoid a fall. Following the perturbation, 47% of subjects fell when they were not expecting the perturbation whereas 12% fell when they were aware that the perturbation would occur "sometime in the next minute". The between-group differences were accompanied by slower reaction times in the non-expecting group (p < 0.01). Slower reaction times were associated with kinematics that have previously been shown to increase the likelihood of falling following a laboratory-induced trip. The results demonstrate the importance of considering the context under which recovery responses are assessed and, further, give insight into the context during which task-specific perturbation training is administered. Copyright © 2014 Elsevier B.V. All rights reserved.
Motor potential profile and a robust method for extracting it from time series of motor positions.
Wang, Hongyun
2006-10-21
Molecular motors are small, and, as a result, motor operation is dominated by high-viscous friction and large thermal fluctuations from the surrounding fluid environment. The small size has hindered, in many ways, the studies of physical mechanisms of molecular motors. For a macroscopic motor, it is possible to observe/record experimentally the internal operation details of the motor. This is not yet possible for molecular motors. The chemical reaction in a molecular motor has many occupancy states, each having a different effect on the motor motion. The overall effect of the chemical reaction on the motor motion can be characterized by the motor potential profile. The potential profile reveals how the motor force changes with position in a motor step, which may lead to insights into how the chemical reaction is coupled to force generation. In this article, we propose a mathematical formulation and a robust method for constructing motor potential profiles from time series of motor positions measured in single molecule experiments. Numerical examples based on simulated data are shown to demonstrate the method. Interestingly, it is the small size of molecular motors (negligible inertia) that makes it possible to recover the potential profile from time series of motor positions. For a macroscopic motor, the variation of driving force within a cycle is smoothed out by the large inertia.
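Because motor motion in this regime is overdamped (negligible inertia), the mean displacement per unit time at a given position is proportional to the local force, so a potential profile can be built up by binning drifts along the step coordinate and integrating. The sketch below illustrates that generic drift-based idea under stated assumptions (known drag coefficient gamma, uniform sampling interval dt); it is not the specific formulation proposed in the article.

```python
import numpy as np

def potential_from_trajectory(x, dt, gamma, nbins=50):
    """Estimate force F(x) ~ gamma * <dx | x> / dt from an overdamped
    position time series x(t), then integrate -F dx to get a potential
    profile (arbitrary additive constant)."""
    dx = np.diff(x)
    xmid = x[:-1]
    edges = np.linspace(xmid.min(), xmid.max(), nbins + 1)
    which = np.clip(np.digitize(xmid, edges) - 1, 0, nbins - 1)
    drift = np.array([dx[which == b].mean() if np.any(which == b) else 0.0
                      for b in range(nbins)])
    force = gamma * drift / dt
    centers = 0.5 * (edges[:-1] + edges[1:])
    potential = -np.cumsum(force) * np.diff(edges).mean()
    return centers, potential
```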
Southeast Hurricanes Small Business Disaster Relief Act of 2011
Sen. Landrieu, Mary L. [D-LA
2011-03-28
Senate - 03/28/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Small Business Administration Disaster Recovery and Reform Act of 2009
Sen. Landrieu, Mary L. [D-LA
2009-11-05
Senate - 11/05/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Southeast Hurricanes Small Business Disaster Relief Act of 2010
Sen. Landrieu, Mary L. [D-LA
2010-02-04
Senate - 02/04/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Small Business Health Information Technology Financing Act of 2009
Sen. Kerry, John F. [D-MA
2009-11-10
Senate - 11/10/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Strengthening Our Economy Through Small Business Innovation Act of 2009
Sen. Feingold, Russell D. [D-WI
2009-01-08
Senate - 01/08/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
A bill to extend the small business loan enhancements.
Sen. Snowe, Olympia J. [R-ME
2010-07-15
Senate - 07/15/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Small Business Training in Federal Contracting Certification Act of 2010
Sen. Snowe, Olympia J. [R-ME
2010-05-27
Senate - 05/27/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Fairness for Small Businesses in Federal Contracting Act of 2011
Sen. McCaskill, Claire [D-MO
2011-09-21
Senate - 09/21/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Small Business Innovation to Job Creation Act of 2010
Sen. Schumer, Charles E. [D-NY
2010-04-20
Senate - 04/20/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Fairness in Women-Owned Small Business Contracting Act of 2012
Sen. Snowe, Olympia J. [R-ME
2012-03-07
Senate - 03/07/2012 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Small Business Investment Company Modernization Act of 2013
Sen. Risch, James E. [R-ID
2013-03-13
Senate - 03/14/2013 Committee on Small Business and Entrepreneurship. Hearings held. Hearings printed: S.Hrg. 113-309. This bill has the status: Introduced.
Fairness in Women-Owned Small Business Contracting Act of 2010
Sen. Snowe, Olympia J. [R-ME
2010-05-24
Senate - 05/24/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Real time aircraft fly-over noise discrimination
NASA Astrophysics Data System (ADS)
Genescà, M.; Romeu, J.; Pàmies, T.; Sánchez, A.
2009-06-01
A method for measuring aircraft noise time history with automatic elimination of simultaneous urban noise is presented in this paper. A 3 m-long 12-microphone sparse array has been proven to give good performance in a wide range of urban placements. Nowadays, urban placements have to be avoided because their background noise has a great influence on the measurements made by sound level meters or single microphones. Because of the small device size and low number of microphones (which make it easy to set up), the resolution of the device is not high enough to provide a clean aircraft noise time history by only applying frequency domain beamforming to the spatial cross-correlations of the microphones' signals. Therefore, a new step has been added to the processing algorithm to overcome this limitation.
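The frequency-domain beamforming applied to the microphones' cross-correlations, mentioned above, amounts to evaluating a steered quadratic form on the cross-spectral matrix. The sketch below is a generic conventional beamformer under a plane-wave assumption, not the authors' processing chain; the geometry, look direction and variable names are illustrative.

```python
import numpy as np

def beamformer_power(csm, mic_xyz, freq, direction, c=343.0):
    """Conventional (delay-and-sum) frequency-domain beamformer output
    power for one frequency bin and one look direction.

    csm:       (M, M) cross-spectral matrix of the M microphone signals
    mic_xyz:   (M, 3) microphone positions in metres
    direction: unit vector pointing towards the source (plane-wave model)
    """
    k = 2.0 * np.pi * freq / c                      # wavenumber
    delays = mic_xyz @ np.asarray(direction)        # path differences (m)
    v = np.exp(-1j * k * delays)                    # steering vector
    v /= np.linalg.norm(v)
    return np.real(np.conj(v) @ csm @ v)            # steered power
```

Scanning this power over candidate directions and frequencies is what allows the aircraft contribution to be separated from uncorrelated urban background noise.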
Discrete transparent boundary conditions for the mixed KDV-BBM equation
NASA Astrophysics Data System (ADS)
Besse, Christophe; Noble, Pascal; Sanchez, David
2017-09-01
In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KDV) and Benjamin-Bona-Mahoney (BBM) equation which models water waves in the small amplitude, large wavelength regime. Continuous (respectively discrete) artificial boundary conditions involve non local operators in time which in turn requires to compute time convolutions and invert the Laplace transform of an analytic function (respectively the Z-transform of an holomorphic function). In this paper, we propose a new, stable and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large time simulations, we also introduce a methodology based on the asymptotic expansion of coefficients involved in exact direct transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave packets initial data.
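Recovering the discrete convolution kernel from the Z-transform of a holomorphic function, the crucial step mentioned above, is commonly done by evaluating the transform on a circle of radius r inside its domain of analyticity and applying an inverse FFT. The sketch below shows that generic recipe only, not the stabilised strategy proposed in the paper; `Zhat`, `r` and `N` are illustrative names.

```python
import numpy as np

def ztransform_coefficients(Zhat, N, r=1.02):
    """Approximate the first N coefficients a_n of Zhat(z) = sum a_n z^{-n}
    via the Cauchy integral on the circle |z| = r, evaluated with an IFFT:
        a_n ~= r**n * IFFT_k[ Zhat(r * exp(2*pi*i*k/N)) ]_n
    The radius r must lie in the region where Zhat is holomorphic."""
    z = r * np.exp(2j * np.pi * np.arange(N) / N)
    vals = np.array([Zhat(zk) for zk in z])
    return (r ** np.arange(N)) * np.fft.ifft(vals)

# sanity check with Zhat(z) = 1 / (1 - 0.5/z) = sum (0.5**n) z**-n
coeffs = ztransform_coefficients(lambda z: 1.0 / (1.0 - 0.5 / z), N=64)
print(np.round(coeffs[:4].real, 4))   # -> [1.  0.5  0.25  0.125]
```

The choice of r trades aliasing of high-order coefficients against round-off amplification, which is one reason a more careful strategy is needed in practice.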
Positivity-preserving dual time stepping schemes for gas dynamics
NASA Astrophysics Data System (ADS)
Parent, Bernard
2018-05-01
A new approach at discretizing the temporal derivative of the Euler equations is here presented which can be used with dual time stepping. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure resulting in cross differences in spacetime but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach is advantaged over alternatives in that it is positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions less than the one of the second- or third-order accurate backward-difference-formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.
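For orientation, dual time stepping wraps a pseudo-time iteration around each physical time step so that an implicit temporal stencil is converged as a steady problem. The schematic below uses plain BDF2 in physical time and explicit pseudo-time relaxation purely for illustration; it is not the positivity-preserving stencil derived in the paper, and the spatial residual R(u) is an assumed user-supplied function.

```python
import numpy as np

def dual_time_step(u_n, u_nm1, dt, R, dtau=0.1, iters=200, tol=1e-10):
    """Advance one physical step of du/dt = R(u) with BDF2 in physical time,
    converged by explicit pseudo-time relaxation (generic illustration)."""
    u = u_n.copy()                      # initial guess for u^{n+1}
    for _ in range(iters):
        # unsteady residual: BDF2 time derivative minus spatial residual
        dudt = (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
        unsteady = dudt - R(u)
        u -= dtau * unsteady            # pseudo-time relaxation
        if np.linalg.norm(unsteady) < tol:
            break
    return u

# e.g. scalar decay du/dt = -u, exact solution exp(-t)
u = dual_time_step(np.array([np.exp(-0.1)]), np.array([1.0]),
                   dt=0.1, R=lambda v: -v)
print(u)   # close to exp(-0.2) ~ 0.8187
```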
Multi-functional foot use during running in the zebra-tailed lizard (Callisaurus draconoides).
Li, Chen; Hsieh, S Tonia; Goldman, Daniel I
2012-09-15
A diversity of animals that run on solid, level, flat, non-slip surfaces appear to bounce on their legs; elastic elements in the limbs can store and return energy during each step. The mechanics and energetics of running in natural terrain, particularly on surfaces that can yield and flow under stress, is less understood. The zebra-tailed lizard (Callisaurus draconoides), a small desert generalist with a large, elongate, tendinous hind foot, runs rapidly across a variety of natural substrates. We use high-speed video to obtain detailed three-dimensional running kinematics on solid and granular surfaces to reveal how leg, foot and substrate mechanics contribute to its high locomotor performance. Running at ~10 body lengths s⁻¹ (~1 m s⁻¹), the center of mass oscillates like a spring-mass system on both substrates, with only 15% reduction in stride length on the granular surface. On the solid surface, a strut-spring model of the hind limb reveals that the hind foot saves ~40% of the mechanical work needed per step, significant for the lizard's small size. On the granular surface, a penetration force model and hypothesized subsurface foot rotation indicates that the hind foot paddles through fluidized granular medium, and that the energy lost per step during irreversible deformation of the substrate does not differ from the reduction in the mechanical energy of the center of mass. The upper hind leg muscles must perform three times as much mechanical work on the granular surface as on the solid surface to compensate for the greater energy lost within the foot and to the substrate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent) computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit then the step size is reduced by a factor of 2, thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
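The adaptive steepest-ascent component described above (perturb each variable, step the one that improves the objective most, and halve the step whenever the best direction reverses) can be sketched as follows. This is an illustrative reconstruction from the abstract's description, not the authors' MATLAB code, and the probe size, step size and test function are assumptions.

```python
def adaptive_steepest_ascent(f, x, step=0.1, probe=1e-3, iters=200):
    """Maximize f by perturbing one variable at a time, stepping the
    variable that improves f the most, and halving the step size whenever
    the best direction reverses (as described in the abstract)."""
    x = list(x)
    last_move = None                       # (variable index, direction)
    for _ in range(iters):
        best = (None, 0.0)                 # ((i, sign), improvement)
        base = f(x)
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * probe
                gain = f(trial) - base
                if gain > best[1]:
                    best = ((i, sign), gain)
        if best[0] is None:
            break                          # no improving direction found
        if last_move is not None and best[0] == (last_move[0], -last_move[1]):
            step *= 0.5                    # direction flipped: shrink step
        i, sign = best[0]
        x[i] += sign * step
        last_move = (i, sign)
    return x

# e.g. maximize -(x-2)^2 - (y+1)^2 starting from the origin
print(adaptive_steepest_ascent(lambda v: -(v[0]-2)**2 - (v[1]+1)**2, [0.0, 0.0]))
```

In the hybrid scheme, a local refinement of this kind would be interleaved with the evolutionary algorithm's breeding and mutation steps.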
DOT National Transportation Integrated Search
1988-03-01
Users of the manual are expected to be in divisions responsible for pedestrian safety in cities, counties, and other jurisdictions. The users manual outlines a step-by-step procedure to measure pedestrian volumes using small count intervals. Appendix...
Kalman filter implementation for small satellites using constraint GPS data
NASA Astrophysics Data System (ADS)
Wesam, Elmahy M.; Zhang, Xiang; Lu, Zhengliang; Liao, Wenhe
2017-06-01
Due to the increased need for autonomy, an Extended Kalman Filter (EKF) has been designed to autonomously estimate the orbit using GPS data. The propagation step models the satellite dynamics as two-body motion with J2 (second zonal harmonic) perturbations, suitable for orbits at altitudes higher than 600 km. An onboard GPS receiver provides continuous measurement inputs. The continuity of measurements decreases the errors of the orbit determination algorithm. Power restrictions are imposed on small satellites in general and nanosatellites in particular. In cubesats, the GPS receiver is forced to be shut down for most of the mission's lifetime. GPS is turned on when experiments, such as atmospheric ones, are carried out and meter-level positioning accuracy is required. This accuracy cannot be obtained from other autonomous sensors such as magnetometers and sun sensors, which provide kilometer-level accuracy. Through simulation using Matlab and Satellite Tool Kit (STK), the position accuracy is analyzed after imposing constrained conditions suitable for small satellites and a very tight one suitable for nanosatellite missions.
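The two-body-plus-J2 dynamics used in the propagation step can be written out explicitly; the sketch below uses the standard Earth constants and the textbook J2 acceleration, and is an illustration rather than the flight or simulation code of this work.

```python
import numpy as np

MU = 398600.4418          # km^3/s^2, Earth gravitational parameter
RE = 6378.137             # km, Earth equatorial radius
J2 = 1.08263e-3           # second zonal harmonic

def accel_two_body_j2(r):
    """Acceleration (km/s^2) at ECI position r (km): two-body term
    plus the J2 zonal perturbation used in the EKF propagation step."""
    x, y, z = r
    rn = np.linalg.norm(r)
    a_tb = -MU * r / rn**3
    k = 1.5 * J2 * MU * RE**2 / rn**5
    a_j2 = k * np.array([x * (5 * z**2 / rn**2 - 1),
                         y * (5 * z**2 / rn**2 - 1),
                         z * (5 * z**2 / rn**2 - 3)])
    return a_tb + a_j2

# a fixed-step integrator applied to the state [r, v] with this acceleration
# would provide the EKF prediction; GPS position fixes then drive the update
```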
Small Molecule Activation by Intermolecular Zr(IV)-Phosphine Frustrated Lewis Pairs.
Metters, Owen J; Forrest, Sebastian J K; Sparkes, Hazel A; Manners, Ian; Wass, Duncan F
2016-02-17
We report intermolecular transition metal frustrated Lewis pairs (FLPs) based on zirconocene aryloxide and phosphine moieties that exhibit a broad range of small molecule activation chemistry that has previously been the preserve of only intramolecular pairs. Reactions with D2, CO2, THF, and PhCCH are reported. By contrast with previous intramolecular examples, these systems allow facile access to a variety of steric and electronic characteristics at the Lewis acidic and Lewis basic components, with the three-step syntheses of 10 new intermolecular transition metal FLPs being reported. Systematic variation to the phosphine Lewis base is used to unravel steric considerations, with the surprising conclusion that phosphines with relatively small Tolman steric parameters not only give highly reactive FLPs but are often seen to have the highest selectivity for the desired product. DOSY NMR spectroscopic studies on these systems reveal for the first time the nature of the Lewis acid/Lewis base interactions in transition metal FLPs of this type.
Torque Characteristics Analysis of Hybrid Stepping Motor Using 3-D Finite Element Method
NASA Astrophysics Data System (ADS)
Kawase, Yoshihiro; Yamaguchi, Tadashi; Masuda, Tatsuya; Domeki, Hideo; Kobori, Masaru
Hybrid stepping motors are widely used for various electric instruments because of high torque, high accuracy and small step angle. It is necessary for the optimum design of hybrid stepping motors to analyze torque characteristics accurately. In this paper, a hybrid stepping motor is analyzed using the 3-D finite element method taking into account the rotation of the armature. The effects of the interlaminar gap in the core on the torque characteristics are clarified using the gap elements. The validity of our method is clarified by comparison between the calculated results and measured ones.
Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2006-01-17
The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.
Raffoux, Xavier; Bourge, Mickael; Dumas, Fabrice; Martin, Olivier C; Falque, Matthieu
2018-06-01
Allelic recombination owing to meiotic crossovers is a major driver of genome evolution, as well as a key player for the selection of high-performing genotypes in economically important species. Therefore, we developed a high-throughput and low-cost method to measure recombination rates and crossover patterning (including interference) in large populations of the budding yeast Saccharomyces cerevisiae. Recombination and interference were analysed by flow cytometry, which allows time-consuming steps such as tetrad microdissection or spore growth to be avoided. Moreover, our method can also be used to compare recombination in wild-type vs. mutant individuals or in different environmental conditions, even if the changes in recombination rates are small. Furthermore, meiotic mutants often present recombination and/or pairing defects affecting spore viability but our method does not involve growth steps and thus avoids filtering out non-viable spores. Copyright © 2018 John Wiley & Sons, Ltd.
Memoryless self-reinforcing directionality in endosomal active transport within living cells
NASA Astrophysics Data System (ADS)
Chen, Kejia; Wang, Bo; Granick, Steve
2015-06-01
In contrast to Brownian transport, the active motility of microbes, cells, animals and even humans often follows another random process known as truncated Lévy walk. These stochastic motions are characterized by clustered small steps and intermittent longer jumps that often extend towards the size of the entire system. As there are repeated suggestions, although disagreement, that Lévy walks have functional advantages over Brownian motion in random searching and transport kinetics, their intentional engineering into active materials could be useful. Here, we show experimentally in the classic active matter system of intracellular trafficking that Brownian-like steps self-organize into truncated Lévy walks through an apparent time-independent positive feedback such that directional persistence increases with the distance travelled persistently. A molecular model that allows the maximum output of the active propelling forces to fluctuate slowly fits the experiments quantitatively. Our findings offer design principles for programming efficient transport in active materials.
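A truncated Lévy walk of the kind described here, with many short clustered steps punctuated by rare long excursions cut off near the system size, can be simulated with power-law-distributed run lengths. The sketch below is a generic illustration, not the molecular model fitted in the paper; the exponent and cut-off are arbitrary.

```python
import numpy as np

def truncated_levy_walk(n_runs=500, alpha=2.0, cutoff=50.0, rng=None):
    """2-D truncated Levy walk: run lengths drawn from a power law
    p(l) ~ l**(-alpha) for l >= 1, truncated at `cutoff`, each run taken
    in a uniformly random direction at unit speed."""
    rng = np.random.default_rng(rng)
    pos = np.zeros(2)
    path = [pos.copy()]
    for _ in range(n_runs):
        # inverse-transform sample from the Pareto distribution, then clip
        l = min((1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)), cutoff)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
    return np.array(path)

walk = truncated_levy_walk(rng=0)
print(walk.shape)   # (501, 2)
```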
In-situ data collection at the photon factory macromolecular crystallography beamlines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Yusuke, E-mail: yusuke.yamada@kek.jp; Matsugaki, Naohiro; Kato, Ryuichi
Crystallization trial is one of the most important but time-consuming steps in macromolecular crystallography, and in-situ diffraction experiments enable researchers to proceed through this step more efficiently. At the Photon Factory, a new tabletop diffractometer for in-situ diffraction experiments has been developed. It consists of XYZ translation stages with a plate handler, an on-axis viewing system and a plate rack with a capacity for ten crystallization plates. These components sit on a common plate and are able to be placed on the existing diffractometer table. A CCD detector with a large active area and a pixel array detector with a small active area are used for acquiring diffraction images from crystals. Dedicated control software and a user interface have also been developed. The new diffractometer has been operational for users and used for evaluation of crystallization screening since 2014.
Intelligent cooperation: A framework of pedagogic practice in the operating room.
Sutkin, Gary; Littleton, Eliza B; Kanter, Steven L
2018-04-01
Surgeons who work with trainees must address their learning needs without compromising patient safety. We used a constructivist grounded theory approach to examine videos of five teaching surgeries. Attending surgeons were interviewed afterward while watching cued videos of their cases. Codes were iteratively refined into major themes, and then constructed into a larger framework. We present a novel framework, Intelligent Cooperation, which accounts for the highly adaptive, iterative features of surgical teaching in the operating room. Specifically, we define Intelligent Cooperation as a sequence of coordinated exchanges between attending and trainee that accomplishes small surgical steps while simultaneously uncovering the trainee's learning needs. Intelligent Cooperation requires the attending to accurately determine learning needs, perform real-time needs assessment, provide critical scaffolding, and work with the learner to accomplish the next step in the surgery. This is achieved through intense, coordinated verbal and physical cooperation. Copyright © 2017 Elsevier Inc. All rights reserved.
Evaluation of complex gonioapparent samples using a bidirectional spectrometer.
Rogelj, Nina; Penttinen, Niko; Gunde, Marta Klanjšek
2015-08-24
Many applications use gonioapparent targets whose appearance depends on irradiation and viewing angles; the strongest effects are provided by light diffraction. These targets, optically variable devices (OVDs), are used in both security and authentication applications. This study introduces a bidirectional spectrometer, which enables the analysis of samples with the most complex angular and spectral properties. In our work, the spectrometer is evaluated with samples having very different types of reflection in terms of spectral and angular distributions. Furthermore, an OVD containing several different grating patches is evaluated. The device uses automatically adjusting exposure time to provide maximum signal dynamics and is capable of angular steps as small as 0.01°. However, even 2° steps for the detector movement showed that this device is more than capable of characterizing even the most complex reflecting surfaces. This study presents sRGB visualizations, discussion of bidirectional reflection, and accurate grating period calculations for all of the grating samples used.
Numerical simulation of the flow field and fuel sprays in an IC engine
NASA Technical Reports Server (NTRS)
Nguyen, H. L.; Schock, H. J.; Ramos, J. I.; Carpenter, M. H.; Stegeman, J. D.
1987-01-01
A two-dimensional model for axisymmetric piston-cylinder configurations is developed to study the flow field in two-stroke direct-injection Diesel engines under motored conditions. The model accounts for turbulence by a two-equation model for the turbulence kinetic energy and its rate of dissipation. A discrete droplet model is used to simulate the fuel spray, and the effects of the gas phase turbulence on the droplets is considered. It is shown that a fluctuating velocity can be added to the mean droplet velocity every time step if the step is small enough. Good agreement with experimental data is found for a range of ambient pressures in Diesel engine-type microenvironments. The effects of the intake swirl angle in the spray penetration, vaporization, and mixing in a uniflow-scavenged two-stroke Diesel engine are analyzed. It is found that the swirl increases the gas phase turbulence levels and the rates of vaporization.
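The statement that a fluctuating velocity can be added to the mean droplet velocity every time step corresponds to the usual stochastic dispersion treatment in discrete droplet models: sample an isotropic Gaussian fluctuation whose variance comes from the local turbulence kinetic energy k. The sketch below is a hedged illustration of that idea, not the paper's code; the one-third partition of k per component is the standard isotropy assumption.

```python
import numpy as np

def droplet_velocity_with_fluctuation(u_mean_gas, k, rng=None):
    """Gas velocity seen by a droplet over one (sufficiently small) time
    step: mean gas velocity plus an isotropic Gaussian fluctuation with
    per-component variance 2k/3 (k = turbulence kinetic energy)."""
    rng = np.random.default_rng(rng)
    sigma = np.sqrt(2.0 * k / 3.0)              # rms of each component
    return np.asarray(u_mean_gas) + rng.normal(0.0, sigma, size=3)

# e.g. mean gas velocity of (10, 0, 0) m/s and k = 1.5 m^2/s^2
print(droplet_velocity_with_fluctuation([10.0, 0.0, 0.0], 1.5, rng=1))
```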
Tracing information flow on a global scale using Internet chain-letter data
Liben-Nowell, David; Kleinberg, Jon
2008-01-01
Although information, news, and opinions continuously circulate in the worldwide social network, the actual mechanics of how any single piece of information spreads on a global scale have been a mystery. Here, we trace such information-spreading processes at a person-by-person level using methods to reconstruct the propagation of massively circulated Internet chain letters. We find that rather than fanning out widely, reaching many people in very few steps according to “small-world” principles, the progress of these chain letters proceeds in a narrow but very deep tree-like pattern, continuing for several hundred steps. This suggests a new and more complex picture for the spread of information through a social network. We describe a probabilistic model based on network clustering and asynchronous response times that produces trees with this characteristic structure on social-network data. PMID:18353985
Aquifer response to stream-stage and recharge variations. I. Analytical step-response functions
Moench, A.F.; Barlow, P.M.
2000-01-01
Laplace transform step-response functions are presented for various homogeneous confined and leaky aquifer types and for anisotropic, homogeneous unconfined aquifers interacting with perennial streams. Flow is one-dimensional, perpendicular to the stream in the confined and leaky aquifers, and two-dimensional in a plane perpendicular to the stream in the water-table aquifers. The stream is assumed to penetrate the full thickness of the aquifer. The aquifers may be semi-infinite or finite in width and may or may not be bounded at the stream by a semipervious streambank. The solutions are presented in a unified manner so that mathematical relations among the various aquifer configurations are clearly demonstrated. The Laplace transform solutions are inverted numerically to obtain the real-time step-response functions for use in the convolution (or superposition) integral. To maintain linearity in the case of unconfined aquifers, fluctuations in the elevation of the water table are assumed to be small relative to the saturated thickness, and vertical flow into or out of the zone above the water table is assumed to occur instantaneously. Effects of hysteresis in the moisture distribution above the water table are therefore neglected. Graphical comparisons of the new solutions are made with known closed-form solutions.
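Several standard algorithms exist for the numerical Laplace inversion this work relies on; the Gaver-Stehfest method is a common choice and is sketched below as a generic illustration (the abstract does not specify which inversion algorithm the authors use).

```python
import numpy as np
from math import factorial

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(p),
    evaluated at time t > 0.  N must be even."""
    ln2_t = np.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2_t)
    return ln2_t * total

# sanity check: inverse transform of 1/(p+1) at t = 1 is exp(-1)
print(stehfest_invert(lambda p: 1.0 / (p + 1.0), 1.0))   # ~ 0.3679
```

The resulting real-time step responses would then be combined with the stream-stage or recharge history through the convolution integral.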
Comparison of detectability in step-and-shoot mode and continuous mode digital tomosynthesis systems
NASA Astrophysics Data System (ADS)
Lee, Changwoo; Han, Minah; Baek, Jongduk
2017-03-01
Digital tomosynthesis systems have been widely used in chest, dental, and breast imaging. Since a digital tomosynthesis system provides volumetric images from multiple projection data, the structural noise inherent in X-ray radiographs can be reduced, and thus signal detection performance is improved. Currently, tomosynthesis systems use two data acquisition modes: step-and-shoot mode and continuous mode. Several studies have been conducted to compare the system performance of the two acquisition modes with respect to spatial resolution and contrast. In this work, we focus on signal detectability in step-and-shoot mode and continuous mode. For evaluation, a uniform background is considered, and eight spherical objects with diameters of 0.5, 0.8, 1, 2, 3, 5, 8, and 10 mm are used as signals. Projection data with and without spherical objects are acquired in step-and-shoot mode and continuous mode, respectively, and quantum noise is added. Then, the noisy projection data are reconstructed by the FDK algorithm. To compare the detection performance of the two acquisition modes, we calculate the task signal-to-noise ratio (SNR) of a channelized Hotelling observer with Laguerre-Gauss channels for each spherical object. While the task-SNR values of the two acquisition modes are similar for spherical objects larger than 1 mm diameter, step-and-shoot mode yields higher detectability for small signal sizes. The main reason for this behavior is that small signals are more affected by X-ray tube motion blur than large signals. Our results indicate that it is beneficial to use the step-and-shoot data acquisition mode to improve the detectability of small signals (i.e., less than 1 mm diameter) in digital tomosynthesis systems.
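The task SNR of a channelized Hotelling observer is computed from channel outputs of signal-present and signal-absent images. A generic sketch is given below; it assumes a precomputed channel matrix (e.g., Laguerre-Gauss channels, whose generation is omitted) and is not the authors' evaluation code.

```python
import numpy as np

def cho_task_snr(imgs_signal, imgs_noise, channels):
    """Channelized Hotelling observer task SNR.

    imgs_signal, imgs_noise: (n_images, n_pixels) arrays of vectorized ROIs
    channels:                (n_pixels, n_channels) channel matrix
    Requires enough images for the channel covariance to be invertible.
    """
    v1 = imgs_signal @ channels          # channel outputs, signal present
    v0 = imgs_noise @ channels           # channel outputs, signal absent
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(S, dv)           # Hotelling template in channel space
    t1, t0 = v1 @ w, v0 @ w              # test statistics
    return (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var(ddof=1) + t0.var(ddof=1)))
```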
RISC assembly: Coordination between small RNAs and Argonaute proteins.
Kobayashi, Hotaka; Tomari, Yukihide
2016-01-01
Non-coding RNAs generally form ribonucleoprotein (RNP) complexes with their partner proteins to exert their functions. Small RNAs, including microRNAs, small interfering RNAs, and PIWI-interacting RNAs, assemble with Argonaute (Ago) family proteins into the effector complex called RNA-induced silencing complex (RISC), which mediates sequence-specific target gene silencing. RISC assembly is not a simple binding between a small RNA and Ago; rather, it follows an ordered multi-step pathway that requires specific accessory factors. Some steps of RISC assembly and RISC-mediated gene silencing are dependent on or facilitated by particular intracellular platforms, suggesting their spatial regulation. In this review, we summarize the currently known mechanisms for RISC assembly of each small RNA class and propose a revised model for the role of the chaperone machinery in the duplex-initiated RISC assembly pathway. This article is part of a Special Issue entitled: Clues to long noncoding RNA taxonomy1, edited by Dr. Tetsuro Hirose and Dr. Shinichi Nakagawa. Copyright © 2015 Elsevier B.V. All rights reserved.
Wang, Poguang; Giese, Roger W.
2017-01-01
Matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) has been used for quantitative analysis of small molecules for many years. It is usually preceded by an LC separation step when complex samples are tested. With the development several years ago of "modern MALDI" (automation, high repetition laser, high resolution peaks), the ease of use and performance of MALDI as a quantitative technique greatly increased. This review focuses on practical aspects of modern MALDI for quantitation of small molecules conducted in an ordinary way (no special reagents, devices or techniques for the spotting step of MALDI), and includes our ordinary, preferred methods. The review is organized as 18 recommendations with accompanying explanations, criticisms and exceptions. PMID:28118972
Atomically Flat Surfaces Developed for Improved Semiconductor Devices
NASA Technical Reports Server (NTRS)
Powell, J. Anthony
2001-01-01
New wide bandgap semiconductor materials are being developed to meet the diverse high-temperature, -power, and -frequency demands of the aerospace industry. Two of the most promising emerging materials are silicon carbide (SiC) for high-temperature and high-power applications and gallium nitride (GaN) for high-frequency and optical (blue-light-emitting diodes and lasers) applications. This past year Glenn scientists implemented a NASA-patented crystal growth process for producing arrays of device-size mesas whose tops are atomically flat (i.e., step-free). It is expected that these mesas can be used for fabricating SiC and GaN devices with major improvements in performance and lifetime. The promising new SiC and GaN devices are fabricated in thin-crystal films (known as epi films) that are grown on commercial single-crystal SiC wafers. At this time, no commercial GaN wafers exist. Crystal defects, known as screw defects and micropipes, that are present in the commercial SiC wafers propagate into the epi films and degrade the performance and lifetime of subsequently fabricated devices. The new technology isolates the screw defects in a small percentage of small device-size mesas on the surface of commercial SiC wafers. This enables atomically flat surfaces to be grown on the remaining defect-free mesas. We believe that the atomically flat mesas can also be used to grow GaN epi films with a much lower defect density than in the GaN epi films currently being grown. Much improved devices are expected from these improved low-defect epi films. Surface-sensitive SiC devices such as Schottky diodes and field effect transistors should benefit from atomically flat substrates. Also, we believe that the atomically flat SiC surface will be an ideal surface on which to fabricate nanoscale sensors and devices. The process for achieving atomically flat surfaces is illustrated. The surface steps present on the "as-received" commercial SiC wafer, which arise because of the small tilt angle between the crystal "basal" plane and the polished wafer surface, are also illustrated. These steps are used in normal SiC epi film growth in a process known as step-flow growth to produce material for device fabrication. In the new process, the first step is to etch an array of mesas on the SiC wafer top surface. Then, epi film growth is carried out in the step-flow fashion until all steps have grown themselves out of existence on each defect-free mesa. If the size of the mesas is sufficiently small (about 0.1 by 0.1 mm), then only a small percentage of the mesas will contain an undesired screw defect. Mesas with screw defects supply steps during the growth process, allowing a rough surface with unwanted hillocks to form on the mesa. The improvement in SiC epi surface morphology achievable with the new technology is shown. An atomic force microscope image of a typical SiC commercial epilayer surface is also shown. A similar image of an SiC atomically flat epi surface grown in a Glenn laboratory is given. With the current screw defect density of commercial wafers (about 5000 defects/cm2), the yield of atomically flat 0.1 by 0.1 mm mesas is expected to be about 90 percent. This is large enough for many types of electronic and optical devices. The implementation of this new technology was recently published in Applied Physics Letters. This work was initially carried out in-house under a Director's Discretionary Fund project and is currently being further developed under the Information Technology Base Program.
Automating Media Centers and Small Libraries: A Microcomputer-Based Approach.
ERIC Educational Resources Information Center
Meghabghab, Dania Bilal
Although the general automation process can be applied to most libraries, small libraries and media centers require a customized approach. Using a systematic approach, this guide covers each step and aspect of automation in a small library setting, and combines the principles of automation with field- tested activities. After discussing needs…
Energy Conservation in Small Schools. Small Schools Digest.
ERIC Educational Resources Information Center
Gardener, Clark
Information concerning methods and available materials for conserving energy is needed by small, rural schools to offset continued increasing energy costs and lack of financial support and technical assistance. The first step in developing an energy conservation policy is to obtain school board commitment and to establish an energy saving policy.…
Rep. Miller, Brad [D-NC-13
2010-09-23
Senate - 09/29/2010 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Passed House.
Why small business is sick over health costs.
Carpenter, Dave
2003-11-01
Overwhelmed by the cost of paying for health coverage, many small employers see their only options as cutting coverage, cutting staff or going out of business--any of which is bad news for communities and hospitals. There are creative alternatives to traditional insurance, and experts advise small businesses to explore those before taking drastic steps.
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
Small Business Broadband and Emerging Information Technology Enhancement Act of 2011
Sen. Landrieu, Mary L. [D-LA
2011-02-02
Senate - 02/02/2011 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Small Business Broadband and Emerging Information Technology Enhancement Act of 2010
Sen. Landrieu, Mary L. [D-LA
2010-06-17
Senate - 06/17/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Improving Opportunities for Service-Disabled Veteran-Owned Small Businesses Act of 2014
Sen. King, Angus S., Jr. [I-ME
2014-05-14
Senate - 05/14/2014 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
Native American Small Business Assistance and Entrepreneurial Growth Act of 2010
Sen. Landrieu, Mary L. [D-LA
2010-06-24
Senate - 06/24/2010 Read twice and referred to the Committee on Small Business and Entrepreneurship. This bill has the status: Introduced.
From Air Temperature to Lake Evaporation on a Daily Time Step: A New Empirical Approach
NASA Astrophysics Data System (ADS)
Welch, C.; Holmes, T. L.; Stadnyk, T. A.
2016-12-01
Lake evaporation is a key component of the water balance in much of Canada due to the vast surface area covered by open water. Hence, incorporating this flux effectively into hydrological simulation frameworks is essential to effective water management. Inclusion has historically been limited by the intensive data required to apply the energy budget methods previously demonstrated to most effectively capture the timing and volume of the evaporative flux. Widespread, consistent, lake water temperature and net radiation data are not available across much of Canada, particularly the sparsely populated boreal shield. We present a method to estimate lake evaporation on a daily time step that consists of a series of empirical equations applicable to lakes of widely varying morphologies. Specifically, estimation methods that require the single meteorological variable of air temperature are presented for lake water temperature, net radiation, and heat flux. The methods were developed using measured data collected at two small Boreal shield lakes, Lake Winnipeg North and South basins, and Lake Superior in 2008 and 2009. The mean average error (MAE) of the lake water temperature estimates is generally 1.5°C, and the MAE of the heat flux method is 50 W m-2. The simulated values are combined to estimate daily lake evaporation using the Priestley-Taylor method. Heat storage within the lake is tracked and limits the potential heat flux from a lake. Five-day running averages compare well to measured evaporation at the two small shield lakes (Bowen Ratio Energy Balance) and adequately to Lake Superior (eddy covariance). In addition to air temperature, the method requires a mean depth for each lake. The method demonstrably improves the timing and volume of evaporative flux in comparison to existing evaporation methods that depend only on temperature. The method will be further tested in a semi-distributed hydrological model to assess the cumulative effects across a lake-dominated catchment in the Lower Nelson River basin.
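The Priestley-Taylor step that combines the estimated net radiation and heat flux into daily evaporation can be written compactly. The sketch below uses standard constants and textbook saturation-slope and psychrometric expressions as assumptions; it is a generic illustration, not the calibrated empirical model of this study.

```python
import numpy as np

def priestley_taylor_evap(t_air_c, rn, g, alpha=1.26, gamma=0.066):
    """Daily open-water evaporation (mm/day) from the Priestley-Taylor
    equation: E = alpha * Delta / (Delta + gamma) * (Rn - G) / lambda.

    t_air_c : air temperature (deg C), used for the saturation slope Delta
    rn, g   : net radiation and heat storage flux (MJ m-2 day-1)
    """
    es = 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3))   # kPa
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2                 # kPa/degC
    lam = 2.501 - 0.002361 * t_air_c                             # MJ/kg
    return alpha * delta / (delta + gamma) * (rn - g) / lam

print(round(priestley_taylor_evap(20.0, 15.0, 3.0), 2))  # ~ 4.2 mm/day
```

In the approach described above, Rn and G would themselves be estimated from air temperature and lake depth before being passed to a routine of this form, with heat storage limiting the available flux.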
NASA Technical Reports Server (NTRS)
Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.
2013-01-01
Single-slope analog-to-digital converters (ADCs) are particularly useful for on-chip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Square-root encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is on-scale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise. The required linear span, maximum signal and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determine the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp value, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smoothes the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
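As an illustration of the ramp construction described above, the following Python sketch builds a hybrid ramp whose linear and quadratic portions meet with equal value and slope at the join. All names and parameter values (`hybrid_ramp`, `n_linear`, `lin_step`, `full_scale`) are hypothetical stand-ins, not the flight design.

```python
import numpy as np

def hybrid_ramp(n_codes, n_linear, lin_step, full_scale):
    """Return a ramp with a linear first portion and a quadratic remainder.

    The value and the slope of the two portions match at code n_linear, so the
    joined ramp has no discontinuity.  Units are arbitrary signal counts.
    """
    k = np.arange(n_codes, dtype=float)
    # Quadratic coefficient chosen so the ramp reaches full_scale at the last
    # code while staying continuous in value and slope at the join.
    a = (full_scale - lin_step * (n_codes - 1)) / (n_codes - 1 - n_linear) ** 2
    return np.where(
        k <= n_linear,
        lin_step * k,                              # linear portion
        a * (k - n_linear) ** 2 + lin_step * k,    # quadratic portion
    )

ramp = hybrid_ramp(n_codes=1024, n_linear=128, lin_step=1.0, full_scale=65535.0)
print(ramp[127], ramp[128], ramp[-1])   # smooth join and full-scale end value
```

A calibrated return lookup table would then map captured codes back through a measured version of such a ramp to recover linear signal estimates.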
NASA Astrophysics Data System (ADS)
Ho, Tzung-Hsien; Trisno, Sugianto; Smolyaninov, Igor I.; Milner, Stuart D.; Davis, Christopher C.
2004-02-01
Free space, dynamic, optical wireless communications will require topology control for optimization of network performance. Such networks may need to be configured for bi- or multiple-connectedness, reliability and quality-of-service. Topology control involves the introduction of new links and/or nodes into the network to achieve such performance objectives through autonomous reconfiguration as well as precise pointing, acquisition, tracking, and steering of laser beams. Reconfiguration may be required because of link degradation resulting from obscuration or node loss. As a result, the optical transceivers may need to be re-directed to new or existing nodes within the network and tracked on moving nodes. The redirection of transceivers may require operation over a whole sphere, so that small-angle beam steering techniques cannot be applied. In this context, we are studying the performance of optical wireless links using lightweight, bi-static transceivers mounted on high-performance stepping motor driven stages. These motors provide an angular resolution of 0.00072 degree at up to 80,000 steps per second. This paper focuses on the performance characteristics of these agile transceivers for pointing, acquisition, and tracking (PAT), including the influence of acceleration/deceleration time, motor angular speed, and angular re-adjustment, on latency and packet loss in small free space optical (FSO) wireless test networks.
Quantum mechanical hydrogen tunneling in bacterial copper amine oxidase reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murakawa, Takeshi; Okajima, Toshihide; Kuroda, Shun'ichi
A key step decisively affecting the catalytic efficiency of copper amine oxidase is stereospecific abstraction of the substrate α-proton by a conserved Asp residue. We analyzed this step by pre-steady-state kinetics using a bacterial enzyme and stereospecifically deuterium-labeled substrates, 2-phenylethylamine and tyramine. A small and temperature-dependent kinetic isotope effect (KIE) was observed with 2-phenylethylamine, whereas a large and temperature-independent KIE was observed with tyramine in the α-proton abstraction step, showing that this step is driven by quantum mechanical hydrogen tunneling rather than the classical transition-state mechanism. Furthermore, an Arrhenius-type preexponential factor ratio approaching a transition-state value was obtained in the reaction of a mutant enzyme lacking the critical Asp. These results provide strong evidence for enzyme-enhanced hydrogen tunneling. X-ray crystallographic structures of the reaction intermediates revealed a small difference in the binding mode of distal parts of substrates, which would modulate hydrogen tunneling proceeding through either active or passive dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojeda-Gonzalez, A.; Prestes, A.; Klausner, V.
Spatio-temporal entropy (STE) analysis is used as an alternative mathematical tool to identify possible magnetic cloud (MC) candidates. We analyze Interplanetary Magnetic Field (IMF) data using a time interval of only 10 days. We select a convenient data interval of 2500 records moving forward by 200-record steps until the end of the time series. For every data segment, the STE is calculated at each step. During an MC event, the STE reaches values close to zero. This extremely low value of STE is due to MC structure features. However, not all of the magnetic components in MCs have STE values close to zero at the same time. For this reason, we create a standardization index (the so-called Interplanetary Entropy, IE, index). This index is a worthwhile effort to develop new tools to help diagnose ICME structures. The IE was calculated using a time window of one year (1999), and it has a success rate of 70% over other identifiers of MCs. The unsuccessful cases (30%) are caused by small and weak MCs. The results show that the IE methodology identified 9 of 13 MCs, and emitted nine false alarm cases. In 1999, a total of 788 windows of 2500 values existed, meaning that the percentage of false alarms was 1.14%, which can be considered a good result. In addition, four time windows, each of 10 days, are studied, where the IE method was effective in finding MC candidates. As a novel result, two new MCs are identified in these time windows.
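A rough Python sketch of the sliding-window procedure (2500-record windows advanced by 200 records) is given below. A generic histogram-based Shannon entropy stands in for the actual spatio-temporal entropy, and the synthetic series and function names are assumptions for illustration only.

```python
import numpy as np

def window_entropy(x, bins=32):
    """Shannon entropy (bits) of the value distribution in one data window.
    A generic stand-in for the spatio-temporal entropy (STE) of the paper."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def sliding_entropy(series, window=2500, step=200):
    """Entropy of consecutive windows of `window` records, advanced by `step`."""
    return np.array([window_entropy(series[s:s + window])
                     for s in range(0, len(series) - window + 1, step)])

# Synthetic example: a smooth, low-entropy 'cloud-like' interval embedded in noise
rng = np.random.default_rng(0)
b_field = rng.normal(0.0, 5.0, 20000)
b_field[8000:11000] = 8.0 * np.sin(np.linspace(0, np.pi, 3000))
print(sliding_entropy(b_field).round(2))   # entropy dips over the smooth interval
```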
Novel system for distant assessment of cataract surgical quality in rural China.
Wang, Lanhua; Xu, Danping; Liu, Bin; Jin, Ling; Wang, Decai; He, Mingguang; Congdon, Nathan G; Huang, Wenyong
2015-01-01
This study aims to assess the quality of various steps of manual small incision cataract surgery and predictors of quality, using video recordings. This was a retrospective study. Fifty-two trainees participated in a hands-on small incision cataract surgery training programme at rural Chinese hospitals. Trainees provided one video each, recorded by a tripod-mounted digital recorder, after completing a one-week theoretical course and hands-on training monitored by expert trainers. Videos were graded by two different experts, using a 4-point scale developed by the International Council of Ophthalmology for each of 12 surgical steps and six global factors. Grades ranged from 2 (worst) to 5 (best), with a score of 0 if the step was performed by trainers. The main outcome measure was the mean score for the performance of each cataract surgical step rated by trainers. Videos and data were available for 49/52 trainees (94.2%, median age 38 years, 16.3% women and 77.5% completing > 50 training cases). The majority (53.1%, 26/49) had performed ≤ 50 cataract surgeries prior to training. Kappa was 0.57∼0.98 for the steps (mean 0.85). Poorest-rated steps were draping the surgical field (mean ± standard deviation = 3.27 ± 0.78), hydro-dissection (3.88 ± 1.22) and wound closure (3.92 ± 1.03), and top-rated steps were insertion of viscoelastic (4.96 ± 0.20) and anterior chamber entry (4.69 ± 0.74). In linear regression models, higher total score was associated with younger age (P = 0.015) and having performed >50 independent manual small incision cases (P = 0.039). More training should be given to preoperative draping, which is poorly performed and crucial in preventing infection. Surgical experience improves ratings. © 2015 Royal Australian and New Zealand College of Ophthalmologists.
OpenGeoSys-GEMS: Hybrid parallelization of a reactive transport code with MPI and threads
NASA Astrophysics Data System (ADS)
Kosakowski, G.; Kulik, D. A.; Shao, H.
2012-04-01
OpenGeoSys-GEMS is a general-purpose reactive transport code based on the operator splitting approach. The code couples the Finite-Element groundwater flow and multi-species transport modules of the OpenGeoSys (OGS) project (http://www.ufz.de/index.php?en=18345) with the GEM-Selektor research package to model thermodynamic equilibrium of aquatic (geo)chemical systems utilizing the Gibbs Energy Minimization approach (http://gems.web.psi.ch/). The combination of OGS and the GEM-Selektor kernel (GEMS3K) is highly flexible due to the object-oriented modular code structures and the well defined (memory based) data exchange modules. Like other reactive transport codes, the practical applicability of OGS-GEMS is often hampered by the long calculation time and large memory requirements. • For realistic geochemical systems, which might include dozens of mineral phases and several (non-ideal) solid solutions, the time needed to solve the chemical system with GEMS3K may increase dramatically. • The codes are coupled in a sequential non-iterative loop. In order to keep the accuracy, the time step size is restricted. In combination with a fine spatial discretization the time step size may become very small, which increases calculation times drastically even for small 1D problems. • The current version of OGS is not optimized for memory use and the MPI version of OGS does not distribute data between nodes. Even for moderately small 2D problems the number of MPI processes that fit into memory of up-to-date workstations or HPC hardware is limited. One strategy to overcome the above mentioned restrictions of OGS-GEMS is to parallelize the coupled code. For OGS a parallelized version already exists. It is based on a domain decomposition method implemented with MPI and provides a parallel solver for fluid and mass transport processes. In the coupled code, after solving fluid flow and solute transport, geochemical calculations are done in the form of a central loop over all finite element nodes with calls to GEMS3K and consecutive calculations of changed material parameters. In a first step, the existing MPI implementation was utilized to parallelize this loop. Calculations were split between the MPI processes and afterwards data were synchronized by using MPI communication routines. Furthermore, multi-threaded calculation of the loop was implemented with the help of the Boost thread library (http://www.boost.org). This implementation provides a flexible environment to distribute calculations between several threads. For each MPI process at least one and up to several dozens of worker threads are spawned. These threads do not replicate the complete OGS-GEM data structure and use only a limited amount of memory. Calculation of the central geochemical loop is shared between all threads. Synchronization between the threads is done by barrier commands. The overall number of local threads times MPI processes should match the number of available computing nodes. The combination of multi-threading and MPI provides an effective and flexible environment to speed up OGS-GEMS calculations while limiting the required memory use. Test calculations on different hardware show that for certain types of applications tremendous speedups are possible.
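The thread-level split of the central geochemical loop can be sketched as follows; `solve_node_chemistry` is a hypothetical placeholder for a per-node GEMS3K call, and the MPI-level domain decomposition is only indicated in the comments. Note that in Python such threads only pay off if the underlying chemistry call releases the interpreter lock, as a native kernel such as GEMS3K would.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def solve_node_chemistry(node_state):
    """Placeholder for a per-node equilibrium call (hypothetical stand-in for GEMS3K)."""
    # CPU-bound dummy work standing in for a Gibbs energy minimization
    return math.log1p(abs(node_state)) ** 2

def chemistry_loop(node_states, n_threads=4):
    """Share the central geochemical loop between worker threads.

    In the coupled code each MPI rank would first receive its own slice of the
    finite element nodes; only the thread-level split is sketched here."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(solve_node_chemistry, node_states))

updated = chemistry_loop([0.1 * i for i in range(1000)], n_threads=8)
print(len(updated), updated[:3])
```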
Fusible pellet transport and storage of heat
NASA Technical Reports Server (NTRS)
Bahrami, P. A.
1982-01-01
A new concept for both transport and storage of heat at high temperatures and heat fluxes is introduced, and the first steps in the analysis of its feasibility are taken. The concept utilizes the high energy storage capability of materials undergoing a change of phase. The phase change material, for example a salt, is encapsulated in corrosion-resistant sealed pellets and transported in a carrier fluid to the heat source and storage. Calculations for heat transport from a typical solar collector indicate that the pellet mass flow rates are relatively small and that the required pumping power is only a small fraction of the energy transport capability of the system. Salts and eutectic salt mixtures as candidate phase change materials are examined and discussed. Finally, the time periods for melting or solidification of sodium chloride pellets are investigated and reported.
Fusible pellet transport and storage of heat
NASA Astrophysics Data System (ADS)
Bahrami, P. A.
1982-06-01
A new concept for both transport and storage of heat at high temperatures and heat fluxes is introduced, and the first steps in the analysis of its feasibility are taken. The concept utilizes the high energy storage capability of materials undergoing a change of phase. The phase change material, for example a salt, is encapsulated in corrosion-resistant sealed pellets and transported in a carrier fluid to the heat source and storage. Calculations for heat transport from a typical solar collector indicate that the pellet mass flow rates are relatively small and that the required pumping power is only a small fraction of the energy transport capability of the system. Salts and eutectic salt mixtures as candidate phase change materials are examined and discussed. Finally, the time periods for melting or solidification of sodium chloride pellets are investigated and reported.
A mechanistic model of tau amyloid aggregation based on direct observation of oligomers
NASA Astrophysics Data System (ADS)
Shammas, Sarah L.; Garcia, Gonzalo A.; Kumar, Satish; Kjaergaard, Magnus; Horrocks, Mathew H.; Shivji, Nadia; Mandelkow, Eva; Knowles, Tuomas P. J.; Mandelkow, Eckhard; Klenerman, David
2015-04-01
Protein aggregation plays a key role in neurodegenerative disease, giving rise to small oligomers that may become cytotoxic to cells. The fundamental microscopic reactions taking place during aggregation, and their rate constants, have been difficult to determine due to lack of suitable methods to identify and follow the low concentration of oligomers over time. Here we use single-molecule fluorescence to study the aggregation of the repeat domain of tau (K18), and two mutant forms linked with familial frontotemporal dementia, the deletion mutant ΔK280 and the point mutant P301L. Our kinetic analysis reveals that aggregation proceeds via monomeric assembly into small oligomers, and a subsequent slow structural conversion step before fibril formation. Using this approach, we have been able to quantitatively determine how these mutations alter the aggregation energy landscape.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and inter-storm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
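A minimal sketch of the microcanonical branching step is given below; it conserves the event depth exactly at every cascade level. The uniform weight distribution and the fixed zero-probability are simplifying assumptions standing in for the duration-dependent sigmoid parameterization described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_disaggregate(depth, n_levels=4, p_zero=0.2):
    """Microcanonical random-cascade disaggregation of one rainfall event.

    depth    : total event rainfall (mm)
    n_levels : number of binary branchings (2**n_levels sub-intervals)
    p_zero   : probability that all mass of a box goes to a single half
    """
    boxes = np.array([depth])
    for _ in range(n_levels):
        new = []
        for value in boxes:
            u = rng.random()
            if u < p_zero / 2:        # all rain into the first half
                w = 1.0
            elif u < p_zero:          # all rain into the second half
                w = 0.0
            else:                     # mass split, total conserved exactly
                w = rng.random()
            new.extend([w * value, (1.0 - w) * value])
        boxes = np.array(new)
    return boxes

event = cascade_disaggregate(depth=12.0)
print(event.round(2), "sum =", event.sum())   # sum equals the original 12 mm
```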
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-08
... product mix. That paragraph reads as follows: But assuming a situation in which there are substantial small cigar marketings in the actual "small cigar" tax category, changing the Step B method would...
Small Business Health Information Technology Financing Act
Rep. Dahlkemper, Kathleen A. [D-PA-3]
2009-06-24
Senate - 11/19/2009 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Passed House.
Small Business Microlending Expansion Act of 2009
Rep. Ellsworth, Brad [D-IN-8]
2009-10-07
Senate - 11/09/2009 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Passed House.
Small Business Financing and Investment Act of 2009
Rep. Schrader, Kurt [D-OR-5]
2009-10-20
Senate - 11/02/2009 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Passed House.
Improving Opportunities for Service-Disabled Veteran-Owned Small Businesses Act of 2013
Rep. Coffman, Mike [R-CO-6]
2013-07-31
House - 12/11/2014 Reported (Amended) by the Committee on Small Business. H. Rept. 113-662, Part I. Tracker: This bill has the status Introduced.
Small Business Development Centers Modernization Act of 2009
Rep. Schock, Aaron [R-IL-18]
2009-04-01
Senate - 11/09/2009 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Passed House.
Small Business Early-Stage Investment Act of 2009
Rep. Nye, Glenn C. [D-VA-2]
2009-10-07
Senate - 11/19/2009 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Passed House.
Teaching Oscillations with a Small Computer.
ERIC Educational Resources Information Center
Calvo, J. L.; And Others
1983-01-01
Describes a simple, inexpensive electronic circuit used as a small analog computer in an experimental approach to the study of oscillations. Includes circuit diagram and an example of the method using steps followed by students studying underdamped oscillatory motion. (JN)
Pylorus preserving loop duodeno-enterostomy with sleeve gastrectomy - preliminary results
2014-01-01
Background Bariatric operations mostly combine a restrictive gastric component with a rerouting of the intestinal passage. The pylorus can thereby be alternatively preserved or excluded. With the aim of performing a “pylorus-preserving gastric bypass”, we present early results of a proximal postpyloric loop duodeno-jejunostomy associated with a sleeve gastrectomy (LSG) compared to results of a parallel, but distal LSG with a loop duodeno-ileostomy as a two-step procedure. Methods 16 patients underwent either a two-step LSG with a distal loop duodeno-ileostomy (DIOS) as revisional bariatric surgery or a combined single-step operation with a proximal duodeno-jejunostomy (DJOS). Total small intestinal length was determined to account for inter-individual differences. Results Mean operative time for the second step of the DIOS operation was 121 min and 147 min for the combined DJOS operation. The overall intestinal length was 750.8 cm (range 600-900 cm) with a bypassed limb length of 235.7 cm in DJOS patients. The mean length of the common channel in DIOS patients measured 245.6 cm. Overall excess weight loss (%EWL) of the two-step DIOS procedure came to 38.31% and 49.60%, and DJOS patients experienced an %EWL of 19.75% and 46.53%, at 1 and 6 months respectively. No complication related to the duodeno-enterostomy occurred. Conclusions Loop duodeno-enterostomies with sleeve gastrectomy can be safely performed and may open new alternatives in bariatric surgery with the possibility for inter-individual adaptation. PMID:24725654
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are nearly aligned. Because of the shape of the curve relating input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size the shift estimation is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
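The artifact can be reproduced in a few lines: the sketch below estimates the sub-pixel shift of a synthetic one-dimensional scene by correlation with parabolic peak interpolation, and the estimates are biased toward integer offsets when both signals share the same pixel grid. The scene model, signal length and function names are assumptions for illustration, not the GOES-R IVV software.

```python
import numpy as np

def sample_scene(shift, n=64, width=3.0):
    """Sample a continuous Gaussian 'scene' onto an integer pixel grid,
    displaced by a sub-pixel amount `shift`."""
    x = np.arange(n, dtype=float)
    return np.exp(-((x - n / 2 - shift) / width) ** 2)

def estimate_shift(ref, img):
    """Integer correlation peak refined by a 3-point parabolic fit."""
    corr = np.correlate(img, ref, mode="full")
    k = int(np.argmax(corr))
    c_m, c_0, c_p = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (c_m - c_p) / (c_m - 2.0 * c_0 + c_p)
    return (k - (len(ref) - 1)) + frac

ref = sample_scene(0.0)
true = np.linspace(0.0, 2.0, 41)
est = np.array([estimate_shift(ref, sample_scene(s)) for s in true])
# The estimated-vs-true curve typically departs from the 1:1 line between
# integer shifts ("pixel locking"), analogous to the stair-step artifact.
print(np.column_stack([true, est]).round(3))
```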
Organic thin film transistor with a simplified planar structure
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yu, Jungsheng; Zhong, Jian; Jiang, Yadong
2009-05-01
Organic thin film transistor (OTFT) with a simplified planar structure is described. The gate electrode and the source/drain electrodes of the OTFT are processed in one planar structure. These three electrodes are deposited on the glass substrate by DC sputtering using a Cr/Ni target. The electrode layouts of different width-to-length ratios are then made by photolithography at the same time. Only one deposition step and one photolithography step are needed, while the conventional process takes at least two deposition steps and two photolithography steps: metal is first prepared on the other side of the glass substrate and the electrode is formed by photolithography, and the source/drain electrodes are then prepared by deposition and photolithography on the side with the insulation layer. Compared to the conventional process for OTFTs, the process in this work is simplified. After the three electrodes are prepared, the insulation layer is made by the spin-coating method. The organic material polyimide is used as the insulation layer. The small-molecule material pentacene is evaporated onto the insulation layer by vacuum deposition as the active layer. The whole OTFT process needs only three steps in total. A semi-automatic probe stage is used to connect the three electrodes to the probes of the test instrument. A charge carrier mobility of 0.3 cm²/V·s and an on/off current ratio of 10⁵ are obtained from OTFTs on glass substrates. The planar-structure OTFTs made with this simplified process reduce processing complexity and fabrication cost.
A computational kinetic model of diffusion for molecular systems.
Teo, Ivan; Schulten, Klaus
2013-09-28
Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely, diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized through the protein's geometry, surrounding electrostatic field, and location. In order to account for solute energetics and mobility in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion based on a Markov State Model framework. Prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance, ecMscS.
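A minimal sketch of the kind of Brownian-motion update that underlies such a particle-based kinetic model is shown below, in reduced units and with a constant local diffusion coefficient. It is not the Voronoi-grid Markov State Model of the paper, and the harmonic potential of mean force is purely illustrative; the constant-D update also omits the divergence-of-D drift term that a fully general algorithm would include.

```python
import numpy as np

def brownian_step(x, dt, diffusion, pmf_grad, kT=1.0, rng=None):
    """One overdamped Brownian-dynamics step in reduced units.

    diffusion : local diffusion coefficient D
    pmf_grad  : local gradient of the potential of mean force dW/dx
    """
    rng = rng or np.random.default_rng()
    drift = -diffusion * pmf_grad / kT * dt
    noise = np.sqrt(2.0 * diffusion * dt) * rng.standard_normal()
    return x + drift + noise

# Example: diffusion in a harmonic PMF W(x) = 0.5 * k_spring * x**2
rng = np.random.default_rng(1)
x, k_spring, D, dt = 2.0, 1.0, 0.1, 0.01
traj = []
for _ in range(50_000):
    x = brownian_step(x, dt, D, k_spring * x, rng=rng)
    traj.append(x)
print(np.mean(traj), np.var(traj))   # variance should approach kT / k_spring = 1
```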
Adams, Marc A; Hurley, Jane C; Todd, Michael; Bhuiyan, Nishat; Jarrett, Catherine L; Tucker, Wesley J; Hollingshead, Kevin E; Angadi, Siddhartha S
2017-03-29
Emerging interventions that rely on and harness variability in behavior to adapt to individual performance over time may outperform interventions that prescribe static goals (e.g., 10,000 steps/day). The purpose of this factorial trial was to compare adaptive vs. static goal setting and immediate vs. delayed, non-contingent financial rewards for increasing free-living physical activity (PA). A 4-month 2 × 2 factorial randomized controlled trial tested main effects for goal setting (adaptive vs. static goals) and rewards (immediate vs. delayed) and interactions between factors to increase steps/day as measured by a Fitbit Zip. Moderate-to-vigorous PA (MVPA) minutes/day was examined as a secondary outcome. Participants (N = 96) were mainly female (77%), aged 41 ± 9.5 years, and all were insufficiently active and overweight/obese (mean BMI = 34.1 ± 6.2). Participants across all groups increased by 2389 steps/day on average from baseline to intervention phase (p < .001). Participants receiving static goals showed a stronger increase in steps per day from baseline phase to intervention phase (2630 steps/day) than those receiving adaptive goals (2149 steps/day; difference = 482 steps/day, p = .095). Participants receiving immediate rewards showed stronger improvement (2762 step/day increase) from baseline to intervention phase than those receiving delayed rewards (2016 steps/day increase; difference = 746 steps/day, p = .009). However, the adaptive goals group showed a slower decrease in steps/day from the beginning of the intervention phase to the end of the intervention phase (i.e. less than half the rate) compared to the static goals group (-7.7 steps vs. -18.3 steps each day; difference = 10.7 steps/day, p < .001) resulting in better improvements for the adaptive goals group by study end. Rate of change over the intervention phase did not differ between reward groups. Significant goal phase x goal setting x reward interactions were observed. Adaptive goals outperformed static goals (i.e., 10,000 steps) over a 4-month period. Small immediate rewards outperformed larger, delayed rewards. Adaptive goals with either immediate or delayed rewards should be preferred for promoting PA. ClinicalTrials.gov ID: NCT02053259 registered prospectively on January 31, 2014.
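As a toy illustration of adaptive goal setting, the sketch below derives the next day's goal from a percentile of the previous week's step counts; the percentile rule, window length and floor value are assumptions for illustration only, since the abstract does not specify the trial's actual shaping algorithm.

```python
import numpy as np

def adaptive_goal(recent_steps, percentile=60, floor=2000):
    """Next-day step goal from a rolling window of recent performance.

    The percentile-of-recent-days rule is an illustrative assumption, not the
    trial's algorithm; a static-goal arm would simply return 10_000 instead.
    """
    return max(int(np.percentile(recent_steps, percentile)), floor)

history = [4200, 5100, 3800, 6050, 4900, 5300, 4500, 6100, 5800]
print(adaptive_goal(history[-7:]))   # goal adapts to the last week of data
```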
A Reference Unit on Home Vegetable Gardening.
ERIC Educational Resources Information Center
McCully, James S., Comp.; And Others
Designed to provide practical, up-to-date, basic information on home gardening for vocational agriculture students with only a limited knowledge of vegetable gardening, this reference unit includes step-by-step procedures for planning, planting, cultivating, harvesting, and processing vegetables in a small plot. Topics covered include plot…
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching the maximum stress at the initial stage of shearing, and then a stress decay towards a steady state is observed as the shearing time is increased in both start-up shear flow fields. The shear stress drops suddenly immediately after the imposed shear rate is stopped, and then slowly decays during the rest time. (ii) With increasing rest time, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state with increasing shearing time in each step shear flow region. The time needed to reach the maximum stress value is shortened as the step-increased shear rate is raised. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth towards an equilibrium state with increasing shearing time in each step shear flow region. The time needed to reach the minimum stress value is lengthened as the step-decreased shear rate is lowered.
A diffusive information preservation method for small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2013-06-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Similar to the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ≈ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they allow the use of a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
NASA Technical Reports Server (NTRS)
Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.
1993-01-01
Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
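The idea of an exact (as opposed to Newmark-Beta) time step for an uncoupled modal equation can be illustrated as follows: for a modal force held constant over the step, the damped oscillator is propagated analytically from its current state. This is a generic sketch of exact modal integration, not the MSC/NASTRAN DMAP implementation, and all names and parameter values are illustrative.

```python
import numpy as np

def exact_modal_step(q, v, f, omega, zeta, h):
    """Advance one underdamped modal equation
         q'' + 2*zeta*omega*q' + omega**2 * q = f
    exactly over a step h, assuming the modal force f is constant on the step."""
    wd = omega * np.sqrt(1.0 - zeta ** 2)      # damped natural frequency
    qp = f / omega ** 2                        # particular (static) solution
    A = q - qp
    B = (v + zeta * omega * A) / wd
    e = np.exp(-zeta * omega * h)
    c, s = np.cos(wd * h), np.sin(wd * h)
    q_new = qp + e * (A * c + B * s)
    v_new = e * ((B * wd - zeta * omega * A) * c - (A * wd + zeta * omega * B) * s)
    return q_new, v_new

# Free vibration from an initial displacement: amplitude decays as exp(-zeta*omega*t)
q, v = 1.0, 0.0
for _ in range(100):
    q, v = exact_modal_step(q, v, f=0.0, omega=2.0 * np.pi, zeta=0.02, h=0.01)
print(q, v)
```

Because each modal equation is uncoupled, such a step can be applied mode by mode with initial conditions, independent of the stability and step-size concerns of a numerical integrator.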
McIntosh, Catherine; Dexter, Franklin; Epstein, Richard H
2006-12-01
In this tutorial, we consider the impact of operating room (OR) management on anesthesia group and OR labor productivity and costs. Most of the tutorial focuses on the steps required for each facility to refine its OR allocations using its own data collected during patient care. Data from a hospital in Australia are used throughout to illustrate the methods. OR allocation is a two-stage process. During the initial tactical stage of allocating OR time, OR capacity ("block time") is adjusted. For operational decision-making on a shorter-term basis, the existing workload can be considered fixed. Staffing is matched to that workload based on maximizing the efficiency of use of OR time. Scheduling cases and making decisions on the day of surgery to increase OR efficiency are worthwhile interventions to increase anesthesia group productivity. However, by far, the most important step is the appropriate refinement of OR allocations (i.e., planning service-specific staffing) 2-3 mo before the day of surgery. Reducing surgical and/or turnover times and delays in first-case-of-the-day starts generally provides small reductions in OR labor costs. Results vary widely because they are highly sensitive both to the OR allocations (i.e., staffing) and to the appropriateness of those OR allocations.
A Microglitch in the Millisecond Pulsar PSR B1821-24 in M28
NASA Astrophysics Data System (ADS)
Cognard, Ismaël; Backer, Donald C.
2004-09-01
We report the first observation of a very small glitch in a millisecond pulsar, PSR B1821-24, located in the globular cluster M28. Timing observations were mainly conducted with the Nançay radio telescope (France), and confirmation comes from the 140 ft radio telescope at Green Bank and the new Green Bank Telescope data. This event is characterized by a rotation frequency step of 3 nHz, or 10⁻¹¹ in fractional frequency change, along with a short duration limited to a few days or a week. A marginally significant frequency derivative step was also found. This glitch follows the main characteristics of those in the slow-period pulsars but is 2 orders of magnitude smaller than the smallest ever recorded. Such an event must be very rare for millisecond pulsars, since no other glitches have been detected even though the cumulative number of years of millisecond pulsar timing observations up to 2001 is around 500 for all these objects. However, pulsar PSR B1821-24 is one of the youngest among the old recycled ones, and there is likely a correlation between age, or a related parameter, and timing noise. While this event happens on a much smaller scale, the required adjustment of the star to a new equilibrium figure as it spins down is a likely common cause for all glitches.
Leipert, Jan; Treitz, Christian; Leippe, Matthias; Tholey, Andreas
2017-12-01
N-acyl homoserine lactones (AHL) are small signal molecules involved in the quorum sensing of many gram-negative bacteria, and play an important role in biofilm formation and pathogenesis. Present analytical methods for identification and quantification of AHL require time-consuming sample preparation steps and are hampered by the lack of appropriate standards. Aiming at a fast and straightforward method for AHL analytics, we investigated the applicability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Suitable MALDI matrices, including crystalline and ionic liquid matrices, were tested, and the fragmentation of different AHL in collision-induced dissociation MS/MS was studied, providing information about characteristic marker fragment ions. Employing small-scale synthesis protocols, we established a versatile and cost-efficient procedure for fast generation of isotope-labeled AHL standards, which can be used without extensive purification and yielded accurate standard curves. Quantitative analysis was possible in the low picomolar range, with lower limits of quantification ranging from 1 to 5 pmol for different AHL. The developed methodology was successfully applied in a quantitative MALDI MS analysis of low-volume culture supernatants of Pseudomonas aeruginosa.
NASA Astrophysics Data System (ADS)
Leipert, Jan; Treitz, Christian; Leippe, Matthias; Tholey, Andreas
2017-12-01
N-acyl homoserine lactones (AHL) are small signal molecules involved in the quorum sensing of many gram-negative bacteria, and play an important role in biofilm formation and pathogenesis. Present analytical methods for identification and quantification of AHL require time-consuming sample preparation steps and are hampered by the lack of appropriate standards. Aiming at a fast and straightforward method for AHL analytics, we investigated the applicability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Suitable MALDI matrices, including crystalline and ionic liquid matrices, were tested, and the fragmentation of different AHL in collision-induced dissociation MS/MS was studied, providing information about characteristic marker fragment ions. Employing small-scale synthesis protocols, we established a versatile and cost-efficient procedure for fast generation of isotope-labeled AHL standards, which can be used without extensive purification and yielded accurate standard curves. Quantitative analysis was possible in the low picomolar range, with lower limits of quantification ranging from 1 to 5 pmol for different AHL. The developed methodology was successfully applied in a quantitative MALDI MS analysis of low-volume culture supernatants of Pseudomonas aeruginosa.
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach the steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity can resolve not only the steady state solution, but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.
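One commonly used rule of this kind selects the step from the instantaneous rate of change of the energy, as in the hedged sketch below; the specific formula and constants are assumptions for illustration rather than the scheme analyzed in the paper.

```python
import numpy as np

def adaptive_dt(energy_rate, dt_min=1e-3, dt_max=1.0, alpha=1e3):
    """Time step selected from the current rate of change of the free energy.

    Illustrative rule (an assumption, not necessarily the paper's formula):
    small steps while the energy changes quickly, steps approaching dt_max as
    the solution nears steady state."""
    return max(dt_min, dt_max / np.sqrt(1.0 + alpha * energy_rate ** 2))

# Energy decays quickly at first, then flattens out: dt grows accordingly.
for dE_dt in (10.0, 1.0, 0.1, 1e-3):
    print(dE_dt, adaptive_dt(dE_dt))
```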
Maximum Entropy Method applied to Real-time Time-Dependent Density Functional Theory
NASA Astrophysics Data System (ADS)
Zempo, Yasunari; Toogoshi, Mitsuki; Kano, Satoru S.
The Maximum Entropy Method (MEM) is widely used for the analysis of time-series data, such as earthquake records, which have fairly long periodicities but short observation windows. We have examined the application of MEM to the optical analysis of time-series data from real-time TDDFT. Usually the Fourier transform (FT) is used in this analysis, and particular attention must be paid to the lower-energy part of the spectrum, such as the band gap, which requires long time evolution; the computational cost therefore becomes quite high. Since MEM is based on the autocorrelation of the signal, in which periodicity is described through differences of time lags, its resolution at lower energies is naturally poorer than at higher energies. To overcome this difficulty, our MEM has two features: the raw data are repeated many times and concatenated, which improves the resolution in the lower-energy region; and, together with the repeated data, an appropriate phase for the target frequency is introduced to reduce the side effects of the artificial periodicity. We have compared our improved MEM and FT spectra for small-to-medium-sized molecules. MEM gives clearly resolved spectra compared to FT and provides higher resolution in fewer time steps. This work was partially supported by JSPS Grants-in-Aid for Scientific Research (C) Grant number 16K05047, Sumitomo Chemical, Co. Ltd., and Simulatio Corp.
Shear-rate dependence of the viscosity of the Lennard-Jones liquid at the triple point
NASA Astrophysics Data System (ADS)
Ferrario, M.; Ciccotti, G.; Holian, B. L.; Ryckaert, J. P.
1991-11-01
High-precision molecular-dynamics (MD) data are reported for the shear viscosity η of the Lennard-Jones liquid at its triple point, as a function of the shear rate ε̇ for a large system (N=2048). The Green-Kubo (GK) value η(ε̇=0) = 3.24 ± 0.04 is estimated from a run of 3.6×10⁶ steps (40 nsec). We find no numerical evidence of a t⁻³/² long-time tail for the GK integrand (stress-stress time-correlation function). From our nonequilibrium MD results, obtained both at small and large values of ε̇, a consistent picture emerges that supports an analytical (quadratic at low shear rate) dependence of the viscosity on ε̇.
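For context, the Green-Kubo estimate amounts to integrating the stress autocorrelation function, as in the short sketch below; the synthetic AR(1) "stress" series, the cutoff lag and the unit reduced volume and temperature are assumptions used only to make the example self-contained, not data from the paper.

```python
import numpy as np

def green_kubo_viscosity(stress_xy, dt, volume, kT, max_lag):
    """Shear viscosity  eta = V/(kB*T) * integral_0^tmax <s_xy(0) s_xy(t)> dt,
    with the stress autocorrelation estimated from the time series via FFT."""
    n = len(stress_xy)
    s = stress_xy - stress_xy.mean()
    spec = np.fft.rfft(s, 2 * n)
    acf = np.fft.irfft(spec * np.conj(spec))[:max_lag]   # pair sums at each lag
    acf /= np.arange(n, n - max_lag, -1)                 # -> unbiased averages
    return volume / kT * np.trapz(acf, dx=dt)

# Synthetic exponentially correlated 'stress' standing in for MD output; for
# this AR(1) signal the exact Green-Kubo integral is variance * tau = 0.5.
rng = np.random.default_rng(3)
n, dt, tau = 200_000, 0.005, 0.5
rho = np.exp(-dt / tau)
noise = rng.standard_normal(n) * np.sqrt(1.0 - rho ** 2)
stress = np.empty(n)
stress[0] = rng.standard_normal()
for i in range(1, n):
    stress[i] = rho * stress[i - 1] + noise[i]
print(green_kubo_viscosity(stress, dt, volume=1.0, kT=1.0, max_lag=2000))
```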
Spin-Precession Organic Magnetic Sensor
2012-06-01
magnetically— a new half-metal CFAS that has desirable properties for use at room temperature; (2) fabricated several nonlocal devices with CFAS and polymer... [Figure 2: Magnetic properties of CFAS layers measured...: saturation magnetization M_s (emu/cc) versus temperature (°C) for the one-step and two-step processes.] ...temperature-independent for the two-step process. We also measured the transport properties of CFAS layers. The electrical resistivity is small (~60
A 10-step safety management framework for construction small and medium-sized enterprises.
Gunduz, Murat; Laitinen, Heikki
2017-09-01
It is of great importance to develop an occupational health and safety management system (OHS MS) to form a systemized approach to improve health and safety. It is a known fact that thousands of accidents and injuries occur in the construction industry. Most of these accidents occur in small and medium-sized enterprises (SMEs). This article provides a 10-step user-friendly OHS MS for the construction industry. A quantitative OHS MS indexing method is also introduced in the article. The practical application of the system to real SMEs and its promising results are also presented.
Small self-contained payload overview. [Space Shuttle Getaway Special project management
NASA Technical Reports Server (NTRS)
Miller, D. S.
1981-01-01
The low-cost Small Self-Contained Payload Program, also known as the Getaway Special, initiated by NASA for providing a stepping stone to larger scientific and manufacturing payloads, is presented. The steps of 'getting on board,' the conditions of use, the reimbursement policy and the procedures, and the flight scheduling mechanism for flying the Getaway Special payload are given. The terms and conditions, and the interfaces between NASA and the users for entering into an agreement with NASA for launch and associated services are described, as are the philosophy and the rationale for establishing the policy and the procedures.
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
A bill to repeal the American Recovery Capital loan program of the Small Business Administration.
Sen. Snowe, Olympia J. [R-ME]
2009-11-16
Senate - 11/16/2009 Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Introduced.
Small Business Disaster Readiness and Reform Act of 2009
Rep. Griffith, Parker [D-AL-5]
2009-10-07
Senate - 11/09/2009 Received in the Senate and Read twice and referred to the Committee on Small Business and Entrepreneurship. Tracker: This bill has the status Passed House.
Small Steps towards Student-Centred Learning
ERIC Educational Resources Information Center
Jacobs, George M.; Toh-Heng, Hwee Leng
2013-01-01
Student-centred learning classroom practices are contrasted with those in teacher-centred learning classrooms. The discussion focuses on the theoretical underpinnings of the former, and provides nine steps and tips on how to implement student-centred learning strategies, with the aim of developing the 21st century skills of self-directed and…
Marketing Research. Instructor's Manual.
ERIC Educational Resources Information Center
Small Business Administration, Washington, DC.
Prepared for the Administrative Management Course Program, this instructor's manual was developed to serve small-business management needs. The sections of the manual are as follows: (1) Lesson Plan--an outline of material covered, which may be used as a teaching guide, presented in two columns: the presentation, and a step-by-step indication of…
Microcomputers in Transit: A Needs Assessment and Implementation Handbook. Final Report.
ERIC Educational Resources Information Center
Wyatt, Eve; Smerk, George
This handbook describes a practical step-by-step process for introducing microcomputers to small- and medium-sized transit operating agencies. The introductory chapter deals with the objective of buying a microcomputer system, the characteristics of microcomputers, microcomputer software, microcomputer system components, and issues faced in…
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
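For comparison, a plain two-level multiple time step (r-RESPA-style) integrator is sketched below for a one-dimensional particle with a stiff "fast" force and a weak "slow" force. It illustrates the standard reversible scheme whose resonance limitations motivate the isokinetic methods discussed above; it is not the Nosé-Hoover-based algorithm of the paper, and all names and parameters are illustrative.

```python
def respa_step(x, v, fast_force, slow_force, mass, dt_outer, n_inner):
    """One two-level r-RESPA step: the slow force is applied with the outer
    time step, the fast force with dt_outer / n_inner (standard scheme, not
    the isokinetic/Nose-Hoover variant described in the paper)."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * slow_force(x) / mass       # half kick, slow force
    for _ in range(n_inner):                            # inner velocity Verlet
        v = v + 0.5 * dt_inner * fast_force(x) / mass
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * fast_force(x) / mass
    v = v + 0.5 * dt_outer * slow_force(x) / mass       # half kick, slow force
    return x, v

# Example: stiff 'bond' (fast) plus a weak external spring (slow)
fast = lambda x: -100.0 * x
slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, fast, slow, mass=1.0, dt_outer=0.05, n_inner=10)
print(x, v)
```

In such a scheme the outer step cannot be pushed much past a fraction of the fast period before resonance instabilities appear, which is exactly the limitation the resonance-free isokinetic integrators remove.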
Currie, Fredrik; Jarvoll, Patrik; Holmberg, Krister; Romsted, Laurence S; Gunaseelan, Krishnan
2007-08-15
High-field (800 MHz) ¹H NMR was used to monitor the two-step consecutive reaction of excess SO₃²⁻ with symmetrical bifunctional α,ω-dibromoalkanes with butane (DBB), hexane (DBH), octane (DBO), and decane (DBD) chains in CTAB micelles at 25 °C. The first-order rate constant for the first substitution step for DBB and DBH is about five times that for the second, but the kinetics for DBO and DBD were not cleanly first-order. After 40 min, the solution contained about 80% of the intermediate bromoalkanesulfonate from DBB and DBH, and the remainder was alkanedisulfonate and unreacted starting material. The same reactions were carried out in homogeneous MeOH/D₂O solutions at 50 °C. The kinetics for all four α,ω-dibromoalkanes were first-order throughout the time course of the reaction, and the rate constants were the same within ±10%. However, because micellar solutions are organized on the nanoscale and bring lipophilic and hydrophilic reactants together into a small reaction volume at the micellar interface, they speed this substitution reaction considerably compared with reaction in MeOH/D₂O. The CTAB micelles also induce a significant regioselectivity in product formation by speeding the first step of the consecutive reaction more than the second. The results are consistent with the bromoalkanesulfonate intermediates having a radial orientation within the micelles, with the -CH₂SO₃⁻ group in the interfacial region and the -CH₂Br group directed into the micellar core, such that the concentration of -CH₂Br groups in the reactive zone, i.e., the micellar interface, is significantly reduced. These results provide the first example of a self-assembled surfactant system altering the relative rates of the steps of a consecutive reaction and, in doing so, enhancing monosubstitution of a symmetrically disubstituted species.
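The two-step consecutive kinetics can be illustrated with the standard analytical solution for A -> B -> C under pseudo-first-order conditions, sketched below in Python. The rate constants are arbitrary placeholders chosen only to show how an intermediate accumulates when the first step is roughly five times faster than the second; they are not the fitted values from this study.

    import numpy as np

    k1, k2 = 5.0e-3, 1.0e-3        # s^-1, assumed pseudo-first-order rate constants
    t = np.linspace(0, 2400, 7)    # s (0 to 40 min)

    A = np.exp(-k1 * t)                                        # dibromide
    B = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))   # bromo-sulfonate intermediate
    C = 1.0 - A - B                                            # disulfonate

    for ti, a, b, c in zip(t / 60, A, B, C):
        print(f"t = {ti:4.0f} min  A = {a:.2f}  intermediate B = {b:.2f}  C = {c:.2f}")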
Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit
2010-12-01
If balance is lost, quick step execution can prevent falls. Research has shown that the speed of voluntary stepping can predict future falls in older adults. The aim of the study was to investigate voluntary stepping behavior, and to compare timing and leg push-off force-time parameters of the involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task, and while simultaneously performing a separate attention-demanding task. Temporal parameters related to step time were measured, including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had significantly slower stepping times than the uninvolved legs, due to increased swing phase duration, in both single- and dual-task conditions. Under dual-task compared with single-task conditions, stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force-time parameters differed significantly between both legs of stroke survivors and healthy controls, with no significant effect of dual- compared with single-task conditions in either group. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution of a fast step when balance is lost, thus increasing the likelihood of falls in stroke survivors. Copyright © 2010 Elsevier Ltd. All rights reserved.
A small Unix-based data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, D.; Glanzman, T.
1994-02-01
The proposed SLAC B Factory detector plans to use Unix-based machines for all aspects of computing, including real-time data acquisition and experimental control. An R&D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R&D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R&D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, the authors have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
Following in Real Time the Two-Step Assembly of Nanoparticles into Mesocrystals in Levitating Drops.
Agthe, Michael; Plivelic, Tomás S; Labrador, Ana; Bergström, Lennart; Salazar-Alvarez, German
2016-11-09
Mesocrystals composed of crystallographically aligned nanocrystals are present in biominerals and assembled materials which show strongly directional properties of importance for mechanical protection and functional devices. Mesocrystals are commonly formed by complex biomineralization processes and can also be generated by assembly of anisotropic nanocrystals. Here, we follow the evaporation-induced assembly of maghemite nanocubes into mesocrystals in real time in levitating drops. Analysis of time-resolved small-angle X-ray scattering data and ex situ scanning electron microscopy together with interparticle potential calculations show that the substrate-free, particle-mediated crystallization process proceeds in two stages involving the formation and rapid transformation of a dense, structurally disordered phase into ordered mesocrystals. Controlling and tailoring the particle-mediated formation of mesocrystals could be utilized to assemble designed nanoparticles into new materials with unique functions.
Repliscan: a tool for classifying replication timing regions.
Zynda, Gregory J; Song, Jawon; Concia, Lorenzo; Wear, Emily E; Hanley-Bowdoin, Linda; Thompson, William F; Vaughn, Matthew W
2017-08-07
Replication timing experiments that use label incorporation and high throughput sequencing produce peaked data similar to ChIP-Seq experiments. However, the differences in experimental design, coverage density, and possible results make traditional ChIP-Seq analysis methods inappropriate for use with replication timing. To accurately detect and classify regions of replication across the genome, we present Repliscan. Repliscan robustly normalizes, automatically removes outlying and uninformative data points, and classifies Repli-seq signals into discrete combinations of replication signatures. The quality control steps and self-fitting methods make Repliscan generally applicable and more robust than previous methods that classify regions based on thresholds. Repliscan is simple and effective to use on organisms with different genome sizes. Even with analysis window sizes as small as 1 kilobase, reliable profiles can be generated with as little as 2.4x coverage.
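A deliberately simplified sketch of the general idea, calling a per-window replication class from early/mid/late Repli-seq signals, is given below in Python. Repliscan itself uses robust normalization and self-fitting rather than the fixed ratio cutoff assumed here, and the window values are synthetic.

    import numpy as np

    windows = {
        "chr1:0-1000":    {"E": 9.0, "M": 2.0, "L": 1.0},
        "chr1:1000-2000": {"E": 5.0, "M": 6.0, "L": 1.5},
        "chr1:2000-3000": {"E": 0.5, "M": 1.0, "L": 8.0},
    }

    def call_class(sig, cutoff=0.5):
        vals = np.array([sig["E"], sig["M"], sig["L"]], float)
        ratio = vals / vals.max()                   # scale to the strongest fraction
        labels = np.array(["E", "M", "L"])[ratio >= cutoff]
        return "".join(labels)                      # e.g. "EM" = early-to-mid replicating

    for w, sig in windows.items():
        print(w, "->", call_class(sig))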
High-Order Space-Time Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2013-01-01
Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
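As a concrete illustration of the implicit side, the sketch below applies the standard two-stage, third-order Radau IIA collocation method (the RK family the abstract refers to) to the stiff linear test equation y' = λy in Python. The tableau coefficients are the standard ones; the test problem and step size are arbitrary.

    import numpy as np

    A = np.array([[5/12, -1/12],
                  [3/4,   1/4]])     # Radau IIA (s = 2) coefficient matrix
    b = np.array([3/4, 1/4])

    def radau_iia_step(y, lam, dt):
        # Stage values Y solve the linear system (I - dt*lam*A) Y = y * 1
        Y = np.linalg.solve(np.eye(2) - dt * lam * A, y * np.ones(2))
        return y + dt * lam * b @ Y

    lam, dt, y = -50.0, 0.1, 1.0     # stiff decay, deliberately large time step
    for n in range(10):
        y = radau_iia_step(y, lam, dt)
    print("numerical:", y, "  exact:", np.exp(lam * 1.0))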
The Strong Lensing Time Delay Challenge (2014)
NASA Astrophysics Data System (ADS)
Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.
2014-01-01
Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment, the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. Hundreds of such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver of order 1000 time delays in the 2020s. In order to exploit this bounty of lenses, we need to make sure that time delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement, we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template. The non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1. TDC1 consists of thousands of light curves, a number sufficient to test precision and accuracy at the sub-percent level necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness-of-fit, efficiency, precision, and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.
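For readers unfamiliar with the basic task, the following Python sketch estimates the delay between two synthetic light curves by maximizing the cross-correlation over a grid of trial delays. It is not one of the challenge algorithms, and it assumes uniform sampling corrupted only by white noise, which realistic lensed-quasar light curves generally do not satisfy.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 500.0, 1.0)                  # days, uniform sampling assumed
    signal = np.sin(2 * np.pi * t / 90.0) + 0.3 * np.sin(2 * np.pi * t / 23.0)
    true_delay = 14.0
    img_a = signal + 0.05 * rng.standard_normal(t.size)
    img_b = np.interp(t - true_delay, t, signal) + 0.05 * rng.standard_normal(t.size)

    def estimate_delay(t, a, b, trial_delays):
        scores = []
        for d in trial_delays:
            b_shift = np.interp(t, t - d, b)        # shift image B by the trial delay
            scores.append(np.corrcoef(a, b_shift)[0, 1])
        return trial_delays[int(np.argmax(scores))]

    trials = np.arange(-30.0, 30.5, 0.5)
    print("estimated delay:", estimate_delay(t, img_a, img_b, trials), "days")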
Productivity improvement through cycle time analysis
NASA Astrophysics Data System (ADS)
Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio
1996-09-01
A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (formerly AT&T) in Madrid, Spain. It is based on a comparison of the contribution of each process step in each technology with a target generated by a cycle time model. These targeted cycle times are obtained using capacity data of the machines processing those steps, queueing theory, and theory of constraints (TOC) principles (buffers to protect the bottleneck and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE)-like analysis is performed on the machine groups with major differences between their target and actual cycle times. Comparisons between the current values of the parameters that determine their capacity (process times, availability, idle times, reworks, etc.) and the engineering standards are made to detect why a group exceeds its cycle time contribution. Several user-friendly graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interrelation problems among machines, procedures and direct labor. Performer is designed for detailed, daily analysis of an isolated machine. Extensive use of this tool by the whole labor force has eliminated many small inefficiencies, with direct positive implications for OEE. ASAP shows the lots in process or in queue for different machines at the same time. ASAP is a powerful tool to analyze product flow management and the assigned capacity for interdependent operations such as cleaning and oxidation/diffusion. Additional tools have been developed to track, analyze and improve process times and availability.
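The cycle-time targets themselves are not specified in the abstract; as a hedged illustration of how queueing theory can generate such targets, the Python sketch below uses Kingman's (VUT) approximation to combine raw process time, utilisation, and variability into a per-step cycle-time target. All step names, process times, utilisations, and variability coefficients are invented for illustration and are not the facility's model.

    def target_cycle_time(pt_hours, utilisation, ca2=1.0, ce2=1.0):
        """Raw process time plus expected queue time (Kingman/VUT approximation)."""
        queue = ((ca2 + ce2) / 2.0) * (utilisation / (1.0 - utilisation)) * pt_hours
        return pt_hours + queue

    steps = [                         # (step, process time [h], utilisation)
        ("clean",       0.8, 0.70),
        ("oxidation",   3.0, 0.85),
        ("lithography", 1.2, 0.92),   # bottleneck: protect it with a buffer
    ]
    for name, pt, u in steps:
        ct = target_cycle_time(pt, u)
        print(f"{name:12s} target CT = {ct:5.1f} h  (x{ct / pt:.1f} of raw process time)")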
GBAS Ionospheric Anomaly Monitoring Based on a Two-Step Approach
Zhao, Lin; Yang, Fuxin; Li, Liang; Ding, Jicheng; Zhao, Yuxin
2016-01-01
As one significant component of space environmental weather, the ionosphere has to be monitored using Global Positioning System (GPS) receivers for the Ground-Based Augmentation System (GBAS). This is because an ionospheric anomaly can pose a potential threat to GBAS support of safety-critical services. The traditional code-carrier divergence (CCD) methods, which have been widely used to detect variations of the ionospheric gradient for GBAS, adopt a linear time-invariant low-pass filter to suppress the effect of high-frequency noise on the detection of the ionospheric anomaly. However, there is a trade-off between response time and estimation accuracy due to the fixed time constants. To relax this limitation, a two-step approach (TSA) is proposed that integrates the cascaded linear time-invariant low-pass filters with an adaptive Kalman filter to detect the ionospheric gradient anomaly. The performance of the proposed method is tested using simulated and real-world data, respectively. The simulation results show that the TSA can detect ionospheric gradient anomalies quickly, even when the noise is more severe. Compared with the traditional CCD methods, the experiments on real-world GPS data indicate that the average estimation accuracy of the ionospheric gradient improves by more than 31.3%, and the average response time to an ionospheric gradient at a rate of 0.018 m/s improves by more than 59.3%, which demonstrates the ability of the TSA to detect a small ionospheric gradient more rapidly. PMID:27240367
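The two ingredients combined in the TSA can be sketched as follows in Python: a first-order linear time-invariant low-pass filter applied to a noisy divergence-rate series, and a scalar random-walk Kalman filter estimating the same quantity. The filter time constant, noise levels, epoch interval, and measurement model are assumptions for illustration, not the paper's tuned values.

    import numpy as np

    def lowpass(x, dt, tau):
        """First-order IIR low-pass smoother with time constant tau."""
        y = np.empty_like(x)
        y[0] = x[0]
        alpha = dt / (tau + dt)
        for k in range(1, x.size):
            y[k] = y[k - 1] + alpha * (x[k] - y[k - 1])
        return y

    def kalman_rate(z, q=1e-6, r=1e-2):
        """Scalar Kalman filter for a slowly varying divergence rate (random-walk model)."""
        x_hat, p, est = 0.0, 1.0, []
        for zk in z:
            p += q                                  # predict
            k_gain = p / (p + r)                    # update
            x_hat += k_gain * (zk - x_hat)
            p *= (1.0 - k_gain)
            est.append(x_hat)
        return np.array(est)

    dt = 0.5                                        # s, epoch interval (assumed)
    t = np.arange(0, 600, dt)
    true_rate = np.where(t > 300, 0.018, 0.0)       # m/s gradient onset at t = 300 s
    meas = true_rate + 0.05 * np.random.randn(t.size)
    print("final estimates:", lowpass(meas, dt, tau=30.0)[-1], kalman_rate(meas)[-1])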
Resolving discrete pulsar spin-down states with current and future instrumentation
NASA Astrophysics Data System (ADS)
Shaw, B.; Stappers, B. W.; Weltevrede, P.
2018-04-01
An understanding of pulsar timing noise offers the potential to improve the timing precision of a large number of pulsars as well as facilitating our understanding of pulsar magnetospheres. For some sources, timing noise is attributable to a pulsar switching between two different spin-down rates ($\dot{\nu}$). Such transitions may be common but difficult to resolve using current techniques. In this work, we use simulations of $\dot{\nu}$-variable pulsars to investigate the likelihood of resolving individual $\dot{\nu}$ transitions. We inject step changes in the value of $\dot{\nu}$ with a wide range of amplitudes and switching time-scales. We then attempt to redetect these transitions using standard pulsar timing techniques. The pulse arrival-time precision and the observing cadence are varied. Limits on $\dot{\nu}$ detectability based on the effects such transitions have on the timing residuals are derived. With the typical cadences and timing precision of current timing programmes, we find that we are insensitive to a large region of $\Delta\dot{\nu}$ parameter space that encompasses small, short time-scale switches. We find, where the rotation and emission states are correlated, that using changes to the pulse shape to estimate $\dot{\nu}$ transition epochs can improve detectability in certain scenarios. The effects of cadence on $\Delta\dot{\nu}$ detectability are discussed, and we make comparisons with a known population of intermittent and mode-switching pulsars. We conclude that for short time-scale, small switches, cadence should not be compromised when new generations of ultra-sensitive radio telescopes are online.
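The kind of injection experiment described can be sketched very simply: a step change $\Delta\dot{\nu}$ at epoch t_s adds a quadratic phase term 0.5·Δν̇·(t − t_s)² after the switch, which appears as a growing timing residual. The Python sketch below uses arbitrary pulsar parameters, switch amplitude, cadence, and ToA uncertainty purely for illustration.

    import numpy as np

    nu = 2.0                      # Hz, spin frequency (assumed)
    d_nudot = -1e-15              # Hz/s, injected step in the spin-down rate
    t_s = 100 * 86400.0           # s, switch epoch (day 100)

    t = np.arange(0, 200, 2.0) * 86400.0            # ToAs every 2 days (assumed cadence)
    extra_phase = np.where(t > t_s, 0.5 * d_nudot * (t - t_s) ** 2, 0.0)
    residuals = extra_phase / nu                    # timing residual in seconds
    toa_sigma = 1e-6                                # 1 microsecond ToA uncertainty (assumed)

    detectable = np.abs(residuals[-1]) > 3 * toa_sigma
    print("residual at last epoch: %.2e s, detectable: %s" % (residuals[-1], detectable))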
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
The criticality condition of a reactor is one of the important factors for evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is based on a day-step analysis varied from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 reactor cycles. In addition, the calculation efficiency for different numbers of computer processors used to run the analysis, in terms of time (time efficiency of the calculation), has also been investigated. The optimization analysis for reactor design, which used a large fast breeder reactor as the reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the criticality becomes higher, and the breeding ratio lower, for smaller burnup steps (in days). Some nuclides contribute to improved criticality at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the more detailed stepping required, although it does not scale directly with the number of subdivisions of the burnup time step.
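A toy illustration of why the burnup step size matters is sketched below in Python: depleting a single nuclide with explicit steps of different lengths shows how coarser steps accumulate discretization error in the end-of-cycle number density, which in turn feeds into criticality and breeding-ratio estimates. The removal rate and step sizes are invented and are unrelated to JOINT-FR.

    import numpy as np

    sigma_phi = 1.0e-8            # 1/s, effective removal rate (assumed)
    t_end = 800 * 86400.0         # 800 days of operation

    def deplete(step_days):
        dt = step_days * 86400.0
        n, t = 1.0, 0.0
        while t < t_end:
            n *= (1.0 - sigma_phi * dt)   # explicit (Euler) burnup step
            t += dt
        return n

    exact = np.exp(-sigma_phi * t_end)
    for step in (10, 100, 400, 800):
        n = deplete(step)
        print(f"burnup step {step:3d} d: N/N0 = {n:.4f}  (error {n - exact:+.4f})")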
Puso, M. A.; Kokko, E.; Settgast, R.; ...
2014-10-22
An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine the effects of the pressure stabilization term and of small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping, such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning, and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.
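The stability statement can be illustrated with a generic central-difference (velocity-Verlet form) update on a toy undamped two-degree-of-freedom system, with the time step chosen below the critical value 2/ω_max computed from the undamped equations of motion, as in the Python sketch below. The system matrices, the 0.9 safety factor, and the bound checked at the end are illustrative assumptions.

    import numpy as np

    M = np.diag([1.0, 1.0])
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])                    # simple spring chain

    omega_max = np.sqrt(np.max(np.linalg.eigvals(np.linalg.solve(M, K)).real))
    dt = 0.9 * 2.0 / omega_max                      # stay below the critical time step

    u = np.array([1.0, 0.0])                        # initial displacement
    v = np.zeros(2)
    a = np.linalg.solve(M, -K @ u)
    for n in range(1000):                           # central difference / velocity-Verlet loop
        v_half = v + 0.5 * dt * a
        u = u + dt * v_half
        a = np.linalg.solve(M, -K @ u)
        v = v_half + 0.5 * dt * a
    print("displacement stays bounded:", np.linalg.norm(u) < 10.0)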
Output-Sensitive Construction of Reeb Graphs.
Doraiswamy, H; Natarajan, V
2012-01-01
The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially-aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
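A drastically simplified one-dimensional analogue of the two steps is sketched below in Python: critical points of a scalar function sampled on a path graph are located first, and consecutive critical points are then joined by arcs. For a function on an interval the Reeb graph is just this path; the real algorithm handles manifolds and non-manifolds in any dimension, so this is illustration only.

    import numpy as np

    f = np.array([0.0, 1.0, 0.5, 2.0, 1.5, 3.0, 0.2])   # scalar values along a path

    # Step 1: critical points = local minima/maxima (plus the two endpoints).
    crit = [0]
    for i in range(1, f.size - 1):
        if (f[i] - f[i - 1]) * (f[i + 1] - f[i]) < 0:    # slope changes sign at i
            crit.append(i)
    crit.append(f.size - 1)

    # Step 2: arcs join consecutive critical points along the domain.
    arcs = list(zip(crit[:-1], crit[1:]))
    print("critical points:", crit)
    print("Reeb-graph arcs :", arcs)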
Chung, C K; Zhou, R X; Liu, T Y; Chang, W T
2009-02-04
Most porous anodic alumina (PAA) or anodic aluminum oxide (AAO) films are fabricated using the potentiostatic method from high-purity (99.999%) aluminum films at a low temperature of approximately 0-10 °C to avoid dissolution effects at room temperature (RT). In this study, we have demonstrated the fabrication of PAA film from commercial-purity (99%) aluminum at RT using a hybrid pulse technique which combines pulse reverse and pulse voltages for the two-step anodization. The reaction mechanism is investigated by real-time monitoring of the current. A possible mechanism of hybrid pulse anodization is proposed for the formation of a pronounced nanoporous film at RT. The structure and morphology of the anodic films were greatly influenced by the duration of anodization and the type of voltage. The best result was obtained by first applying a pulse reverse voltage and then a pulse voltage. The first, pulse reverse anodization step was used to form new small cells and pre-texture concave aluminum as a self-assembled mask, while the second, pulse anodization step formed the resulting PAA film. The diameter of the nanopores in the arrays could reach 30-60 nm.
Systematic development of technical textiles
NASA Astrophysics Data System (ADS)
Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.
2016-07-01
Technical textiles are used in various fields of application, ranging from small-scale products (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time-consuming, due to multiple interacting parameters. These interacting parameters are related to the production process as well as to the textile structure and the material used. A large number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.