Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
Dependence of Hurricane intensity and structures on vertical resolution and time-step size
NASA Astrophysics Data System (ADS)
Zhang, Da-Lin; Wang, Xiaoxue
2003-09-01
In view of the growing interests in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with the finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a less extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity and inner-core structures and evolution of tropical storms as well as the other convectively driven weather systems.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air passing around a circular cylinder on the Reynolds number of 250,000 is to show Von Karman Vortex Street Phenomenon. This phenomenon was captured well by using a right turbulence model. In this study, some turbulence models available in software ANSYS Fluent 16.0 was tested to simulate Von Karman vortex street phenomenon, namely k- epsilon, SST k-omega and Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, it was examined the effect of time step size on the accuracy of CFD simulation. The simulations are carried out by using two-dimensional and three- dimensional models and then compared with experimental data. For two-dimensional model, Von Karman Vortex Street phenomenon was captured successfully by using the SST k-omega turbulence model. As for the three-dimensional model, Von Karman Vortex Street phenomenon was captured by using Reynolds Stress Turbulence Model. The time step size value affects the smoothness quality of curves of drag coefficient over time, as well as affecting the running time of the simulation. The smaller time step size, the better inherent drag coefficient curves produced. Smaller time step size also gives faster computation time.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strongmore » element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.« less
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influences of rock properties on characteristics of seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with global spatial grid-size and time-step; it consumes large amounts of computational cost when small-scaled oil/gas-bearing structures or large velocity-contrast exist underground. To overcome this handicap, combined with variable grid-size and time-step, this paper developed a staggered-grid finite-difference scheme for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method is advanced with low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
NASA Astrophysics Data System (ADS)
Wang, Zhan-zhi; Xiong, Ying
2013-04-01
A growing interest has been devoted to the contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitations, low noise performance and low hull vibration. Compared with the single-screw system, it is more difficult for the open water performance prediction because forward and aft propellers interact with each other and generate a more complicated flow field around the CRPs system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and sliding mesh method considering the effect of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by David W Taylor Naval Ship R & D center. Compared with the experimental data, it shows that RANS with sliding mesh method and SST k-ω turbulence model has a good precision in the open water performance prediction of contra-rotating propellers, and small time step size can improve the level of accuracy for CRPs with the same blade number of forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
Finite Memory Walk and Its Application to Small-World Network
NASA Astrophysics Data System (ADS)
Oshima, Hiraku; Odagaki, Takashi
2012-07-01
In order to investigate the effects of cycles on the dynamical process on both regular lattices and complex networks, we introduce a finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during m steps just before the current position. This walk interpolates the simple random walk (SRW), which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at a short time step to the SRW at a long time step, and the crossover time is approximately equivalent to the number of steps remembered, and that the MSD can be rescaled in terms of the time step and the size of memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, where ``smallest'' indicates that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on the Watts--Strogatz networks that can generate small-world networks, and show that the clustering coefficient of the Watts--Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to themore » expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides a clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.« less
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
A particle motion considering thermophoretic force is simulated by using direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem of thermophoresis simulation is computation time which is proportional to the collision frequency. Note that the time step interval becomes much small for the simulation considering the motion of large size particle. Thermophoretic forces calculated by DSMC method were reported, but the particle motion was not computed because of the small time step interval. In this paper, the molecule-particle collision model, which computes the collision between a particle and multi molecules in a collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor means the number of molecules colliding with a particle in a collision event. The large time step interval is adopted by considering the collision weight factor. Furthermore, the large time step interval is about million times longer than the conventional time step interval of the DSMC method when a particle size is 1 μm. Therefore, the computation time becomes about one-millionth. We simulate the graphite particle motion considering thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with above the collision weight factor, where DSMC-Neutrals is commercial software adopting DSMC method. The size and the shape of the particle are 1 μm and a sphere, respectively. The particle-particle collision is ignored. We compute the thermophoretic forces in Ar and H2 gases of a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for continuum limit is the same as Waldmann's result.
Effect of reaction-step-size noise on the switching dynamics of stochastic populations
NASA Astrophysics Data System (ADS)
Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael
2016-05-01
In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowedmore » by typical CFL restrictions.« less
Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong
2015-02-01
The purpose of this study was to establish a depth-control method in enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three dimensional ablation to produce 10 cavities, respectively, with additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and single-step size (d). When the additive-pulse layer and the single-step size were set 5 and 45, respectively, the depth error had a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablating of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows for the control of ablation-depth error to the order of micrometers.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counter parts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B
2017-04-01
Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three-hundred and seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influence by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step frequency reliant, whereas those post-PHV boys may be marginally step length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development force production relative to body weight are key foci.
Adaptive time stepping for fluid-structure interaction solvers
Mayr, M.; Wall, W. A.; Gee, M. W.
2017-12-22
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additionalmore » cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.« less
Adaptive time stepping for fluid-structure interaction solvers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayr, M.; Wall, W. A.; Gee, M. W.
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additionalmore » cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.« less
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of higher memory requirement and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low, however, the restriction on the size of time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds number of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161 were used. The computations were considered after the L2-norm of the change of the dependent variables in two consecutive time steps has fallen below 10(exp -5).
Optimal Padding for the Two-Dimensional Fast Fourier Transform
NASA Technical Reports Server (NTRS)
Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.
2011-01-01
One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations), and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times, and is not fine-tuned for any specific application. It increases the amount of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying sized grids. This is because various computer architectures process commands differently. The test grid was 512 512. Using a 540 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256x256 grid worked best. A Core2Duo computer preferred either a 1040x1040 (15 percent faster) or a 1008x1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...
2018-04-17
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored.The accuracymore » and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.« less
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored.The accuracymore » and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.« less
Performance analysis and kernel size study of the Lynx real-time operating system
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.
1993-01-01
This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to other Unix-based operating system (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Besides, virtual memory is not suitable for real-time applications because page swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in the LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size and total kernel size reduction from each step mentioned above are listed and analyzed.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L-sq-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L-sq method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements
NASA Astrophysics Data System (ADS)
Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.
2013-12-01
An eulerian high frequency sampling of particle size distribution (PSD) is performed during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel at 1 Hz. The particle data are recorded using a LISST-100x type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), recording volume concentrations of particles having diameters ranging from 2.5 to 500 mu in 32 size classes in logarithmic scale. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of PSD is hyperbolic. We can thus estimate PSD slope time series. Power spectral analysis shows that the mean diameter of the suspended particles is scaling at high frequencies (from 1s to 1000s). The scaling properties of particle sizes is studied by computing the moment function, from the pdf of the size distribution. Moment functions at many different time scales (from 1s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and is related to other parameters. The multiscaling properties of the turbidity (coefficient cp computed from the LISST) are also consider on the same time scales, using Empirical Mode Decomposition.
Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin
2017-01-01
Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin bound motor cyclically rotates its lever-arm/light-chain complex linking motor generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function including duty-ratio, power, and strain sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation interpreted as step-size and step-frequency providing single myosin mechanical characteristics over the active cycle. Following step-frequency of cardiac ventriculum myosin transitioning from low to high force in relaxed to auxotonic to isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size accounting for >80% of all the steps in the near-isometric phase. At peak force, the ATP initiated actomyosin dissociation is the predominant strain inhibited transition in the native myosin contraction cycle. The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017
Simulation methods with extended stability for stiff biochemical Kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has fast convergence rate and low steady state error compared to other traditional adaptive filtering algorithm. But there is a contradiction between the convergence speed and steady state error that affect the performance of the NLMS algorithm. Now, we propose a new variable step size NLMS algorithm. It dynamically changes the step size according to current error and iteration times. The proposed algorithm has simple formulation and easily setting parameters, and effectively solves the contradiction in NLMS. The simulation results show that the proposed algorithm has a good tracking ability, fast convergence rate and low steady state error simultaneously.
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration being expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response. Therein, velocities are modified avoiding penetrations. Although decomposition contact response is solving a large system of linear equations (being critical for the numerical efficiency of explicit time stepping schemes) and is needing special treatment regarding overconstraint and linear dependency of the contact constraints (for example from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robust. Only a single constraint involving a very small number of degrees of freedom is considered at once leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. Special care needs the contact of nodes that are subject to restraints. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed exemplified by position codes and node-to-segment integration. Numerical examples illustrate convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
[A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].
Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki
2016-03-01
The quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions to verify and quantify dwell position and time by using a web camera. The web camera records 30 images per second in a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera at real time. The source position and duration were quantified with the movie using in-house software which was applied with a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. It was evident from the verification of the system that the mean of step size errors was 0.31±0.1 mm and that of dwell time errors 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes and dwell time errors with an accuracy of 0.1% in more than 10.0 s of the planned time. This system is to provide quick verification and quantification of the dwell position and time with high accuracy at various dwell positions without depending on the step size.
Unstable vicinal crystal growth from cellular automata
NASA Astrophysics Data System (ADS)
Krasteva, A.; Popova, H.; KrzyŻewski, F.; Załuska-Kotur, M.; Tonchev, V.
2016-03-01
In order to study the unstable step motion on vicinal crystal surfaces we devise vicinal Cellular Automata. Each cell from the colony has value equal to its height in the vicinal, initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule defines that each adatom at right nearest neighbor position to a (multi-) step attaches to it. The update of whole colony is performed at once and then time increases. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed - biased diffusion and infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion the formation of multi-steps is observed but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws with focus on the eventual transition from diffusion-limited to kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of iiSE. Additional distinction is possible based on the time-scaling exponents of the sizes of multi-step Nmulti, these are 0.36÷0.4 (for biased diffusion) and 1/4 (iiSE).
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
Diffractive optics fabricated by direct write methods with an electron beam
NASA Technical Reports Server (NTRS)
Kress, Bernard; Zaleta, David; Daschner, Walter; Urquhart, Kris; Stein, Robert; Lee, Sing H.
1993-01-01
State-of-the-art diffractive optics are fabricated using e-beam lithography and dry etching techniques to achieve multilevel phase elements with very high diffraction efficiencies. One of the major challenges encountered in fabricating diffractive optics is the small feature size (e.g. for diffractive lenses with small f-number). It is not only the e-beam system which dictates the feature size limitations, but also the alignment systems (mask aligner) and the materials (e-beam and photo resists). In order to allow diffractive optics to be used in new optoelectronic systems, it is necessary not only to fabricate elements with small feature sizes but also to do so in an economical fashion. Since price of a multilevel diffractive optical element is closely related to the e-beam writing time and the number of etching steps, we need to decrease the writing time and etching steps without affecting the quality of the element. To do this one has to utilize the full potentials of the e-beam writing system. In this paper, we will present three diffractive optics fabrication techniques which will reduce the number of process steps, the writing time, and the overall fabrication time for multilevel phase diffractive optics.
Personal computer study of finite-difference methods for the transonic small disturbance equation
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1989-01-01
Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time accurate solution of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.
NASA Technical Reports Server (NTRS)
Majda, George
1986-01-01
One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.
Initial condition of stochastic self-assembly
NASA Astrophysics Data System (ADS)
Davis, Jason K.; Sindi, Suzanne S.
2016-02-01
The formation of a stable protein aggregate is regarded as the rate limiting step in the establishment of prion diseases. In these systems, once aggregates reach a critical size the growth process accelerates and thus the waiting time until the appearance of the first critically sized aggregate is a key determinant of disease onset. In addition to prion diseases, aggregation and nucleation is a central step of many physical, chemical, and biological process. Previous studies have examined the first-arrival time at a critical nucleus size during homogeneous self-assembly under the assumption that at time t =0 the system was in the all-monomer state. However, in order to compare to in vivo biological experiments where protein constituents inherited by a newly born cell likely contain intermediate aggregates, other possibilities must be considered. We consider one such possibility by conditioning the unique ergodic size distribution on subcritical aggregate sizes; this least-informed distribution is then used as an initial condition. We make the claim that this initial condition carries fewer assumptions than an all-monomer one and verify that it can yield significantly different averaged waiting times relative to the all-monomer condition under various models of assembly.
Evans, Christopher M; Love, Alyssa M; Weiss, Emily A
2012-10-17
This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se(2-)(S(2-)), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.
Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L
2017-08-01
Quantitative assessment of micro-structure of materials is of key importance in many fields including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, the acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds including examples of natural and synthetic polymers, and ceramics were analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of the pixel size and rotation step. The results showed that the acquisition parameters could statistically significantly affect the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important since the differences can be as high as 24% regarding the mean porosity in average, and 19.5 h and 166 GB regarding the characterization time and data storage per sample with a relatively small volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as the characterization time and data storage per sample. Herein, a clear picture of the effects of the pixel size and rotation step on the results is provided which can notably be useful to refine the practice of µ-CT characterization of scaffolds and economize the related resources.
Outward Bound to the Galaxies--One Step at a Time
ERIC Educational Resources Information Center
Ward, R. Bruce; Miller-Friedmann, Jaimie; Sienkiewicz, Frank; Antonucci, Paul
2012-01-01
Less than a century ago, astronomers began to unlock the cosmic distances within and beyond the Milky Way. Understanding the size and scale of the universe is a continuing, step-by-step process that began with the remarkably accurate measurement of the distance to the Moon made by early Greeks. In part, the authors have ITEAMS (Innovative…
NASA Astrophysics Data System (ADS)
Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf
2016-04-01
A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not robustly work for such strongly coupled problems its applicability being limited by small time step sizes (e.g. 5-10 days) whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The usage of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, big size of the derivative stencil and huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme and thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate the seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator on three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 times for FORTRAN and MATLAB; and nearly 100 times for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step and it depends on total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
The effect of external forces on discrete motion within holographic optical tweezers.
Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J
2007-12-24
Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear behavior of the change in trap stiffness vs. changes in step size. For step sizes up to approximately 300 nm the trap stiffness is decreasing. Above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Influence of numerical dissipation in computing supersonic vortex-dominated flows
NASA Technical Reports Server (NTRS)
Kandil, O. A.; Chuang, A.
1986-01-01
Steady supersonic vortex-dominated flows are solved using the unsteady Euler equations for conical and three-dimensional flows around sharp- and round-edged delta wings. The computational method is a finite-volume scheme which uses a four-stage Runge-Kutta time stepping with explicit second- and fourth-order dissipation terms. The grid is generated by a modified Joukowski transformation. The steady flow solution is obtained through time-stepping with initial conditions corresponding to the freestream conditions, and the bow shock is captured as a part of the solution. The scheme is applied to flat-plate and elliptic-section wings with a leading edge sweep of 70 deg at an angle of attack of 10 deg and a freestream Mach number of 2.0. Three grid sizes of 29 x 39, 65 x 65 and 100 x 100 have been used. The results for sharp-edged wings show that they are consistent with all grid sizes and variation of the artificial viscosity coefficients. The results for round-edged wings show that separated and attached flow solutions can be obtained by varying the artificial viscosity coefficients. They also show that the solutions are independent of the way time stepping is done. Local time-stepping and global minimum time-steeping produce same solutions.
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long term time integra- tion of non-equilibrium radiation diffusion systems: implicit time integration for effi- cient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while control- ling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
Schuler, Friedrich; Schwemmer, Frank; Trotter, Martin; Wadle, Simon; Zengerle, Roland; von Stetten, Felix; Paust, Nils
2015-07-07
Aqueous microdroplets provide miniaturized reaction compartments for numerous chemical, biochemical or pharmaceutical applications. We introduce centrifugal step emulsification for the fast and easy production of monodisperse droplets. Homogenous droplets with pre-selectable diameters in a range from 120 μm to 170 μm were generated with coefficients of variation of 2-4% and zero run-in time or dead volume. The droplet diameter depends on the nozzle geometry (depth, width, and step size) and interfacial tensions only. Droplet size is demonstrated to be independent of the dispersed phase flow rate between 0.01 and 1 μl s(-1), proving the robustness of the centrifugal approach. Centrifugal step emulsification can easily be combined with existing centrifugal microfluidic unit operations, is compatible to scalable manufacturing technologies such as thermoforming or injection moulding and enables fast emulsification (>500 droplets per second and nozzle) with minimal handling effort (2-3 pipetting steps). The centrifugal microfluidic droplet generation was used to perform the first digital droplet recombinase polymerase amplification (ddRPA). It was used for absolute quantification of Listeria monocytogenes DNA concentration standards with a total analysis time below 30 min. Compared to digital droplet polymerase chain reaction (ddPCR), with processing times of about 2 hours, the overall processing time of digital analysis was reduced by more than a factor of 4.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ε[0, 1)), or to a family of continuous-time Beta (2 - α, α - β)Λ-coalescents (α ε[1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward in time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Size-controlled magnetic nanoparticles with lecithin for biomedical applications
NASA Astrophysics Data System (ADS)
Park, S. I.; Kim, J. H.; Kim, C. G.; Kim, C. O.
2007-05-01
Lecithin-adsorbed magnetic nanoparticles were prepared by three-step process that the thermal decomposition was combined with ultrasonication. Experimental parameters were three items—molar ratio between Fe(CO) 5 and oleic acid, keeping time at decomposition temperature and lecithin concentration. As the molar ratio between Fe(CO) 5 and oleic acid, and keeping time at decomposition temperature increased, the particle size increased. However, the change of lecithin concentration did not show the remarkable particle size variation.
Optimal design of neural stimulation current waveforms.
Halpern, Mark
2009-01-01
This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
One-step preparation of antimicrobial silver nanoparticles in polymer matrix
NASA Astrophysics Data System (ADS)
Lyutakov, O.; Kalachyova, Y.; Solovyev, A.; Vytykacova, S.; Svanda, J.; Siegel, J.; Ulbrich, P.; Svorcik, V.
2015-03-01
Simple one-step procedure for in situ preparation of silver nanoparticles (AgNPs) in the polymer thin films is described. Nanoparticles (NPs) were prepared by reaction of N-methyl pyrrolidone with silver salt in semi-dry polymer film and characterized by transmission electron microscopy, XPS, and UV-Vis spectroscopy techniques. Direct synthesis of NPs in polymer has several advantages; even though it avoids time-consuming NPs mixing with polymer matrix, uniform silver distribution in polymethylmethacrylate (PMMA) films is achieved without necessity of additional stabilization. The influence of the silver concentration, reaction temperature and time on reaction conversion rate, and the size and size-distribution of the AgNPs was investigated. Polymer films doped with AgNPs were tested for their antibacterial activity on Gram-negative bacteria. Antimicrobial properties of AgNPs/PMMA films were found to be depended on NPs concentration, their size and distribution. Proposed one-step synthesis of functional polymer containing AgNPs is environmentally friendly, experimentally simple and extremely quick. It opens up new possibilities in development of antimicrobial coatings with medical and sanitation applications.
Highly accurate adaptive TOF determination method for ultrasonic thickness measurement
NASA Astrophysics Data System (ADS)
Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing
2018-04-01
Determining the time of flight (TOF) is very critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals would induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the TOF determination’s accuracy. An improved variable step size adaptive algorithm with comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. Simulation results manifested the performance advantage of proposed TOF determination method over existing TOF determination methods. When comparing with the conventional fixed step size, and Kwong and Aboulnasr algorithms, the steady state mean square deviation of the proposed algorithm was generally lower, which makes the proposed algorithm more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and the ultrasonic thickness measurement accuracy could be significantly improved.
Effect of sample preparation method on quantification of polymorphs using PXRD.
Alam, Shahnwaz; Patel, Sarsvatkumar; Bansal, Arvind Kumar
2010-01-01
The purpose of this study was to improve the sensitivity and accuracy of quantitative analysis of polymorphic mixtures. Various techniques such as hand grinding and mixing (in mortar and pestle), air jet milling and ball milling for micronization of particle and mixing were used to prepare binary mixtures. Using these techniques, mixtures of form I and form II of clopidogrel bisulphate were prepared in various proportions from 0-5% w/w of form I in form II and subjected to x-ray powder diffraction analysis. In order to obtain good resolution in minimum time, step time and step size were varied to optimize scan rate. Among the six combinations, step size of 0.05 degrees with step time of 5 s demonstrated identification of maximum characteristic peaks of form I in form II. Data obtained from samples prepared using both grinding and mixing in ball mill showed good analytical sensitivity and accuracy compared to other methods. Powder x-ray diffraction method was reproducible, precise with LOD of 0.29% and LOQ of 0.91%. Validation results showed excellent correlation between actual and predicted concentration with R2 > 0.9999.
Representation of Nucleation Mode Microphysics in a Global Aerosol Model with Sectional Microphysics
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Pierce, J. R.; Adams, P. J.
2013-01-01
In models, nucleation mode (1 nm
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although, in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
Radiation exposure of patient and surgeon in minimally invasive kidney stone surgery.
Demirci, A; Raif Karabacak, O; Yalçınkaya, F; Yiğitbaşı, O; Aktaş, C
2016-05-01
Percutaneous nephrolithotomy (PNL) and retrograde intrarenal surgery (RIRS) are the standard treatments used in the endoscopic treatment of kidney stones depending on the location and the size of the stone. The purpose of the study was to show the radiation exposure difference between the minimally invasive techniques by synchronously measuring the amount of radiation the patients and the surgeon received in each session, which makes our study unique. This is a prospective study which included 20 patients who underwent PNL, and 45 patients who underwent RIRS in our clinic between June 2014 and October 2014. The surgeries were assessed by dividing them into three steps: step 1: the access sheath or ureter catheter placement, step 2: lithotripsy and collection of fragments, and step 3: DJ catheter or re-entry tube insertion. For the PNL and RIRS groups, mean stone sizes were 30mm (range 16-60), and 12mm (range 7-35); mean fluoroscopy times were 337s (range 200-679), and 37s (range 7-351); and total radiation exposures were 142mBq (44.7 to 221), and 4.4mBq (0.2 to 30) respectively. Fluoroscopy times and radiation exposures at each step were found to be higher in the PNL group compared to the RIRS group. When assessed in itself, the fluoroscopy time and radiation exposure were stable in RIRS, and the radiation exposure was the highest in step 1 and the lowest in step 3 in PNL. When assessed for the 19 PNL patients and the 12 RIRS patients who had stone sizes≥2cm, the fluoroscopy time in step 1, and the radiation exposure in steps 1 and 2 were found to be higher in the PNL group than the RIRS group (P<0.001). Although there is need for more prospective randomized studies, RIRS appears to be a viable alternate for PNL because it has short fluoroscopy time and the radiation exposure is low in every step. 4. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge- Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition
NASA Technical Reports Server (NTRS)
Kenwright, David; Lane, David
1995-01-01
An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion
NASA Astrophysics Data System (ADS)
Ranganathan, Madhav; Weeks, John D.
2014-05-01
We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.
A review of hybrid implicit explicit finite difference time domain method
NASA Astrophysics Data System (ADS)
Chen, Juan
2018-06-01
The finite-difference time-domain (FDTD) method has been extensively used to simulate varieties of electromagnetic interaction problems. However, because of its Courant-Friedrich-Levy (CFL) condition, the maximum time step size of this method is limited by the minimum size of cell used in the computational domain. So the FDTD method is inefficient to simulate the electromagnetic problems which have very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method is developed. The HIE-FDTD method uses the hybrid implicit explicit difference in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. So this method has much higher computational efficiency than the FDTD method, and is extremely useful for the problems which have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary are described, then some applications and important developments of this method are provided. The goal of this paper is to provide an historical overview and future prospects of the HIE-FDTD method.
Autonomous reinforcement learning with experience replay.
Wawrzyński, Paweł; Tanwani, Ajay Kumar
2013-05-01
This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples, and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay whose step-sizes are determined on-line by an enhanced fixed point algorithm for on-line neural network training. An experimental study with simulated octopus arm and half-cheetah demonstrates the feasibility of the proposed algorithm to solve difficult learning control problems in an autonomous way within reasonably short time. Copyright © 2012 Elsevier Ltd. All rights reserved.
Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes
NASA Astrophysics Data System (ADS)
Octova, A.; Sule, R.
2018-04-01
Travel time cross-hole seismic tomography is applied to describing the structure of the subsurface. The sources are placed at one borehole and some receivers are placed in the others. First arrival travel time data that received by each receiver is used as the input data in seismic tomography method. This research is devided into three steps. The first step is reconstructing the synthetic model based on field parameters. Field parameters are divided into 24 receivers and 45 receivers. The second step is applying inversion process for the field data that consists of five pairs bore holes. The last step is testing quality of tomogram with resolution test. Data processing using FAST software produces an explicit shape and resemble the initial model reconstruction of synthetic model with 45 receivers. The tomography processing in field data indicates cavities in several place between the bore holes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5 with elongated and rounded structure. In resolution tests using a checker-board, anomalies still can be identified up to 2 meter x 2 meter size. Travel time cross-hole seismic tomography analysis proves this mothod is very good to describing subsurface structure and boundary layer. Size and anomalies position can be recognized and interpreted easily.
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burger's equation and 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to 1D Burger's equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
NASA Astrophysics Data System (ADS)
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process where a sheet metal is formed into its final shape by a series of localized deformation without a specialised die. However, it still has many shortcomings that need to be overcome such as geometric accuracy, surface roughness, formability, forming speed, and so on. This project focus on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. Besides, the effect of wall angle, feed rate, and step size to the surface roughness and thickness uniformity of aluminium sheet was investigated in this project. From the results, it was observed that surface roughness and thickness uniformity were inversely varied due to the formation of surface waviness. Increase in feed rate and decrease in step size will produce a lower surface roughness, while uniform thickness reduction was obtained by reducing the wall angle and step size. By using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The finding of this project helps to reduce the time in optimising the surface roughness and thickness uniformity in incremental sheet forming.
21 CFR 226.102 - Master-formula and batch-production records.
Code of Federal Regulations, 2010 CFR
2010-04-01
...(s) produced on a batch or continuous operation basis, including mixing steps and mixing times that have been determined to yield an adequately mixed Type A medicated article(s); and in the case of Type... batch size, or of appropriate size in the case of continuous systems to be produced from the master...
2017-01-01
Crystal size and shape can be manipulated to enhance the qualities of the final product. In this work the steady-state shape and size of succinic acid crystals, with and without a polymeric additive (Pluronic P123) at 350 mL, scale is reported. The effect of the amplitude of cycles as well as the heating/cooling rates is described, and convergent cycling (direct nucleation control) is compared to static cycling. The results show that the shape of succinic acid crystals changes from plate- to diamond-like after multiple cycling steps, and that the time required for this morphology change to occur is strongly related to the type of cycling. Addition of the polymer is shown to affect both the final shape of the crystals and the time needed to reach size and shape steady-state conditions. It is shown how this phenomenon can be used to improve the design of the crystallization step in order to achieve more efficient downstream operations and, in general, to help optimize the whole manufacturing process. PMID:28867966
Driven Langevin systems: fluctuation theorems and faithful dynamics
NASA Astrophysics Data System (ADS)
Sivak, David; Chodera, John; Crooks, Gavin
2014-03-01
Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
Basu, Amar S
2013-05-21
Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbour spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics show that highest accuracy and precision is obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real time analysis.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
Krajczár, Károly; Tóth, Vilmos; Nyárády, Zoltán; Szabó, Gyula
2005-06-01
The aim of the authors' study was to compare the remaining root canal wall thickness and the preparation time of root canals, prepared either with step-back technique, or with GT Rotary File, an engine driven nickel-titanium rotary instrument system. Twenty extracted molars were decoronated. Teeth were divided in two groups. In Group 1 root canals were prepared with step-back technique. In Group 2 GT Rotary File System was utilized. Preoperative vestibulo-oral X-ray pictures were taken from all teeth with radiovisiograph (RVG). The final preparations at the mesiobuccal canals (MB) were performed with size #30 and palatinal/distal canals with size #40 instruments. Postoperative RVG pictures were taken ensuring the preoperative positioning. The working time was measured in seconds during each preparation. The authors also assessed the remaining root canal wall thickness at 3, 6 and 9 mm from the radiological apex, comparing the width of the canal walls of the vestibulo-oral projections on pre- and postoperative RVG pictures both mesially and buccally. The ratios of the residual and preoperative root canal wall thickness were calculated and compared. The largest difference was found at the MB canals of the coronal and middle third level of the root, measured on the distal canal wall. The ratio of the remaining dentin wall thickness at the coronal and the middle level in the case of step-back preparation was 0.605 and 0.754, and 0.824 and 0.895 in the cases of GT files respectively. The preparation time needed for GT Rotary File System was altogether 68.7% (MB) and 52.5% (D/P canals) of corresponding step-back preparation times. The use of GT Rotary File with comparison of standard step-back method resulted in a shortened preparation time and excessive damage of the coronal part of the root canal could be avoided.
An improved maximum power point tracking method for a photovoltaic system
NASA Astrophysics Data System (ADS)
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) system was proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made on the step-size scaling function of the duty cycle that controls the converter. An algorithm was secondly proposed to address wrong decision that may be made at an abrupt change of the irradiation. The proposed auto-scaling variable step-size approach was compared to some various other approaches from the literature such as: classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approaches. The simulation results obtained by MATLAB/SIMULINK were given and discussed for validation.
One size fits all electronics for insole-based activity monitoring.
Hegde, Nagaraj; Bries, Matthew; Melanson, Edward; Sazonov, Edward
2017-07-01
Footwear based wearable sensors are becoming prominent in many areas of monitoring health and wellness, such as gait and activity monitoring. In our previous research we introduced an insole based wearable system SmartStep, which is completely integrated in a socially acceptable package. From a manufacturing perspective, SmartStep's electronics had to be custom made for each shoe size, greatly complicating the manufacturing process. In this work we explore the possibility of making a universal electronics platform for SmartStep - SmartStep 3.0, which can be used in the most common insole sizes without modifications. A pilot human subject experiments were run to compare the accuracy between the one-size fits all (SmartStep 3.0) and custom size SmartStep 2.0. A total of ~10 hours of data was collected in the pilot study involving three participants performing different activities of daily living while wearing SmartStep 2.0 and SmartStep 3.0. Leave one out cross validation resulted in a 98.5% average accuracy from SmartStep 2.0, while SmartStep 3.0 resulted in 98.3% accuracy, suggesting that the SmartStep 3.0 can be as accurate as SmartStep 2.0, while fitting most common shoe sizes.
Individual-based modelling of population growth and diffusion in discrete time.
Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone
2017-01-01
Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates in dependence of IBM model parameter settings.
Impact of SCBA size and fatigue from different firefighting work cycles on firefighter gait.
Kesler, Richard M; Bradley, Faith F; Deetjen, Grace S; Angelini, Michael J; Petrucci, Matthew N; Rosengren, Karl S; Horn, Gavin P; Hsiao-Wecksler, Elizabeth T
2018-04-04
Risk of slips, trips and falls in firefighters maybe influenced by the firefighter's equipment and duration of firefighting. This study examined the impact of a four self-contained breathing apparatus (SCBA) three SCBA of increasing size and a prototype design and three work cycles one bout (1B), two bouts with a five-minute break (2B) and two bouts back-to-back (BB) on gait in 30 firefighters. Five gait parameters (double support time, single support time, stride length, step width and stride velocity) were examined pre- and post-firefighting activity. The two largest SCBA resulted in longer double support times relative to the smallest SCBA. Multiple bouts of firefighting activity resulted in increased single and double support time and decreased stride length, step width and stride velocity. These results suggest that with larger SCBA or longer durations of activity, firefighters may adopt more conservative gait patterns to minimise fall risk. Practitioner Summary: The effects of four self-contained breathing apparatus (SCBA) and three work cycles on five gait parameters were examined pre- and post-firefighting activity. Both SCBA size and work cycle affected gait. The two largest SCBA resulted in longer double support times. Multiple bouts of activity resulted in more conservative gait patterns.
SU-E-T-68: A Quality Assurance System with a Web Camera for High Dose Rate Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueda, Y; Hirose, A; Oohira, S
Purpose: The purpose of this work was to develop a quality assurance (QA) system for high dose rate (HDR) brachytherapy to verify the absolute position of an 192Ir source in real time and to measure dwell time and position of the source simultaneously with a movie recorded by a web camera. Methods: A web camera was fixed 15 cm above a source position check ruler to monitor and record 30 samples of the source position per second over a range of 8.0 cm, from 1425 mm to 1505 mm. Each frame had a matrix size of 480×640 in the movie.more » The source position was automatically quantified from the movie using in-house software (built with LabVIEW) that applied a template-matching technique. The source edge detected by the software on each frame was corrected to reduce position errors induced by incident light from an oblique direction. The dwell time was calculated by differential processing to displacement of the source. The performance of this QA system was illustrated by recording simple plans and comparing the measured dwell positions and time with the planned parameters. Results: This QA system allowed verification of the absolute position of the source in real time. The mean difference between automatic and manual detection of the source edge was 0.04 ± 0.04 mm. Absolute position error can be determined within an accuracy of 1.0 mm at dwell points of 1430, 1440, 1450, 1460, 1470, 1480, 1490, and 1500 mm, in three step sizes and dwell time errors, with an accuracy of 0.1% in more than 10.0 sec of planned time. The mean step size error was 0.1 ± 0.1 mm for a step size of 10.0 mm. Conclusion: This QA system provides quick verifications of the dwell position and time, with high accuracy, for HDR brachytherapy. This work was supported by the Japan Society for the Promotion of Science Core-to-Core program (No. 23003)« less
NASA Astrophysics Data System (ADS)
Roh, Joon-Woo; Jee, Joon-Bum; Lim, A.-Young; Choi, Young-Jean
2015-04-01
Korean warm-season rainfall, accounting for about three-fourths of the annual precipitation, is primarily caused by Changma front, which is a kind of the East Asian summer monsoon, and localized heavy rainfall with convective instability. Various physical mechanisms potentially exert influences on heavy precipitation over South Korea. Representatively, the middle latitude and subtropical weather fronts, associated with a quasi-stationary moisture convergence zone among varying air masses, make up one of the main rain-bearing synoptic scale systems. Localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic scale disturbances along the Changma front or convective instabilities resulted from unstable air mass including the direct or indirect effect of typhoons. In recent years, torrential rainfalls, which are more than 30mm/hour of precipitation amount, in warm-season has increased threefold in Seoul, which is a metropolitan city in South Korea. In order to investigate multiple potential causes of warm-season localized heavy precipitation in South Korea, a localized heavy precipitation case took place on 20 June 2014 at Seoul. This case was mainly seen to be caused by short-wave trough, which is associated with baroclinic instability in the northwest of Korea, and a thermal low, which has high moist and warm air through analysis. This structure showed convective scale torrential rain was embedded in the dynamic and in the thermodynamic structures. In addition to, a sensitivity of rainfall amount and maximum rainfall location to the integration time-step sizes was investigated in the simulations of a localized heavy precipitation case using Weather Research and Forecasting model. The simulation of time-step sizes of 9-27s corresponding to a horizontal resolution of 4.5km and 1.5km varied slightly difference of the maximum rainfall amount. However, the sensitivity of spatial patterns and temporal variations in rainfall were relatively small for the time-step sizes. The effect of topography was also important in the localized heavy precipitation simulation.
Formation mechanism of monodispersed spherical core-shell ceria/polymer hybrid nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izu, Noriya, E-mail: n-izu@aist.go.jp; Uchida, Toshio; Matsubara, Ichiro
2011-08-15
Graphical abstract: The formation mechanism for core-shell nanoparticles is considered to be as follows: nucleation and particle growth occur simultaneously (left square); very slow particle growth occurs (middle square). Highlights: {yields} The size of the resultant nanoparticles was strongly and complicatedly dependent on the set temperature used during reflux heating and the PVP molecular weight. {yields} The size of the nanoparticles increased by a 2-step process as the reflux heating time increased. {yields} The IR spectral changes with increasing reflux time indicated the increase in the number of cross-linked polymers in the shell. -- Abstract: Very unique core-shell ceria (ceriummore » oxide)/polymer hybrid nanoparticles that have monodispersed spherical structures and are easily dispersed in water or alcohol without the need for a dispersant were reported recently. The formation mechanism of the unique nanoparticles, however, was not clear. In order to clarify the formation mechanism, these nanoparticles were prepared using a polyol method (reflux heating) under varied conditions of temperature, time, and concentration and molecular weight of added polymer (poly(vinylpyrrolidone)). The size of the resultant nanoparticles was strongly and complicatedly dependent on the set temperature used during reflux heating and the poly(vinylpyrrolidone) molecular weight. Furthermore, the size of the nanoparticles increased by a 2-step process as the reflux heating time increased. The IR spectral changes with increasing reflux time indicated the increase in the number of cross-linked polymers in the shell. From these results, the formation mechanism was discussed and proposed.« less
Sample size calculations for stepped wedge and cluster randomised trials: a unified approach
Hemming, Karla; Taljaard, Monica
2016-01-01
Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
Nonlinear Multiscale Transformations: From Synchronization to Error Control
2001-07-01
transformation (plus the quantization step) has taken place, a lossless Lempel - Ziv compression algorithm is applied to reduce the size of the transformed... compressed data are all very close, however the visual quality of the reconstructed image is significantly better for the EC compression algorithm ...used in recent times in the first step of transform coding algorithms for image compression . Ideally, a multiscale transformation allows for an
Lu, Xing; Zhao, Guoqun; Zhou, Jixue; Zhang, Cunsheng; Yu, Junquan
2018-04-29
In this paper, a new type of low-cost Mg-3.36Zn-1.06Sn-0.33Mn-0.27Ca (wt %) alloy ingot with a diameter of 130 mm and a length of 4800 mm was fabricated by semicontinuous casting. The microstructure and mechanical properties at different areas of the ingot were investigated. The microstructure and mechanical properties of the alloy under different one-step and two-step homogenization conditions were studied. For the as-cast alloy, the average grain size and the second phase size decrease from the center to the surface of the ingot, while the area fraction of the second phase increases gradually. At one-half of the radius of the ingot, the alloy presents the optimum comprehensive mechanical properties along the axial direction, which is attributed to the combined effect of relatively small grain size, low second-phase fraction, and uniform microstructure. For the as-homogenized alloy, the optimum two-step homogenization process parameters were determined as 340 °C × 10 h + 520 °C × 16 h. After the optimum homogenization, the proper size and morphology of CaMgSn phase are conducive to improve the microstructure uniformity and the mechanical properties of the alloy. Besides, the yield strength of the alloy is reduced by 20.7% and the elongation is increased by 56.3%, which is more favorable for the subsequent hot deformation processing.
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.
2016-01-01
Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43 residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension with the 8 nm step engaging the vELC/actin bond facilitating an extra ~19 degrees of lever-arm rotation while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is the unlikely conversion of the completed 5 to the 8 nm step. This hypothesis was tested using a 17 residue N-terminal truncated vELC in porcine βmys (Δ17βmys) and a 43 residue N-terminal truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and coincident loss in the 8 nm step-frequency compared to native proteins suggesting the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for native homodimer and at two or more known relative fractions of truncated vELC, are surmised for each pure species by using a new analytical method. PMID:26671638
Thermomechanical treatment of alloys
Bates, John F.; Brager, Howard R.; Paxton, Michael M.
1983-01-01
An article of an alloy of AISI 316 stainless steel is reduced in size to predetermined dimensions by cold working in repeated steps. Before the last reduction step the article is annealed by heating within a temperature range, specifically between 1010.degree. C. and 1038.degree. C. for a time interval between 90 and 60 seconds depending on the actual temperature. By this treatment the swelling under neutron bombardment by epithermal neutrons is reduced while substantial recrystallization does not occur in actual use for a time interval of at least of the order of 5000 hours.
Efficiency and flexibility using implicit methods within atmosphere dycores
NASA Astrophysics Data System (ADS)
Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.
2016-12-01
A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about the same efficiency as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is speed up for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core, of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability. We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.
1976-01-01
An uncoupled time asymptotic alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
Two step continuous method to synthesize colloidal spheroid gold nanorods.
Chandra, S; Doran, J; McCormack, S J
2015-12-01
This research investigated a two-step continuous process to synthesize colloidal suspension of spheroid gold nanorods. In the first step; gold precursor was reduced to seed-like particles in the presence of polyvinylpyrrolidone and ascorbic acid. In continuous second step; silver nitrate and alkaline sodium hydroxide produced various shape and size Au nanoparticles. The shape was manipulated through weight ratio of ascorbic acid to silver nitrate by varying silver nitrate concentration. The specific weight ratio of 1.35-1.75 grew spheroid gold nanorods of aspect ratio ∼1.85 to ∼2.2. Lower weight ratio of 0.5-1.1 formed spherical nanoparticle. The alkaline medium increased the yield of gold nanorods and reduced reaction time at room temperature. The synthesized gold nanorods retained their shape and size in ethanol. The surface plasmon resonance was red shifted by ∼5 nm due to higher refractive index of ethanol than water. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Noble, David R.; Georgiadis, John G.; Buckius, Richard O.
1996-07-01
The lattice Boltzmann method (LBM) is used to simulate flow in an infinite periodic array of octagonal cylinders. Results are compared with those obtained by a finite difference (FD) simulation solved in terms of streamfunction and vorticity using an alternating direction implicit scheme. Computed velocity profiles are compared along lines common to both the lattice Boltzmann and finite difference grids. Along all such slices, both streamwise and transverse velocity predictions agree to within 05% of the average streamwise velocity. The local shear on the surface of the cylinders also compares well, with the only deviations occurring in the vicinity of the corners of the cylinders, where the slope of the shear is discontinuous. When a constant dimensionless relaxation time is maintained, LBM exhibits the same convergence behaviour as the FD algorithm, with the time step increasing as the square of the grid size. By adjusting the relaxation time such that a constant Mach number is achieved, the time step of LBM varies linearly with the grid size. The efficiency of LBM on the CM-5 parallel computer at the National Center for Supercomputing Applications (NCSA) is evaluated by examining each part of the algorithm. Overall, a speed of 139 GFLOPS is obtained using 512 processors for a domain size of 2176×2176.
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
NASA Astrophysics Data System (ADS)
Prasetya, A.; Mawadati, A.; Putri, A. M. R.; Petrus, H. T. B. M.
2018-01-01
Comminution is one of crucial steps in gold ore processing used to liberate the valuable minerals from gaunge mineral. This research is done to find the particle size distribution of gold ore after it has been treated through the comminution process in a rod mill with various number of rod and rotational speed that will results in one optimum milling condition. For the initial step, Sumbawa gold ore was crushed and then sieved to pass the 2.5 mesh and retained on the 5 mesh (this condition was taken to mimic real application in artisanal gold mining). Inserting the prepared sample into the rod mill, the observation on effect of rod-number and rotational speed was then conducted by variating the rod number of 7 and 10 while the rotational speed was varied from 60, 85, and 110 rpm. In order to be able to provide estimation on particle distribution of every condition, the comminution kinetic was applied by taking sample at 15, 30, 60, and 120 minutes for size distribution analysis. The change of particle distribution of top and bottom product as time series was then treated using Rosin-Rammler distribution equation. The result shows that the homogenity of particle size and particle size distribution is affected by rod-number and rotational speed. The particle size distribution is more homogeneous by increasing of milling time, regardless of rod-number and rotational speed. Mean size of particles do not change significantly after 60 minutes milling time. Experimental results showed that the optimum condition was achieved at rotational speed of 85 rpm, using rod-number of 7.
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the unknown search space a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a pre-requisite for most of the optimization methods. Since each shot only covers part of the target, a collection of shots in different locations and various collimator sizes selected makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots including locations and sizes and to assign initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results of an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with physician's manual plans. The target coverage is more than 99% for manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
Fabrication of Large Bulk High Temperature Superconducting Articles
NASA Technical Reports Server (NTRS)
Koczor, Ronald (Inventor); Hiser, Robert A. (Inventor)
2003-01-01
A method of fabricating large bulk high temperature superconducting articles which comprises the steps of selecting predetermined sizes of crystalline superconducting materials and mixing these specific sizes of particles into a homogeneous mixture which is then poured into a die. The die is placed in a press and pressurized to predetermined pressure for a predetermined time and is heat treated in the furnace at predetermined temperatures for a predetermined time. The article is left in the furnace to soak at predetermined temperatures for a predetermined period of time and is oxygenated by an oxygen source during the soaking period.
Zhao, Meijuan; Christie, Maureen; Coleman, Jonathan; Hassell, Chris; Gosbell, Ken; Lisovski, Simeon; Minton, Clive; Klaassen, Marcel
2017-01-01
Migrants have been hypothesised to use different migration strategies between seasons: a time-minimization strategy during their pre-breeding migration towards the breeding grounds and an energy-minimization strategy during their post-breeding migration towards the wintering grounds. Besides season, we propose body size as a key factor in shaping migratory behaviour. Specifically, given that body size is expected to correlate negatively with maximum migration speed and that large birds tend to use more time to complete their annual life-history events (such as moult, breeding and migration), we hypothesise that large-sized species are time stressed all year round. Consequently, large birds are not only likely to adopt a time-minimization strategy during pre-breeding migration, but also during post-breeding migration, to guarantee a timely arrival at both the non-breeding (i.e. wintering) and breeding grounds. We tested this idea using individual tracks across six long-distance migratory shorebird species (family Scolopacidae) along the East Asian-Australasian Flyway varying in size from 50 g to 750 g lean body mass. Migration performance was compared between pre- and post-breeding migration using four quantifiable migratory behaviours that serve to distinguish between a time- and energy-minimization strategy, including migration speed, number of staging sites, total migration distance and step length from one site to the next. During pre- and post-breeding migration, the shorebirds generally covered similar distances, but they tended to migrate faster, used fewer staging sites, and tended to use longer step lengths during pre-breeding migration. These seasonal differences are consistent with the prediction that a time-minimization strategy is used during pre-breeding migration, whereas an energy-minimization strategy is used during post-breeding migration. However, there was also a tendency for the seasonal difference in migration speed to progressively disappear with an increase in body size, supporting our hypothesis that larger species tend to use time-minimization strategies during both pre- and post-breeding migration. Our study highlights that body size plays an important role in shaping migratory behaviour. Larger migratory bird species are potentially time constrained during not only the pre- but also the post-breeding migration. Conservation of their habitats during both seasons may thus be crucial for averting further population declines.
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is very expensive and time consuming process. Development of digital imaging in pathology has enabled reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, feature extraction step is very crucial step for the system accuracy. A mitotic cell has more distinctive textural dissimilarities than the other normal cells. Hence, it is important to incorporate spatial information in feature extraction or in post-processing steps. As a main part of this study, Haralick texture descriptor has been proposed with different spatial window sizes in RGB and La*b* color spaces. So, spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared with various sample sizes by Support Vector Machines using k-fold cross validation method. According to the represented results, it has been shown that separation accuracy on mitotic and non-mitotic cellular pixels gets better with the increasing size of spatial window.
Apparatus and method for the determination of grain size in thin films
Maris, Humphrey J
2000-01-01
A method for the determination of grain size in a thin film sample comprising the steps of measuring first and second changes in the optical response of the thin film, comparing the first and second changes to find the attenuation of a propagating disturbance in the film and associating the attenuation of the disturbance to the grain size of the film. The second change in optical response is time delayed from the first change in optical response.
Apparatus and method for the determination of grain size in thin films
Maris, Humphrey J
2001-01-01
A method for the determination of grain size in a thin film sample comprising the steps of measuring first and second changes in the optical response of the thin film, comparing the first and second changes to find the attenuation of a propagating disturbance in the film and associating the attenuation of the disturbance to the grain size of the film. The second change in optical response is time delayed from the first change in optical response.
Advanced in Visualization of 3D Time-Dependent CFD Solutions
NASA Technical Reports Server (NTRS)
Lane, David A.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
Automatic stage identification of Drosophila egg chamber based on DAPI images
Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min
2016-01-01
The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the on-site of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is from the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes that of fluid with a body motion. The addition of mass source/sink together with momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is from the temporal discontinuity in the velocity at the grid points where fluid becomes solid with a body motion. The magnitude of velocity discontinuity decreases with decreasing the grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing the grid spacing and increasing the computational time step size, but they depend more on the grid spacing than on the computational time step size.
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest Descent is known as the simplest gradient method. Recently, many researches are done to obtain the appropriate step size in order to reduce the objective function value progressively. In this paper, the properties of steepest descent method from literatures are reviewed together with advantages and disadvantages of each step size procedure. The development of steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure in C++ program. We implemented it to unconstrained optimization test problem with two variables, then we compare the numerical results of each step size procedure. Based on the numerical experiment, we conclude the general computational features and weaknesses of each procedure in each case of problem.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a waymore » of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.« less
Qdot Labeled Actin Super Resolution Motility Assay Measures Low Duty Cycle Muscle Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Burghardt, Thomas P.
2013-01-01
Myosin powers contraction in heart and skeletal muscle and is a leading target for mutations implicated in inheritable muscle diseases. During contraction, myosin transduces ATP free energy into the work of muscle shortening against resisting force. Muscle shortening involves relative sliding of myosin and actin filaments. Skeletal actin filaments were fluorescence labeled with a streptavidin conjugate quantum dot (Qdot) binding biotin-phalloidin on actin. Single Qdot’s were imaged in time with total internal reflection fluorescence microscopy then spatially localized to 1-3 nanometers using a super-resolution algorithm as they translated with actin over a surface coated with skeletal heavy meromyosin (sHMM) or full length β-cardiac myosin (MYH7). Average Qdot-actin velocity matches measurements with rhodamine-phalloidin labeled actin. The sHMM Qdot-actin velocity histogram contains low velocity events corresponding to actin translation in quantized steps of ~5 nm. The MYH7 velocity histogram has quantized steps at 3 and 8 nm in addition to 5 nm, and, larger compliance than sHMM depending on MYH7 surface concentration. Low duty cycle skeletal and cardiac myosin present challenges for a single molecule assay because actomyosin dissociates quickly and the freely moving element diffuses away. The in vitro motility assay has modestly more actomyosin interactions and methylcellulose inhibited diffusion to sustain the complex while preserving a subset of encounters that do not overlap in time on a single actin filament. A single myosin step is isolated in time and space then characterized using super-resolution. The approach provides quick, quantitative, and inexpensive step-size measurement for low duty cycle muscle myosin. PMID:23383646
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco; Llana-Fúnez, Sergio
2016-04-01
The understanding of creep behaviour in rocks requires knowledge of 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to estimate directly the 3D grain size distribution -serial sectioning, synchrotron or X-ray-based tomography- are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has though two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites; and uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, affects significantly the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results proved that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable to estimate the volumes of particular grain fractions, while the two-step method to quantify the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
NASA Astrophysics Data System (ADS)
Hammann, Eva; Zappe, Andrea; Keis, Stefanie; Ernst, Stefan; Matthies, Doreen; Meier, Thomas; Cook, Gregory M.; Börsch, Michael
2012-02-01
Thermophilic enzymes operate at high temperatures but show reduced activities at room temperature. They are in general more stable during preparation and, accordingly, are considered to be more rigid in structure. Crystallization is often easier compared to proteins from bacteria growing at ambient temperatures, especially for membrane proteins. The ATP-producing enzyme FoF1-ATP synthase from thermoalkaliphilic Caldalkalibacillus thermarum strain TA2.A1 is driven by a Fo motor consisting of a ring of 13 c-subunits. We applied a single-molecule Förster resonance energy transfer (FRET) approach using duty cycle-optimized alternating laser excitation (DCO-ALEX) to monitor the expected 13-stepped rotary Fo motor at work. New FRET transition histograms were developed to identify the smaller step sizes compared to the 10-stepped Fo motor of the Escherichia coli enzyme. Dwell time analysis revealed the temperature and the LDAO dependence of the Fo motor activity on the single molecule level. Back-and-forth stepping of the Fo motor occurs fast indicating a high flexibility in the membrane part of this thermophilic enzyme.
van Mantgem, P.J.; Stephenson, N.L.
2005-01-01
1 We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2 We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3 Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4 Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. ?? 2005 British Ecological Society.
In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle a
Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin
2016-01-01
Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and, mechanical elements coupling motor impulse to the myosin filament backbone providing transduction/mechanical-coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. Linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum dot labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant velocity constraint for myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a “second characterization” is step-frequency that adjusts longer step-size to lower frequency maintaining a linear actin velocity identical to that from a shorter step-size and higher frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease or aging relevant myosin isoforms that contrasts with the alternative model calling for exhaustive research on a tiny subset myosin forms. The zebrafish embryo assay (Z assay) performs single myosin step-size and step-frequency assaying in vivo combining single myosin mechanical and whole muscle physiological characterizations in one model organism. The Qdot and Z assays cover “bottom-up” and “top-down” assaying of myosin characteristics. PMID:26728749
Kang, Xinchen; Zhang, Jianling; Shang, Wenting; Wu, Tianbin; Zhang, Peng; Han, Buxing; Wu, Zhonghua; Mo, Guang; Xing, Xueqing
2014-03-12
Stable porous ionic liquid-water gel induced by inorganic salts was created for the first time. The porous gel was used to develop a one-step method to synthesize supported metal nanocatalysts. Au/SiO2, Ru/SiO2, Pd/Cu(2-pymo)2 metal-organic framework (Cu-MOF), and Au/polyacrylamide (PAM) were synthesized, in which the supports had hierarchical meso- and macropores, the size of the metal nanocatalysts could be very small (<1 nm), and the size distribution was very narrow even when the metal loading amount was as high as 8 wt %. The catalysts were extremely active, selective, and stable for oxidative esterification of benzyl alcohol to methyl benzoate, benzene hydrogenation to cyclohexane, and oxidation of benzyl alcohol to benzaldehyde because they combined the advantages of the nanocatalysts of small size and hierarchical porosity of the supports. In addition, this method is very simple.
Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.
2016-12-01
The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
Capture of shrinking targets with realistic shrink patterns.
Hoffmann, Errol R; Chan, Alan H S; Dizmen, Coskun
2013-01-01
Previous research [Hoffmann, E. R. 2011. "Capture of Shrinking Targets." Ergonomics 54 (6): 519-530] reported experiments for capture of shrinking targets where the target decreased in size at a uniform rate. This work extended this research for targets having a shrink-size versus time pattern that of an aircraft receding from an observer. In Experiment 1, the time to capture the target in this case was well correlated in terms of Fitts' index of difficulty, measured at the time of capture of the target, a result that is in agreement with the 'balanced' model of Johnson and Hart [Johnson, W. W., and Hart, S. G. 1987. "Step Tracking Shrinking Targets." Proceedings of the human factors society 31st annual meeting, New York City, October 1987, 248-252]. Experiment 2 measured the probability of target capture for varying initial target sizes and target shrink rates constant, defined as the time for the target to shrink to half its initial size. Data of shrink time constant for 50% probability of capture were related to initial target size but did not greatly affect target capture as the rate of target shrinking decreased rapidly with time.
A 3D particle Monte Carlo approach to studying nucleation
NASA Astrophysics Data System (ADS)
Köhn, Christoph; Enghoff, Martin Bødker; Svensmark, Henrik
2018-06-01
The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We here present a three dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate a swarm of sulphuric acid-water clusters with a size of 0.329 nm with densities between 107 and 108 cm-3 at temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the position of particles as a function of size-dependent diffusion coefficients. If two particles encounter, we merge them and add their volumes and masses. Inversely, we check after every time step whether a polymer evaporates liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate of clusters with a radius of 0.85 nm as a function of time, initial particle density and temperature. The nucleation rates obtained from the presented model agree well with experimentally obtained values and those of a numerical model which serves as a benchmark of our code. In contrast to previous nucleation models, we here present for the first time a code capable of tracing individual particles and thus of capturing the physics related to the discrete nature of particles.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand
DeLuca, Samuel; Khar, Karen; Meiler, Jens
2015-01-01
RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
Effect of water hardness on cardiovascular mortality: an ecological time series approach.
Lake, I R; Swift, L; Catling, L A; Abubakar, I; Sabel, C E; Hunter, P R
2010-12-01
Numerous studies have suggested an inverse relationship between drinking water hardness and cardiovascular disease. However, the weight of evidence is insufficient for the WHO to implement a health-based guideline for water hardness. This study followed WHO recommendations to assess the feasibility of using ecological time series data from areas exposed to step changes in water hardness to investigate this issue. Monthly time series of cardiovascular mortality data, subdivided by age and sex, were systematically collected from areas reported to have undergone step changes in water hardness, calcium and magnesium in England and Wales between 1981 and 2005. Time series methods were used to investigate the effect of water hardness changes on mortality. No evidence was found of an association between step changes in drinking water hardness or drinking water calcium and cardiovascular mortality. The lack of areas with large populations and a reasonable change in magnesium levels precludes a definitive conclusion about the impact of this cation. We use our results on the variability of the series to consider the data requirements (size of population, time of water hardness change) for such a study to have sufficient power. Only data from areas with large populations (>500,000) are likely to be able to detect a change of the size suggested by previous studies (rate ratio of 1.06). Ecological time series studies of populations exposed to changes in drinking water hardness may not be able to provide conclusive evidence on the links between water hardness and cardiovascular mortality unless very large populations are studied. Investigations of individuals may be more informative.
Evolution of Particle Size Distributions in Fragmentation Over Time
NASA Astrophysics Data System (ADS)
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near to the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid blows up if the allowable time step is set to be greater than 0.1ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and it needs further investigation to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
Improvements of the particle-in-cell code EUTERPE for petascaling machines
NASA Astrophysics Data System (ADS)
Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.
2011-09-01
In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.
Oisjöen, Fredrik; Schneiderman, Justin F; Astalan, Andrea Prieto; Kalabukhov, Alexey; Johansson, Christer; Winkler, Dag
2010-01-15
We demonstrate a one-step wash-free bioassay measurement system capable of tracking biochemical binding events. Our approach combines the high resolution of frequency- and high speed of time-domain measurements in a single device in combination with a fast one-step bioassay. The one-step nature of our magnetic nanoparticle (MNP) based assay reduces the time between sample extraction and quantitative results while mitigating the risks of contamination related to washing steps. Our method also enables tracking of binding events, providing the possibility of, for example, investigation of how chemical/biological environments affect the rate of a binding process or study of the action of certain drugs. We detect specific biological binding events occurring on the surfaces of fluid-suspended MNPs that modify their magnetic relaxation behavior. Herein, we extrapolate a modest sensitivity to analyte of 100 ng/ml with the present setup using our rapid one-step bioassay. More importantly, we determine the size-distributions of the MNP systems with theoretical fits to our data obtained from the two complementary measurement modalities and demonstrate quantitative agreement between them. Copyright 2009 Elsevier B.V. All rights reserved.
Reis, Andre F; Giannini, Marcelo; Pereira, Patricia N R
2007-09-01
The aim of this study was to evaluate the ability of etch-and-rinse and self-etching adhesive systems to prevent time- and water-induced nanoleakage in resin-dentin interfaces over a 6-month storage period. Five commercial adhesives were tested, which comprise three different strategies of bonding resins to tooth hard tissues: one single-step self-etching adhesive (One-up Bond F (OB), Tokuyama); two two-step self-etching primers (Clearfil SE Bond (SE) and an antibacterial fluoride-containing system, Clearfil Protect Bond (CP), Kuraray Inc.); two two-step etch-and-rinse adhesives (Single Bond (SB), 3M ESPE and Prime&Bond NT (PB), Dentsply). Restored teeth were sectioned into 0.9 mm thick slabs and stored in water or mineral oil for 24 h, 3 or 6 months. A silver tracer solution was used to reveal nanometer-sized water-filled spaces and changes that occurred over time within resin-dentin interfaces. Characterization of interfaces was performed with the TEM. The two two-step self-etching primers showed little silver uptake during the 6-month experiment. Etch-and-rinse adhesives exhibited silver deposits predominantly within the hybrid layer (HL), which significantly increased for SB after water-storage. The one-step self-etching adhesive OB presented massive silver accumulation within the HL and water-trees protruding into the adhesive layer, which increased in size and quantity after water-storage. After storage in oil, reduced silver deposition was observed at the interfaces for all groups. Different levels of water-induced nanoleakage were observed for the different bonding strategies. The two-step self-etching primers, especially the antibacterial fluoride-containing system CP, showed the least nanoleakage after 6 months of storage in water.
Effective Capital Provision Within Government. Methodologies for Right-Sizing Base Infrastructure
2005-01-01
unknown distributions, since they more accurately represent the complexity of real -world problems. Forecasting uncertain future demand flows is critical to...ordering system with no time lags and no additional costs for instantaneous delivery, shortage and holding costs would be eliminated, because the...order a fixed quantity, Q. 4.1.4 Analyzed Time Step Time is an important dimension in inventory models, since the way the system changes over time affects
Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M
2003-10-01
Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
NASA Astrophysics Data System (ADS)
Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong
2009-12-01
Proportionate adaptive algorithms have been proposed recently to accelerate convergence for the identification of sparse impulse response. When the excitation signal is colored, especially the speech, the convergence performance of proportionate NLMS algorithms demonstrate slow convergence speed. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA by canceling the a posteriori estimation error. This can result in high convergence speed using a large step size when the identification error is large, and can then considerably decrease the steady-state misalignment using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent the complex artificial terrains like loess terraces in Shanxi Province in northwest China, a new 3D visual method namely Terraces Elevation Incremental Visual Method (TEIVM) is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well by use of optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs is converted to Grid-based DEM (G-DEM) by use of different combination of cell size and EIV with linear interpolating method called Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terraces steps very well when the combination of cell size and EIV is reasonable. The optimal combination is that the cell size is 1 m and the EIV is 6 m. Results of case study also show that the cell size should be at least smaller than half of both the terraces average width and the average vertical offset of terraces steps for representing the planar shapes of the terraces surfaces and steps well, while the EIV also should be larger than 4.6 times of the terraces average height. The TEIVM and results above is of great significance to the highly refined visualization of artificial terrains like loess terraces.
Effect of the chemical treatments on the characteristics of natural cellulose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sosiati, H., E-mail: hsosiati@ugm.ac.id; Muhaimin, M.; Abdilah, P.
2014-09-25
In order to characterize the morphology and size distribution of the cellulose fibers, natural cellulose from kenaf bast fibers was extracted using two chemical treatments; (1) alkali-bleaching-ultrasonic treatment and (2) alkali-bleaching-hydrolysis. Solutions of NaOH, H{sub 2}O{sub 2} and H{sub 2}SO{sub 4} were used for alkalization, bleaching and hydrolysis, respectively. The hydrolyzed fibers were centrifuged at a rotation speed of 10000 rpm for 10 min to separate the nanofibers from the microfibers. The separation was repeated in 7 steps by controlling pH of the solution in each step until neutrality was reached. Fourier transform infrared (FTIR) spectroscopy was performed on themore » fibers at the final step of each treatment: i.e. either ultrasonic treated- or hydrolyzed microfibers. Their FTIR spectra were compared with FTIR spectrum of a reference commercial α-cellulose. Changes in morphology and size distribution of the treated fibers were examined by scanning electron microscopy (SEM). FTIR spectra of ultrasonic treated- and hydrolyzed microfibers nearly coincided with the FTIR spectrum of commercial α-cellulose, suggesting successful extraction of cellulose. Ultrasonic treatment for 6 h resulted in a specific morphology in which cellulose nanofibers (≥100 nm) were distributed across the entire surface of cellulose microfibers (∼5 μm). Constant magnetic stirring combined with acid hydrolysis resulted in an inhomogeneous size distribution of both cellulose rods (500 nm-3 μm length, 100–200 nm diameter) and particles 100–200 nm in size. Changes in morphology of the cellulose fibers depended upon the stirring time; longer stirring time resulted in shorter fiber lengths.« less
NASA Astrophysics Data System (ADS)
Kashfuddoja, Mohammad; Prasath, R. G. R.; Ramji, M.
2014-11-01
In this work, the experimental characterization of polymer-matrix and polymer based carbon fiber reinforced composite laminate by employing a whole field non-contact digital image correlation (DIC) technique is presented. The properties are evaluated based on full field data obtained from DIC measurements by performing a series of tests as per ASTM standards. The evaluated properties are compared with the results obtained from conventional testing and analytical models and they are found to closely match. Further, sensitivity of DIC parameters on material properties is investigated and their optimum value is identified. It is found that the subset size has more influence on material properties as compared to step size and their predicted optimum value for the case of both matrix and composite material is found consistent with each other. The aspect ratio of region of interest (ROI) chosen for correlation should be the same as that of camera resolution aspect ratio for better correlation. Also, an open cutout panel made of the same composite laminate is taken into consideration to demonstrate the sensitivity of DIC parameters on predicting complex strain field surrounding the hole. It is observed that the strain field surrounding the hole is much more sensitive to step size rather than subset size. Lower step size produced highly pixilated strain field, showing sensitivity of local strain at the expense of computational time in addition with random scattered noisy pattern whereas higher step size mitigates the noisy pattern at the expense of losing the details present in data and even alters the natural trend of strain field leading to erroneous maximum strain locations. The subset size variation mainly presents a smoothing effect, eliminating noise from strain field while maintaining the details in the data without altering their natural trend. However, the increase in subset size significantly reduces the strain data at hole edge due to discontinuity in correlation. Also, the DIC results are compared with FEA prediction to ascertain the suitable value of DIC parameters towards better accuracy.
Detection of melting by X-ray imaging at high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Li; Weidner, Donald J.
2014-06-15
The occurrence of partial melting at elevated pressure and temperature is documented in real time through measurement of volume strain induced by a fixed temperature change. Here we present the methodology for measuring volume strains to one part in 10{sup −4} for mm{sup 3} sized samples in situ as a function of time during a step in temperature. By calibrating the system for sample thermal expansion at temperatures lower than the solidus, the onset of melting can be detected when the melting volume increase is of comparable size to the thermal expansion induced volume change. We illustrate this technique withmore » a peridotite sample at 1.5 GPa during partial melting. The Re capsule is imaged with a CCD camera at 20 frames/s. Temperature steps of 100 K induce volume strains that triple with melting. The analysis relies on image comparison for strain determination and the thermal inertia of the sample is clearly seen in the time history of the volume strain. Coupled with a thermodynamic model of the melting, we infer that we identify melting with 2 vol.% melting.« less
NASA Astrophysics Data System (ADS)
Ngo, Chi-Vinh; Chun, Doo-Man
2017-07-01
Recently, the fabrication of superhydrophobic metallic surfaces by means of pulsed laser texturing has been developed. After laser texturing, samples are typically chemically coated or aged in ambient air for a relatively long time of several weeks to achieve superhydrophobicity. To accelerate the wettability transition from hydrophilicity to superhydrophobicity without the use of additional chemical treatment, a simple annealing post process has been developed. In the present work, grid patterns were first fabricated on stainless steel by a nanosecond pulsed laser, then an additional low-temperature annealing post process at 100 °C was applied. The effect of 100-500 μm step size of the textured grid upon the wettability transition time was also investigated. The proposed post process reduced the transition time from a couple of months to within several hours. All samples showed superhydrophobicity with contact angles greater than 160° and sliding angles smaller than 10° except samples with 500 μm step size, and could be applied in several potential applications such as self-cleaning and control of water adhesion.
Simulation of Micron-Sized Debris Populations in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Xu, Y.-L.; Hyde, J. L.; Prior, T.; Matney, Mark
2010-01-01
The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 m and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys on the space-exposed surfaces of returned spacecrafts. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter and damage laws, which relate impact damage with the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 m.
Simulation of Micron-Sized Debris Populations in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Xu, Y.-L.; Matney, M.; Liou, J.-C.; Hyde, J. L.; Prior, T. G.
2010-01-01
The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version . ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 micron and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys on the space-exposed surfaces of returned spacecrafts. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter and damage laws, which relate impact damage with the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 micron.
DECISION-MAKING ALIGNED WITH RAPID-CYCLE EVALUATION IN HEALTH CARE.
Schneeweiss, Sebastian; Shrank, William H; Ruhl, Michael; Maclure, Malcolm
2015-01-01
Availability of real-time electronic healthcare data provides new opportunities for rapid-cycle evaluation (RCE) of health technologies, including healthcare delivery and payment programs. We aim to align decision-making processes with stages of RCE to optimize the usefulness and impact of rapid results. Rational decisions about program adoption depend on program effect size in relation to externalities, including implementation cost, sustainability, and likelihood of broad adoption. Drawing on case studies and experience from drug safety monitoring, we examine how decision makers have used scientific evidence on complex interventions in the past. We clarify how RCE alters the nature of policy decisions; develop the RAPID framework for synchronizing decision-maker activities with stages of RCE; and provide guidelines on evidence thresholds for incremental decision-making. In contrast to traditional evaluations, RCE provides early evidence on effectiveness and facilitates a stepped approach to decision making in expectation of future regularly updated evidence. RCE allows for identification of trends in adjusted effect size. It supports adapting a program in midstream in response to interim findings, or adapting the evaluation strategy to identify true improvements earlier. The 5-step RAPID approach that utilizes the cumulating evidence of program effectiveness over time could increase policy-makers' confidence in expediting decisions. RCE enables a step-wise approach to HTA decision-making, based on gradually emerging evidence, reducing delays in decision-making processes after traditional one-time evaluations.
Self-propelled motion of Au-Si droplets on Si(111) mediated by monoatomic step dissolution
NASA Astrophysics Data System (ADS)
Curiotto, S.; Leroy, F.; Cheynis, F.; Müller, P.
2015-02-01
By Low Energy Electron Microscopy, we show that the spontaneous motion of gold droplets on silicon (111) is chemically driven: the droplets tend to dissolve silicon monoatomic steps to reach the temperature-dependent Au-Si equilibrium stoichiometry. According to the droplet size, the motion details are different. In the first stages of Au deposition small droplets nucleate at steps and move continuously on single terraces. The droplets temporarily pin at each step they meet during their motion. During pinning, the growing droplets become supersaturated in Au. They depin from the steps when a notch nucleate on the upper step. Then the droplets climb up and locally dissolve the Si steps, leaving behind them deep tracks formed by notched steps. Measurements of the dissolution rate and the displacement lengths enable us to describe quantitatively the motion mechanism, also in terms of anisotropy of Si dissolution kinetics. Scaling laws for the droplet position as a function of time are proposed: x ∝ tn with 1/3 < n < 2/3.
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relativemore » to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.« less
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-04-25
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relativemore » to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.« less
Microstickies agglomeration by electric field.
Du, Xiaotang Tony; Hsieh, Jeffery S
2016-01-01
Microstickies deposits on both paper machine and paper products when it agglomerates under step change in ionic strength, pH, temperature and chemical additives. These stickies increase the down time of the paper mill and decrease the quality of paper. The key property of microstickies is its smaller size, which leads to low removal efficiency and difficulties in measurement. Thus the increase of microstickies size help improve both removal efficiency and reduce measurement difficulty. In this paper, a new agglomeration technology based on electric field was investigated. The electric treatment could also increase the size of stickies particles by around 100 times. The synergetic effect between electric field treatment and detacky chemicals/dispersants, including polyvinyl alcohol, poly(diallylmethylammonium chloride) and lignosulfonate, was also studied.
NASA Astrophysics Data System (ADS)
Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun
2017-10-01
In order to reduce the computing time in simulation of radio frequency (rf) plasma sources, various numerical schemes were developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size for fluid transport simulations of high density plasma discharges. Also, the semi-implicit method is a well-known numerical scheme to overcome on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low temperature plasma discharges has remained a considerable challenge. In particular, there was a difficulty on parallelization in time for the time periodic steady state problems such as capacitively coupled plasma discharges and rf sheath dynamics because values of plasma parameters in previous time step are used to calculate new values each time step. Therefore, we present a parallelization method for the time periodic steady state problems by using period-slices. In order to evaluate the efficiency of the developed method, one-dimensional fluid simulations are conducted for describing rf sheath dynamics. The result shows that speedup can be achieved by using a multithreading method.
Affordable Hybrid Heat Pump Clothes Dryer
DOE Office of Scientific and Technical Information (OSTI.GOV)
TeGrotenhuis, Ward E.; Butterfield, Andrew; Caldwell, Dustin D.
This project was successful in demonstrating the feasibility of a step change in residential clothes dryer energy efficiency by demonstrating heat pump technology capable of 50% energy savings over conventional standard-size electric dryers with comparable drying times. A prototype system was designed from off-the-shelf components that can meet the project’s efficiency goals and are affordable. An experimental prototype system was built based on the design that reached 50% energy savings. Improvements have been identified that will reduce drying times of over 60 minutes to reach the goal of 40 minutes. Nevertheless, the prototype represents a step change in efficiency overmore » heat pump dryers recently introduced to the U.S. market, with 30% improvement in energy efficiency at comparable drying times.« less
Image editing with Adobe Photoshop 6.0.
Caruso, Ronald D; Postel, Gregory C
2002-01-01
The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
Magnetic properties of mechanically alloyed Mn-Al-C powders
NASA Astrophysics Data System (ADS)
Kohmoto, O.; Kageyama, N.; Kageyama, Y.; Haji, H.; Uchida, M.; Matsushima, Y.
2011-01-01
We have prepared supersaturated-solution Mn-Al-C alloy powders by mechanical alloying using a planetary high-energy mill. The starting materials were pure Mn, Al and C powers. The mechanically-alloyed powders were subjected to a two-step heating. Although starting particles are Al and Mn with additive C, the Al peak disappears with MA time. With increasing MA time, transition from α-Mn to β-Mn does not occur; the α-Mn structure maintains. At 100 h, a single phase of supersaturated-solution α-Mn is obtained. The lattice constant of α-Mn decreases with increasing MA time. From the Scherrer formula, the crystallite size at 500 h is obtained as 200Å, which does not mean amorphous state. By two-step heating, high magnetization (66 emu/g) was obtained from short-time-milled powders (t=10 h). The precursor of the as-milled powder is not a single phase α-Mn but contains small amount of fcc Al. After two-step heating, the powder changes to τ-phase. Although the saturation magnetization increases, the value is less than that by conventional bulk MnAl (88 emu/g). Meanwhile, long-time-milled powder of single α-Mn phase results in low magnetization (5.2 emu/g) after two-step heating.
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might leads to stop the program run due to the huge number of calculating points and to a small paging file size of the MS-Windows virtual memory. In such case, it is recommended to enlarge the paging file size to the appropriate size, or to use a bigger value of integration step.
Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.
Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun
2011-12-01
Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of Electrocardiography (ECG) signal and limited bandwidth of Internet. However, compressed ECG must be decompressed before performing human identification using present research on ECG based biometric techniques. This additional step of decompression creates a significant processing delay for identification task. This becomes an obvious burden on a system, if this needs to be done for a trillion of compressed ECG per hour by the hospital. Even though the hospital might be able to come up with an expensive infrastructure to tame the exuberant processing, for small intermediate nodes in a multihop network identification preceded by decompression is confronting. In this paper, we report a technique by which a person can be identified directly from his / her compressed ECG. This technique completely obviates the step of decompression and therefore upholds biometric identification less intimidating for the smaller nodes in a multihop network. The biometric template created by this new technique is lower in size compared to the existing ECG based biometrics as well as other forms of biometrics like face, finger, retina etc. (up to 8302 times lower than face template and 9 times lower than existing ECG based biometric template). Lower size of the template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.
Lock, Nina; Bremholm, Martin; Christensen, Mogens; Almer, Jonathan; Chen, Yu-Sheng; Iversen, Bo B
2009-12-14
Boehmite (AlOOH) nanoparticles have been synthesized in subcritical (300 bar, 350 degrees C) and supercritical (300 bar, 400 degrees C) water. The formation and growth of AlOOH nanoparticles were studied in situ by small- and wide-angle X-ray scattering (SAXS and WAXS) using 80 keV synchrotron radiation. The SAXS/WAXS data were measured simultaneously with a time resolution greater than 10 s and revealed the initial nucleation of amorphous particles takes place within 10 s with subsequent crystallization after 30 s. No diffraction signals were observed from Al(OH)(3) within the time resolution of the experiment, which shows that the dehydration step of the reaction is fast and the hydrolysis step rate-determining. The sizes of the crystalline particles were determined as a function of time. The overall size evolution patterns are similar in sub- and supercritical water, but the growth is faster and the final particle size larger under supercritical conditions. After approximately 5 min, the rate of particle growth decreases in both sub- and supercritical water. Heating of the boehmite nanoparticle suspension allowed an in situ X-ray investigation of the phase transformation of boehmite to aluminium oxide. Under the wet conditions used in this work, the transition starts at 530 degrees C and gives a two-phase product of hydrated and non-hydrated aluminium oxide.
Lessons from Alternative Grading: Essential Qualities of Teacher Feedback
ERIC Educational Resources Information Center
Percell, Jay C.
2017-01-01
One critically important step in the instructional process is providing feedback to students, and yet, providing timely and thorough feedback is often lacking due attention. Reasons for this oversight could range from several factors including increased class sizes, vast content coverage requirements, extracurricular responsibilities, and the…
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates. .
A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size
Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.
2011-01-01
Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhanying; Department of Applied Science, University of Québec at Chicoutimi, Saguenay, QC G7H 2B1; Zhao, Gang
2016-04-15
The effects of two homogenization treatments applied to the direct chill (DC) cast billet on the recrystallization behavior in 7150 aluminum alloy during post-rolling annealing have been investigated using the electron backscatter diffraction (EBSD) technique. Following hot and cold rolling to the sheet, measured orientation maps, the recrystallization fraction and grain size, the misorientation angle and the subgrain size were used to characterize the recovery and recrystallization processes at different annealing temperatures. The results were compared between the conventional one-step homogenization and the new two-step homogenization, with the first step being pretreated at 250 °C. Al{sub 3}Zr dispersoids with highermore » densities and smaller sizes were obtained after the two-step homogenization, which strongly retarded subgrain/grain boundary mobility and inhibited recrystallization. Compared with the conventional one-step homogenized samples, a significantly lower recrystallized fraction and a smaller recrystallized grain size were obtained under all annealing conditions after cold rolling in the two-step homogenized samples. - Highlights: • Effects of two homogenization treatments on recrystallization in 7150 Al sheets • Quantitative study on the recrystallization evolution during post-rolling annealing • Al{sub 3}Zr dispersoids with higher densities and smaller sizes after two-step treatment • Higher recrystallization resistance of 7150 sheets with two-step homogenization.« less
SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jintao; Seo, Sangmin; Balaji, Pavan
2016-08-16
In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with the size of sequencing data ranging from terabyes to petabytes. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For the input parallelization, the input data is divided into virtual fragments with nearly equal size, and the start position and end position of each fragment are automatically separated at the beginning of the reads. Inmore » k-mer graph construction, in order to improve the communication efficiency, the message size is kept constant between any two processes by proportionally increasing the number of nucleotides to the number of processes in the input parallelization step for each round. The memory usage is also decreased because only a small part of the input data is processed in each round. With graph simplification, the communication protocol reduces the number of communication loops from four to two loops and decreases the idle communication time. The optimized assembler is denoted as SWAP-Assembler 2 (SWAP2). In our experiments using a 1000 Genomes project dataset of 4 terabytes (the largest dataset ever used for assembling) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMER assembler and the SWAP-Assembler. On the Yanhuang dataset of 300 gigabytes, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.« less
Large scale crystallization of protein pharmaceuticals in microgravity via temperature change
NASA Technical Reports Server (NTRS)
Long, Marianna M.
1992-01-01
The major objective of this research effort is the temperature driven growth of protein crystals in large batches in the microgravity environment of space. Pharmaceutical houses are developing protein products for patient care, for example, human insulin, human growth hormone, interferons, and tissue plasminogen activator or TPA, the clot buster for heart attack victims. Except for insulin, these are very high value products; they are extremely potent in small quantities and have a great value per gram of material. It is feasible that microgravity crystallization can be a cost recoverable, economically sound final processing step in their manufacture. Large scale protein crystal growth in microgravity has significant advantages from the basic science and the applied science standpoints. Crystal growth can proceed unhindered due to lack of surface effects. Dynamic control is possible and relatively easy. The method has the potential to yield large quantities of pure crystalline product. Crystallization is a time honored procedure for purifying organic materials and microgravity crystallization could be the final step to remove trace impurities from high value protein pharmaceuticals. In addition, microgravity grown crystals could be the final formulation for those medicines that need to be administered in a timed release fashion. Long lasting insulin, insulin lente, is such a product. Also crystalline protein pharmaceuticals are more stable for long-term storage. Temperature, as the initiation step, has certain advantages. Again, dynamic control of the crystallization process is possible and easy. A temperature step is non-invasive and is the most subtle way to control protein solubility and therefore crystallization. Seeding is not necessary. Changes in protein and precipitant concentrations and pH are not necessary. Finally, this method represents a new way to crystallize proteins in space that takes advantage of the unique microgravity environment. The results from two flights showed that the hardware performed perfectly, many crystals were produced, and they were much larger than their ground grown controls. Morphometric analysis was done on over 4,000 crystals to establish crystal size, size distribution, and relative size. Space grown crystals were remarkably larger than their earth grown counterparts and crystal size was a function of PCF volume. That size distribution for the space grown crystals was a function of PCF volume may indicate that ultimate size was a function of temperature gradient. Since the insulin protein concentration was very low, 0.4 mg/ml, the size distribution could also be following the total amount of protein in each of the PCF's. X-ray analysis showed that the bigger space grown insulin crystals diffracted to higher resolution than their ground grown controls. When the data were normalized for size, they still indicated that the space crystals were better than the ground crystals.
Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel
2017-07-01
To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptual cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy aged-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Critical Motor Number for Fractional Steps of Cytoskeletal Filaments in Gliding Assays
Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan
2012-01-01
In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, . Because of thermal fluctuations, fractional filament steps are only detectable as long as . The corresponding fractional filament step size is where is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be , and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number depends on the elastic stalk properties and is reduced to for linear springs with a nonzero rest length. Furthermore, is shown to depend quadratically on the motor step size . Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number . Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface. PMID:22927953
Two-Step Sintering Behavior of Sol-Gel Derived Dense and Submicron-Grained YIG Ceramics
NASA Astrophysics Data System (ADS)
Chen, Ruoyuan; Zhou, Jijun; Zheng, Liang; Zheng, Hui; Zheng, Peng; Ying, Zhihua; Deng, Jiangxia
2018-04-01
In this work, dense and submicron-grain yttrium iron garnet (YIG, Y3Fe5O12) ceramics were fabricated by a two-step sintering (TSS) method using nano-size YIG powder prepared by a citrate sol-gel method. The densification, microstructure, magnetic properties and ferromagnetic resonance (FMR) linewidth of the ceramics were investigated. The sample prepared at 1300°C in T 1, 1225°C in T 2 and 18 h holding time has a density higher than 98% of the theoretical value and exhibits a homogeneous microstructure with fine grain size (0.975 μm). In addition, the saturation magnetization ( M S) of this sample reaches 27.18 emu/g. High density and small grain size can also achieve small FMR linewidth. Consequently, these results show that the sol-gel process combined with the TSS process can effectively suppress grain-boundary migration while maintaining active grain-boundary diffusion to obtain dense and fine-grained YIG ceramics with appropriate magnetic properties.
Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.
Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng
2013-01-01
Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X[Symbol:see text]R(m x n) to the product of two lower-rank nonnegative factor matrices, i.e.,W[Symbol:see text]R(m x r) and H[Symbol:see text]R(r x n) (r < min {m,n}) and aims to preserve the local geometric structure of the dataset by minimizing squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from the drawback of slow-convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF which accelerates MUR by searching the optimal step-size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets including ORL and PIE and two text corpora including Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
Real-time feedback control of twin-screw wet granulation based on image analysis.
Madarász, Lajos; Nagy, Zsombor Kristóf; Hoffer, István; Szabó, Barnabás; Csontos, István; Pataki, Hajnalka; Démuth, Balázs; Szabó, Bence; Csorba, Kristóf; Marosi, György
2018-06-04
The present paper reports the first dynamic image analysis-based feedback control of continuous twin-screw wet granulation process. Granulation of the blend of lactose and starch was selected as a model process. The size and size distribution of the obtained particles were successfully monitored by a process camera coupled with an image analysis software developed by the authors. The validation of the developed system showed that the particle size analysis tool can determine the size of the granules with an error of less than 5 µm. The next step was to implement real-time feedback control of the process by controlling the liquid feeding rate of the pump through a PC, based on the real-time determined particle size results. After the establishment of the feedback control, the system could correct different real-life disturbances, creating a Process Analytically Controlled Technology (PACT), which guarantees the real-time monitoring and controlling of the quality of the granules. In the event of changes or bad tendencies in the particle size, the system can automatically compensate the effect of disturbances, ensuring proper product quality. This kind of quality assurance approach is especially important in the case of continuous pharmaceutical technologies. Copyright © 2018 Elsevier B.V. All rights reserved.
Bifurcation analysis of a discrete-time ratio-dependent predator-prey model with Allee Effect
NASA Astrophysics Data System (ADS)
Cheng, Lifang; Cao, Hongjun
2016-09-01
A discrete-time predator-prey model with Allee effect is investigated in this paper. We consider the strong and the weak Allee effect (the population growth rate is negative and positive at low population density, respectively). From the stability analysis and the bifurcation diagrams, we get that the model with Allee effect (strong or weak) growth function and the model with logistic growth function have somewhat similar bifurcation structures. If the predator growth rate is smaller than its death rate, two species cannot coexist due to having no interior fixed points. When the predator growth rate is greater than its death rate and other parameters are fixed, the model can have two interior fixed points. One is always unstable, and the stability of the other is determined by the integral step size, which decides the species coexistence or not in some extent. If we increase the value of the integral step size, then the bifurcated period doubled orbits or invariant circle orbits may arise. So the numbers of the prey and the predator deviate from one stable state and then circulate along the period orbits or quasi-period orbits. When the integral step size is increased to a critical value, chaotic orbits may appear with many uncertain period-windows, which means that the numbers of prey and predator will be chaotic. In terms of bifurcation diagrams and phase portraits, we know that the complexity degree of the model with strong Allee effect decreases, which is related to the fact that the persistence of species can be determined by the initial species densities.
Signs of the Times: Signage in the Library.
ERIC Educational Resources Information Center
Johnson, Carolyn
1993-01-01
Discusses the use of signs in libraries and lists 12 steps to create successful signage. Highlights include consistency, location, color, size, lettering, types of material, user needs, signage policy, planning, in-house fabrication versus vendors, and evaluation, A selected bibliography of 24 sources of information on library signage is included.…
Individual analyses of Lévy walk in semi-free ranging Tonkean macaques (Macaca tonkeana).
Sueur, Cédric; Briard, Léa; Petit, Odile
2011-01-01
Animals adapt their movement patterns to their environment in order to maximize their efficiency when searching for food. The Lévy walk and the Brownian walk are two types of random movement found in different species. Studies have shown that these random movements can switch from a Brownian to a Lévy walk according to the size distribution of food patches. However no study to date has analysed how characteristics such as sex, age, dominance or body mass affect the movement patterns of an individual. In this study we used the maximum likelihood method to examine the nature of the distribution of step lengths and waiting times and assessed how these distributions are influenced by the age and the sex of group members in a semi free-ranging group of ten Tonkean macaques. Individuals highly differed in their activity budget and in their movement patterns. We found an effect of age and sex of individuals on the power distribution of their step lengths and of their waiting times. The males and old individuals displayed a higher proportion of longer trajectories than females and young ones. As regards waiting times, females and old individuals displayed higher rates of long stationary periods than males and young individuals. These movement patterns resembling random walks can probably be explained by the animals moving from one location to other known locations. The power distribution of step lengths might be due to a power distribution of food patches in the enclosure while the power distribution of waiting times might be due to the power distribution of the patch sizes.
NASA Astrophysics Data System (ADS)
Yang, Haijian; Sun, Shuyu; Yang, Chao
2017-03-01
Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
Tavakoli, Mohammad Mahdi; Gu, Leilei; Gao, Yuan; Reckmeier, Claas; He, Jin; Rogach, Andrey L.; Yao, Yan; Fan, Zhiyong
2015-01-01
Organometallic trihalide perovskites are promising materials for photovoltaic applications, which have demonstrated a rapid rise in photovoltaic performance in a short period of time. We report a facile one-step method to fabricate planar heterojunction perovskite solar cells by chemical vapor deposition (CVD), with a solar power conversion efficiency of up to 11.1%. We performed a systematic optimization of CVD parameters such as temperature and growth time to obtain high quality films of CH3NH3PbI3 and CH3NH3PbI3-xClx perovskite. Scanning electron microscopy and time resolved photoluminescence data showed that the perovskite films have a large grain size of more than 1 micrometer, and carrier life-times of 10 ns and 120 ns for CH3NH3PbI3 and CH3NH3PbI3-xClx, respectively. This is the first demonstration of a highly efficient perovskite solar cell using one step CVD and there is likely room for significant improvement of device efficiency. PMID:26392200
Modeling myosin VI stepping dynamics
NASA Astrophysics Data System (ADS)
Tehver, Riina
Myosin VI is a molecular motor that transports intracellular cargo as well as acts as an anchor. The motor has been measured to have unusually large step size variation and it has been reported to make both long forward and short inchworm-like forward steps, as well as step backwards. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and investigate the evolutionary advantages of the large step size variation.
The evolution of complex life cycles when parasite mortality is size- or time-dependent.
Ball, M A; Parker, G A; Chubb, J C
2008-07-07
In complex cycles, helminth larvae in their intermediate hosts typically grow to a fixed size. We define this cessation of growth before transmission to the next host as growth arrest at larval maturity (GALM). Where the larval parasite controls its own growth in the intermediate host, in order that growth eventually arrests, some form of size- or time-dependent increase in its death rate must apply. In contrast, the switch from growth to sexual reproduction in the definitive host can be regulated by constant (time-independent) mortality as in standard life history theory. We here develop a step-wise model for the evolution of complex helminth life cycles through trophic transmission, based on the approach of Parker et al. [2003a. Evolution of complex life cycles in helminth parasites. Nature London 425, 480-484], but which includes size- or time-dependent increase in mortality rate. We assume that the growing larval parasite has two components to its death rate: (i) a constant, size- or time-independent component, and (ii) a component that increases with size or time in the intermediate host. When growth stops at larval maturity, there is a discontinuous change in mortality to a constant (time-independent) rate. This model generates the same optimal size for the parasite larva at GALM in the intermediate host whether the evolutionary approach to the complex life cycle is by adding a new host above the original definitive host (upward incorporation), or below the original definitive host (downward incorporation). We discuss some unexplored problems for cases where complex life cycles evolve through trophic transmission.
TRUST84. Sat-Unsat Flow in Deformable Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.
1984-11-01
TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficientsmore » may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.« less
Evaluation of the Actuator Line Model with coarse resolutions
NASA Astrophysics Data System (ADS)
Draper, M.; Usera, G.
2015-06-01
The aim of the present paper is to evaluate the Actuator Line Model (ALM) in spatial resolutions coarser than what is generally recommended, also using larger time steps. To accomplish this, the ALM has been implemented in the open source code caffa3d.MBRi and validated against experimental measurements of two wind tunnel campaigns (stand alone wind turbine and two wind turbines in line, case A and B respectively), taking into account two spatial resolutions: R/8 and R/15 (R is the rotor radius). A sensitivity analysis in case A was performed in order to get some insight into the influence of the smearing factor (3D Gaussian distribution) and time step size in power and thrust, as well as in the wake, without applying a tip loss correction factor (TLCF), for one tip speed ratio (TSR). It is concluded that as the smearing factor is larger or time step size is smaller the power is increased, but the velocity deficit is not as much affected. From this analysis, a smearing factor was obtained in order to calculate precisely the power coefficient for that TSR without applying TLCF. Results with this approach were compared with another simulation choosing a larger smearing factor and applying Prandtl's TLCF, for three values of TSR. It is found that applying the TLCF improves the power estimation and weakens the influence of the smearing factor. Finally, these 2 alternatives were tested in case B, confirming that conclusion.
Learn, R; Feigenbaum, E
2016-06-01
Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Learn, R.; Feigenbaum, E.
Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
Learn, R.; Feigenbaum, E.
2016-05-27
Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In the transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During the water removal, its concentration gradient induces cracks limiting the sample size: laboratory samples are generally less damaged because of their small size but upscaling the samples for industrial applications lead to an increasing cracking probability. Thanks to the drying step optimization, large size spinel samples were obtained.
Molecular simulation of small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2012-11-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Similar to the DSMC method, the downside of that approach suffers from computationally statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ˜ 10-3-10-4 have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make possible using a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice
NASA Astrophysics Data System (ADS)
Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.
2014-01-01
We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.
An exact and efficient first passage time algorithm for reaction–diffusion processes on a 2D-lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezzola, Andri, E-mail: andri.bezzola@gmail.com; Bales, Benjamin B., E-mail: bbbales2@gmail.com; Alkire, Richard C., E-mail: r-alkire@uiuc.edu
2014-01-01
We present an exact and efficient algorithm for reaction–diffusion–nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for largemore » ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.« less
Ilha, Jocemar; Centenaro, Lígia A; Broetto Cunha, Núbia; de Souza, Daniela F; Jaeger, Mariane; do Nascimento, Patrícia S; Kolling, Janaína; Ben, Juliana; Marcuzzo, Simone; Wyse, Angela T S; Gottfried, Carmem; Achaval, Matilde
2011-06-01
Several studies have shown that treadmill training improves neurological outcomes and promotes plasticity in lumbar spinal cord of spinal animals. The morphological and biochemical mechanisms underlying these phenomena remain unclear. The purpose of this study was to provide evidence of activity-dependent plasticity in spinal cord segment (L5) below a complete spinal cord transection (SCT) at T8-9 in rats in which the lower spinal cord segments have been fully separated from supraspinal control and that subsequently underwent treadmill step training. Five days after SCT, spinal animals started a step-training program on a treadmill with partial body weight support and manual step help. Hindlimb movements were evaluated over time and scored on the basis of the open-field BBB scale and were significantly improved at post-injury weeks 8 and 10 in trained spinal animals. Treadmill training also showed normalization of withdrawal reflex in trained spinal animals, which was significantly different from the untrained animals at post-injury weeks 8 and 10. Additionally, compared to controls, spinal rats had alpha motoneuronal soma size atrophy and reduced synaptophysin protein expression and Na(+), K(+)-ATPase activity in lumbar spinal cord. Step-trained rats had motoneuronal soma size, synaptophysin expression and Na(+), K(+)-ATPase activity similar to control animals. These findings suggest that treadmill step training can promote activity-dependent neural plasticity in lumbar spinal cord, which may lead to neurological improvements without supraspinal descending control after complete spinal cord injury.
A computational study of coherent structures in the wakes of two-dimensional bluff bodies
NASA Astrophysics Data System (ADS)
Pearce, Jeffrey Alan
1988-08-01
The periodic shedding of vortices from bluff bodies was first recognized in the late 1800's. Currently, there is great interest concerning the effect of vortex shedding on structures and on vehicle stability. In the design of bluff structures which will be exposed to a flow, knowledge of the shedding frequency and the amplitude of the aerodynamic forces is critical. The ability to computationally predict parameters associated with periodic vortex shedding is thus a valuable tool. In this study, the periodic shedding of vortices from several bluff body geometries is predicted. The study is conducted with a two-dimensional finite-difference code employed on various grid sizes. The effects of the grid size and time step on the accuracy of the solution are addressed. Strouhal numbers and aerodynamic force coefficients are computed for all of the bodies considered and compared with previous experimental results. Results indicate that the finite-difference code is capable of predicting periodic vortex shedding for all of the geometries tested. Refinement of the finite-difference grid was found to give little improvement in the prediction; however, the choice of time step size was shown to be critical. Predictions of Strouhal numbers were generally accurate, and the calculated aerodynamic forces generally exhibited behavior consistent with previous studies.
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improvement of the efficiency of photovoltaic system based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and its easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly , the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance IC MPPT algorithm has been proposed. Modeling and simulation of different operational conditions of conventional Incremental Conductance IC and proposed methods are presented. The proposed method was developed and tested successfully on a photovoltaic system based on Flyback converter and control circuit using dsPIC30F4011. Both, simulation and experimental design are provided in several aspects. A comparative study between the proposed variable step size and fixed step size IC MPPT method under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed in MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Variable aperture-based ptychographical iterative engine method
NASA Astrophysics Data System (ADS)
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not to be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can be potentially applied for various scientific researches.
Code of Federal Regulations, 2011 CFR
2011-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2013 CFR
2013-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2012 CFR
2012-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2010 CFR
2010-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.
Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga
2015-11-01
Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.
A Three-Pool Model Dissecting Readily Releasable Pool Replenishment at the Calyx of Held
Guo, Jun; Ge, Jian-long; Hao, Mei; Sun, Zhi-cheng; Wu, Xin-sheng; Zhu, Jian-bing; Wang, Wei; Yao, Pan-tong; Lin, Wei; Xue, Lei
2015-01-01
Although vesicle replenishment is critical in maintaining exo-endocytosis recycling, the underlying mechanisms are not well understood. Previous studies have shown that both rapid and slow endocytosis recycle into a very large recycling pool instead of within the readily releasable pool (RRP), and the time course of RRP replenishment is slowed down by more intense stimulation. This finding contradicts the calcium/calmodulin-dependence of RRP replenishment. Here we address this issue and report a three-pool model for RRP replenishment at a central synapse. Both rapid and slow endocytosis provide vesicles to a large reserve pool (RP) ~42.3 times the RRP size. When moving from the RP to the RRP, vesicles entered an intermediate pool (IP) ~2.7 times the RRP size with slow RP-IP kinetics and fast IP-RRP kinetics, which was responsible for the well-established slow and rapid components of RRP replenishment. Depletion of the IP caused the slower RRP replenishment observed after intense stimulation. These results establish, for the first time, a realistic cycling model with all parameters measured, revealing the contribution of each cycling step in synaptic transmission. The results call for modification of the current view of the vesicle recycling steps and their roles. PMID:25825223
Cycle time and cost reduction in large-size optics production
NASA Astrophysics Data System (ADS)
Hallock, Bob; Shorey, Aric; Courtney, Tom
2005-09-01
Optical fabrication process steps have remained largely unchanged for decades. Raw glass blanks have been rough-machined, generated to near net shape, loose abrasive or fine bound diamond ground and then polished. This set of processes is sequential and each subsequent operation removes the damage and micro cracking induced by the prior operational step. One of the long-lead aspects of this process has been the glass polishing. Primarily, this has been driven by the need to remove relatively large volumes of glass material compared to the polishing removal rate to ensure complete damage removal. The secondary time driver has been poor convergence to final figure and the corresponding polish-metrology cycles. The overall cycle time and resultant cost due to labor, equipment utilization and shop efficiency is increased, often significantly, when the optical prescription is aspheric. In addition to the long polishing cycle times, the duration of the polishing time is often very difficult to predict given that current polishing processes are not deterministic processes. This paper will describe a novel approach to large optics finishing, relying on several innovative technologies to be presented and illustrated through a variety of examples. The cycle time reductions enabled by this approach promises to result in significant cost and lead-time reductions for large size optics. In addition, corresponding increases in throughput will provide for less capital expenditure per square meter of optic produced. This process, comparative cycles time estimates and preliminary results will be discussed.
Effects of an aft facing step on the surface of a laminar flow glider wing
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Saiki, Neal
1993-01-01
A motor glider was used to perform a flight test study on the effects of aft facing steps in a laminar boundary layer. This study focuses on two dimensional aft facing steps oriented spanwise to the flow. The size and location of the aft facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of Reynolds number based on step height. Both of the step height Reynolds numbers for premature and full transition were determined. A hot film anemometry system was used to detect transition.
Kinesin Steps Do Not Alternate in Size☆
Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.
2008-01-01
Abstract Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906
NASA Astrophysics Data System (ADS)
Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2003-11-01
CdTe quantum dots embedded in glass matrix are grown using two-step annealing method. The results for the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using two-step annealing method have stronger quantum confinement, reduced size dispersion and higher volume ratio as compared to the single-step annealed samples. (
Effect of Pore Clogging on Kinetics of Lead Uptake by Clinoptilolite.
Inglezakis; Diamandis; Loizidou; Grigoropoulou
1999-07-01
The kinetics of lead-sodium ion exchange using pretreated natural clinoptilolite are investigated, more specifically the influence of agitation (0, 210, and 650 rpm) on the limiting step of the overall process, for particle sizes of 0.63-0.8 and 0.8-1 mm at ambient temperature and initial lead solutions of 500 mg l-1 without pH adjustment. The isotopic exchange model is found to fit the ion exchange process. Particle diffusion is shown to be the controlling step for both particle sizes under agitation, while in the absence of agitation film diffusion is shown to control. The ion exchange process effective diffusion coefficients are calculated and found to depend strongly on particle size in the case of agitation at 210 rpm and only slightly on particle size at 650 rpm. Lead uptake rates are higher for smaller particles only at rigorous agitation, while at mild agitation the results are reversed. These facts are due to partial clogging of the pores of the mineral during the grinding process. This is verified through comparison of lead uptake rates for two samples of the same particle size, one of which is rigorously washed for a certain time before being exposed to the ion exchange. Copyright 1999 Academic Press.
Cheviron, Perrine; Gouanvé, Fabrice; Espuche, Eliane
2014-08-08
Environmentally friendly silver nanocomposite films were prepared by an ex situ method consisting firstly in the preparation of colloidal silver dispersions and secondly in the dispersion of the as-prepared nanoparticles in a potato starch/glycerol matrix, keeping a green chemistry process all along the synthesis steps. In the first step concerned with the preparation of the colloidal silver dispersions, water, glucose and soluble starch were used as solvent, reducing agent and stabilizing agent, respectively. The influences of the glucose amount and reaction time were investigated on the size and size distribution of the silver nanoparticles. Two distinct silver nanoparticle populations in size (diameter around 5 nm size for the first one and from 20 to 50 nm for the second one) were distinguished and still highlighted in the potato starch/glycerol based nanocomposite films. It was remarkable that lower nanoparticle mean sizes were evidenced by both TEM and UV-vis analyses in the nanocomposites in comparison to the respective colloidal silver dispersions. A dispersion mechanism based on the potential interactions developed between the nanoparticles and the polymer matrix and on the polymer chain lengths was proposed to explain this morphology. These nanocomposite film series can be viewed as a promising candidate for many applications in antimicrobial packaging, biomedicines and sensors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling ultrasound propagation through material of increasing geometrical complexity.
Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen
2018-06-01
Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nguyen, Trung N.; Siegmund, Thomas; Tomar, Vikas; Kruzic, Jamie J.
2017-12-01
Size effects occur in non-uniform plastically deformed metals confined in a volume on the scale of micrometer or sub-micrometer. Such problems have been well studied using strain gradient rate-independent plasticity theories. Yet, plasticity theories describing the time-dependent behavior of metals in the presence of size effects are presently limited, and there is no consensus about how the size effects vary with strain rates or whether there is an interaction between them. This paper introduces a constitutive model which enables the analysis of complex load scenarios, including loading rate sensitivity, creep, relaxation and interactions thereof under the consideration of plastic strain gradient effects. A strain gradient viscoplasticity constitutive model based on the Kocks-Mecking theory of dislocation evolution, namely the strain gradient Kocks-Mecking (SG-KM) model, is established and allows one to capture both rate and size effects, and their interaction. A formulation of the model in the finite element analysis framework is derived. Numerical examples are presented. In a special virtual creep test with the presence of plastic strain gradients, creep rates are found to diminish with the specimen size, and are also found to depend on the loading rate in an initial ramp loading step. Stress relaxation in a solid medium containing cylindrical microvoids is predicted to increase with decreasing void radius and strain rate in a prior ramp loading step.
A road map to the new frontier: finding ETI
NASA Astrophysics Data System (ADS)
Bertaux, J. L.
2014-04-01
An obvious New Frontier for humanity is to locate our nearest neighbors technically advanced (ETI, extra-terrestrial intelligence). This quest can be achieved with three steps. 1. find the nearest exoplanets in the habitable zone (HZ) 2. find biosignatures in their spectra 3. find signs of advance technology. We argue that steps 2 and 3 will require space telescopes that need to be oriented to targets already identified in step 1 as hosting exoplanets of Earth or super Earth size in the habitable zone. We show that non-transiting planets in HZ are 3 to 9 times nearer the sun than transiting planets, the gain factor being a function of star temperature. The requirement for step 1 is within the reach of a network of 2.5 m diameter ground-based automated telescopes associated with HARPS-type spectrometers.
Droplet size and velocity distributions for spray modelling
NASA Astrophysics Data System (ADS)
Jones, D. P.; Watkins, A. P.
2012-01-01
Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002,2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfy these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult due to the fact that it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.
Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research
NASA Astrophysics Data System (ADS)
Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.
2010-12-01
Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize scanning, image reconstruction/segmentation and pore-space analysis capabilities of a new generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT-data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
Role of transient water pressure in quarrying: A subglacial experiment using acoustic emissions
Cohen, D.; Hooyer, T.S.; Iverson, N.R.; Thomason, J.F.; Jackson, M.
2006-01-01
Probably the most important mechanism of glacial erosion is quarrying: the growth and coalescence of cracks in subglacial bedrock and dislodgement of resultant rock fragments. Although evidence indicates that erosion rates depend on sliding speed, rates of crack growth in bedrock may be enhanced by changing stresses on the bed caused by fluctuating basal water pressure in zones of ice-bed separation. To study quarrying in real time, a granite step, 12 cm high with a crack in its stoss surface, was installed at the bed of Engabreen, Norway. Acoustic emission sensors monitored crack growth events in the step as ice slid over it. Vertical stresses, water pressure, and cavity height in the lee of the step were also measured. Water was pumped to the lee of the step several times over 8 days. Pumping initially caused opening of a leeward cavity, which then closed after pumping was stopped and water pressure decreased. During cavity closure, acoustic emissions emanating mostly from the vicinity of the base of the crack in the step increased dramatically. With repeated pump tests this crack grew with time until the step's lee surface was quarried. Our experiments indicate that fluctuating water pressure caused stress thresholds required for crack growth to be exceeded. Natural basal water pressure fluctuations should also concentrate stresses on rock steps, increasing rates of crack growth. Stress changes on the bed due to water pressure fluctuations will increase in magnitude and duration with cavity size, which may help explain the effect of sliding speed on erosion rates. Copyright 2006 by the American Geophysical Union.
The Case for Sustainable Laboratories: First Steps at Harvard University
ERIC Educational Resources Information Center
Woolliams, Jessica; Lloyd, Matthew; Spengler, John D.
2005-01-01
Purpose: Laboratories typically consume 4-5 times more energy than similarly-sized commercial space. This paper adds to a growing dialogue about how to "green" a laboratory's design and operations. Design/methodology/approach: The paper is divided into three sections. The first section reviews the background and theoretical issues. A…
INTERA Environmental Consultants, Inc.
1979-01-01
The major limitation of the model arises using second-order correct (central-difference) finite-difference approximation in space. To avoid numerical oscillations in the solution, the user must restrict grid block and time step sizes depending upon the magnitude of the dispersivity.
Malacrida, Leonel; Astrada, Soledad; Briva, Arturo; Bollati-Fogolín, Mariela; Gratton, Enrico; Bagatolli, Luis A
2016-11-01
Using LAURDAN spectral imaging and spectral phasor analysis we concurrently studied the growth and hydration state of subcellular organelles (lamellar body-like, LB-like) from live A549 lung cancer cells at different post-confluence days. Our results reveal a time dependent two-step process governing the size and hydration of these intracellular LB-like structures. Specifically, a first step (days 1 to 7) is characterized by an increase in their size, followed by a second one (days 7 to 14) where the organelles display a decrease in their global hydration properties. Interestingly, our results also show that their hydration properties significantly differ from those observed in well-characterized artificial lamellar model membranes, challenging the notion that a pure lamellar membrane organization is present in these organelles at intracellular conditions. Finally, these LB-like structures show a significant increase in their hydration state upon secretion, suggesting a relevant role of entropy during this process. Copyright © 2016 Elsevier B.V. All rights reserved.
Critical motor number for fractional steps of cytoskeletal filaments in gliding assays.
Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan
2012-01-01
In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N(c). Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N(c). The corresponding fractional filament step size is l/N where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N(c) = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number N(c) depends on the elastic stalk properties and is reduced to N(c) = 3 for linear springs with a nonzero rest length. Furthermore, N(c) is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number
Aging effect on step adjustments and stability control in visually perturbed gait initiation.
Sun, Ruopeng; Cui, Chuyi; Shea, John B
2017-10-01
Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit original motor planning, select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustment (APA) during gait initiation were used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibited a significantly greater undershoot in foot placement to late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior direction revealed both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future study of screening deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.
Microstructure of room temperature ionic liquids at stepped graphite electrodes
Feng, Guang; Li, Song; Zhao, Wei; ...
2015-07-14
Molecular dynamics simulations of room temperature ionic liquid (RTIL) [emim][TFSI] at stepped graphite electrodes were performed to investigate the influence of the thickness of the electrode surface step on the microstructure of interfacial RTILs. A strong correlation was observed between the interfacial RTIL structure and the step thickness in electrode surface as well as the ion size. Specifically, when the step thickness is commensurate with ion size, the interfacial layering of cation/anion is more evident; whereas, the layering tends to be less defined when the step thickness is close to the half of ion size. Furthermore, two-dimensional microstructure of ionmore » layers exhibits different patterns and alignments of counter-ion/co-ion lattice at neutral and charged electrodes. As the cation/anion layering could impose considerable effects on ion diffusion, the detailed information of interfacial RTILs at stepped graphite presented here would help to understand the molecular mechanism of RTIL-electrode interfaces in supercapacitors.« less
NASA Astrophysics Data System (ADS)
Martínez de Yuso, Alicia; Le Meins, Jean-Marc; Oumellal, Yassine; Paul-Boncour, Valérie; Zlotea, Claudia; Matei Ghimbeu, Camelia
2016-12-01
An easy and rapid one-pot microwave-assisted soft-template synthesis method for the preparation of Pd-Ni nanoalloys confined in mesoporous carbon is reported. This approach allows the formation of mesoporous carbon and the growth of the particles at the same time, under short microwave irradiation (4 h) compared to the several days spent for the classical approach. In addition, the synthesis steps are diminished and no thermopolymerization step or reduction treatment being required. The influence of the Pd-Ni composition on the particle size and on the carbon characteristics was investigated. Pd-Ni solid solutions in the whole composition range could be obtained, and the metallic composition proved to have an important effect on the nanoparticle size but low influence on carbon textural properties. Small and uniformly distributed nanoparticles were confined in mesoporous carbon with uniform pore size distribution, and dependence between the nanoparticle size and the nanoalloy composition was observed, i.e., increase of the particle size with increasing the Ni content (from 5 to 14 nm). The magnetic properties of the materials showed a strong nanoparticle size and/or composition effect. The blocking temperature of Pd-Ni nanoalloys increases with the increase of Ni amount and therefore of particle size. The magnetization values are smaller than the bulk counterpart particularly for the Ni-rich compositions due to the formed graphitic shells surrounding the particles inducing a dead magnetic layer.
NASA Astrophysics Data System (ADS)
Liu, N.; Li, M.; Liu, L.; Yang, Y.; Mai, J.; Pu, H.; Sun, Y.; Li, W. J.
2018-02-01
The customized fabrication of microelectrodes from gold nanoparticles (AuNPs) has attracted much attention due to their numerous applications in chemistry and biomedical engineering, such as for surface-enhanced Raman spectroscopy (SERS) and as catalyst sites for electrochemistry. Herein, we present a novel optically-induced electrodeposition (OED) method for rapidly fabricating gold electrodes which are also surface-modified with nanoparticles in one single step. The electrodeposition mechanism, with respect to the applied AC voltage signal and the elapsed deposition time, on the resulting morphology and particle sizes was investigated. The results from SEM and AFM analysis demonstrated that 80-200 nm gold particles can be formed on the surface of the gold electrodes. Simultaneously, both the size of the nanoparticles and the roughness of the fabricated electrodes can be regulated by the deposition time. Compared to state-of-the-art methods for fabricating microelectrodes with AuNPs, such as nano-seed-mediated growth and conventional electrodeposition, this OED technique has several advantages including: (1) electrode fabrication and surface modification using nanoparticles are completed in a single step, eliminating the need for prefabricating micro electrodes; (2) the patterning of electrodes is defined using a digitally-customized, projected optical image rather than using fixed physical masks; and (3) both the fabrication and surface modification processes are rapid, and the entire fabrication process only requires less than 6 s.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
Solid Hydrogen Experiments for Atomic Propellants
NASA Technical Reports Server (NTRS)
Palaszewski, Bryan
2001-01-01
This paper illustrates experiments that were conducted on the formation of solid hydrogen particles in liquid helium. Solid particles of hydrogen were frozen in liquid helium, and observed with a video camera. The solid hydrogen particle sizes, their molecular structure transitions, and their agglomeration times were estimated. article sizes of 1.8 to 4.6 mm (0.07 to 0. 18 in.) were measured. The particle agglomeration times were 0.5 to 11 min, depending on the loading of particles in the dewar. These experiments are the first step toward visually characterizing these particles, and allow designers to understand what issues must be addressed in atomic propellant feed system designs for future aerospace vehicles.
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model updating (FEM) advocates the adoption of Bayesian analysis techniques to dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the Posterior Distribution Function which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the Posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses the molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability where the gradient of the log-density of the Posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of the Hybrid Monte Carlo (HMC) and designed to improve sampling for large-system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method is tested on the updating of two real structures; an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure and is compared to the application of the HMC algorithm on the same structures.
Kinetics of spontaneous filament nucleation via oligomers: Insights from theory and simulation
NASA Astrophysics Data System (ADS)
Šarić, Andela; Michaels, Thomas C. T.; Zaccone, Alessio; Knowles, Tuomas P. J.; Frenkel, Daan
2016-12-01
Nucleation processes are at the heart of a large number of phenomena, from cloud formation to protein crystallization. A recently emerging area where nucleation is highly relevant is the initiation of filamentous protein self-assembly, a process that has broad implications in many research areas ranging from medicine to nanotechnology. As such, spontaneous nucleation of protein fibrils has received much attention in recent years with many theoretical and experimental studies focussing on the underlying physical principles. In this paper we make a step forward in this direction and explore the early time behaviour of filamentous protein growth in the context of nucleation theory. We first provide an overview of the thermodynamics and kinetics of spontaneous nucleation of protein filaments in the presence of one relevant degree of freedom, namely the cluster size. In this case, we review how key kinetic observables, such as the reaction order of spontaneous nucleation, are directly related to the physical size of the critical nucleus. We then focus on the increasingly prominent case of filament nucleation that includes a conformational conversion of the nucleating building-block as an additional slow step in the nucleation process. Using computer simulations, we study the concentration dependence of the nucleation rate. We find that, under these circumstances, the reaction order of spontaneous nucleation with respect to the free monomer does no longer relate to the overall physical size of the nucleating aggregate but rather to the portion of the aggregate that actively participates in the conformational conversion. Our results thus provide a novel interpretation of the common kinetic descriptors of protein filament formation, including the reaction order of the nucleation step or the scaling exponent of lag times, and put into perspective current theoretical descriptions of protein aggregation.
Kinetics of spontaneous filament nucleation via oligomers: Insights from theory and simulation.
Šarić, Anđela; Michaels, Thomas C T; Zaccone, Alessio; Knowles, Tuomas P J; Frenkel, Daan
2016-12-07
Nucleation processes are at the heart of a large number of phenomena, from cloud formation to protein crystallization. A recently emerging area where nucleation is highly relevant is the initiation of filamentous protein self-assembly, a process that has broad implications in many research areas ranging from medicine to nanotechnology. As such, spontaneous nucleation of protein fibrils has received much attention in recent years with many theoretical and experimental studies focussing on the underlying physical principles. In this paper we make a step forward in this direction and explore the early time behaviour of filamentous protein growth in the context of nucleation theory. We first provide an overview of the thermodynamics and kinetics of spontaneous nucleation of protein filaments in the presence of one relevant degree of freedom, namely the cluster size. In this case, we review how key kinetic observables, such as the reaction order of spontaneous nucleation, are directly related to the physical size of the critical nucleus. We then focus on the increasingly prominent case of filament nucleation that includes a conformational conversion of the nucleating building-block as an additional slow step in the nucleation process. Using computer simulations, we study the concentration dependence of the nucleation rate. We find that, under these circumstances, the reaction order of spontaneous nucleation with respect to the free monomer does no longer relate to the overall physical size of the nucleating aggregate but rather to the portion of the aggregate that actively participates in the conformational conversion. Our results thus provide a novel interpretation of the common kinetic descriptors of protein filament formation, including the reaction order of the nucleation step or the scaling exponent of lag times, and put into perspective current theoretical descriptions of protein aggregation.
Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.
2011-01-01
For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of the HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions, which were verified for positional accuracy gave a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and maximum error of 1.8 mm. Using a step size of 5 mm, reference isodose length (the length of 100% isodose line) was verified for single and multiple catheters of same and different source loadings. An error ≤1 mm was measured in 57% of tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed and 70% of the step size errors were below 1 mm, with maximum of 1.2 mm. The step size ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in dose profile. PMID:21897562
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
Grain size effect on Lcr elastic wave for surface stress measurement of carbon steel
NASA Astrophysics Data System (ADS)
Liu, Bin; Miao, Wenbing; Dong, Shiyun; He, Peng
2018-04-01
Based on critical refraction longitudinal wave (Lcr wave) acoustoelastic theory, correction method for grain size effect on surface stress measurement was discussed in this paper. Two fixed distance Lcr wave transducers were used to collect Lcr wave, and difference in time of flight between Lcr waves was calculated with cross-correlation coefficient function, at last relationship of Lcr wave acoustoelastic coefficient and grain size was obtained. Results show that as grain size increases, propagation velocity of Lcr wave decreases, one cycle is optimal step length for calculating difference in time of flight between Lcr wave. When stress value is within stress turning point, relationship of difference in time of flight between Lcr wave and stress is basically consistent with Lcr wave acoustoelastic theory, while there is a deviation and it is higher gradually as stress increasing. Inhomogeneous elastic plastic deformation because of inhomogeneous microstructure and average value of surface stress in a fixed distance measured with Lcr wave were considered as the two main reasons for above results. As grain size increasing, Lcr wave acoustoelastic coefficient decreases in the form of power function, then correction method for grain size effect on surface stress measurement was proposed. Finally, theoretical discussion was verified by fracture morphology observation.
Huang, Edward Pei-Chuan; Wang, Hui-Chih; Ko, Patrick Chow-In; Chang, Anna Marie; Fu, Chia-Ming; Chen, Jiun-Wei; Liao, Yen-Chen; Liu, Hung-Chieh; Fang, Yao-De; Yang, Chih-Wei; Chiang, Wen-Chu; Ma, Matthew Huei-Ming; Chen, Shyr-Chyr
2013-09-01
The quality of cardiopulmonary resuscitation (CPR) is important to survival after cardiac arrest. Mechanical devices (MD) provide constant CPR, but their effectiveness may be affected by deployment timeliness. To identify the timeliness of the overall and of each essential step in the deployment of a piston-type MD during emergency department (ED) resuscitation, and to identify factors associated with delayed MD deployment by video recordings. Between December 2005 and December 2008, video clips from resuscitations with CPR sessions using a MD in the ED were reviewed using time-motion analyses. The overall deployment timeliness and the time spent on each essential step of deployment were measured. There were 37 CPR recordings that used a MD. Deployment of MD took an average 122.6 ± 57.8s. The 3 most time-consuming steps were: (1) setting the device (57.8 ± 38.3s), (2) positioning the patient (33.4 ± 38.0 s), and (3) positioning the device (14.7 ± 9.5s). Total no flow time was 89.1 ± 41.2s (72.7% of total time) and associated with the 3 most time-consuming steps. There was no difference in the total timeliness, no-flow time, and no-flow ratio between different rescuer numbers, time of day of the resuscitation, or body size of patients. Rescuers spent a significant amount of time on MD deployment, leading to long no-flow times. Lack of familiarity with the device and positioning strategy were associated with poor performance. Additional training in device deployment strategies are required to improve the benefits of mechanical CPR. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size ( N e ) of 50 breeding animals is taken from the literature as the minimum shortterm size to keep inbreeding below 1% per generation. Steps in the procedure adjust N e to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50 500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species generation time. Two population dispersion strategies are proposed: core population and dispersed population.
Linear micromechanical stepping drive for pinhole array positioning
NASA Astrophysics Data System (ADS)
Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin
2015-05-01
A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm2 optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload 50% of its own weight. The stepping drive movement, step sizes and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimum on an average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual decent resource allocation algorithm that utilizes delayed stochastic {gradients} for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both, constant and diminishing step sizes. It is also shown that with constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
Variable aperture-based ptychographical iterative engine method.
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not to be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can be potentially applied for various scientific researches. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Growth Evolution and Characterization of PLD Zn(Mg)O Nanowire Arrays
NASA Astrophysics Data System (ADS)
Rahm, Andreas; Nobis, Thomas; Lorenz, Michael; Zimmermann, Gregor; Boukos, Nikos; Travlos, Anastasios; Grundmann, Marius
ZnO and Zn0.98Mg0.02O nanowires have been grown by high-pressure pulsed laser deposition on sapphire substrates covered with gold colloidal particles as nucleation sites. We present a detailed study of the nanowire size and length distribution and of the growth evolution. We find that the aspect ratio varies linearly with deposition time. The linearity coefficient is independent of the catalytic gold particle size and lateral nanowire density. The superior structural quality of the whiskers is proven by X-ray diffraction and transmission electron microscopy. The defect-free ZnO nanowires exhibit a FWHM(2θ-ω) of the ZnO(0002) reflection of 22 arcsec. We show (0-11) step habit planes on the side faces of the nanowires that are a few atomic steps in height. The microscopic homogeneity of the optical properties is confirmed by temperature-dependent cathodoluminescence.
Muñoz, R.; Munuera, C.; Martínez, J. I.; Azpeitia, J.; Gómez-Aleixandre, C.; García-Hernández, M.
2016-01-01
Direct growth of graphene films on dielectric substrates (quartz and silica) is reported, by means of remote electron cyclotron resonance plasma assisted chemical vapor deposition r-(ECR-CVD) at low temperature (650°C). Using a two step deposition process- nucleation and growth- by changing the partial pressure of the gas precursors at constant temperature, mostly monolayer continuous films, with grain sizes up to 500 nm are grown, exhibiting transmittance larger than 92% and sheet resistance as low as 900 Ω·sq-1. The grain size and nucleation density of the resulting graphene sheets can be controlled varying the deposition time and pressure. In additon, first-principles DFT-based calculations have been carried out in order to rationalize the oxygen reduction in the quartz surface experimentally observed. This method is easily scalable and avoids damaging and expensive transfer steps of graphene films, improving compatibility with current fabrication technologies. PMID:28070341
Fabrication of Size-Tunable Metallic Nanoparticles Using Plasmid DNA as a Biomolecular Reactor
Samson, Jacopo; Piscopo, Irene; Yampolski, Alex; Nahirney, Patrick; Parpas, Andrea; Aggarwal, Amit; Saleh, Raihan; Drain, Charles Michael
2011-01-01
Plasmid DNA can be used as a template to yield gold, palladium, silver, and chromium nanoparticles of different sizes based on variations in incubation time at 70 °C with gold phosphine complexes, with the acetates of silver or palladium, or chromium acetylacetonate. The employment of mild synthetic conditions, minimal procedural steps, and aqueous solvents makes this method environmentally greener and ensures general feasibility. The use of plasmids exploits the capabilities of the biotechnology industry as a source of nanoreactor materials. PMID:28348280
A facile single-step synthesis of ternary multicore magneto-plasmonic nanoparticles.
Benelmekki, Maria; Bohra, Murtaza; Kim, Jeong-Hwan; Diaz, Rosa E; Vernieres, Jerome; Grammatikopoulos, Panagiotis; Sowwan, Mukhles
2014-04-07
We report a facile single-step synthesis of ternary hybrid nanoparticles (NPs) composed of multiple dumbbell-like iron-silver (FeAg) cores encapsulated by a silicon (Si) shell using a versatile co-sputter gas-condensation technique. In comparison to previously reported binary magneto-plasmonic NPs, the advantage conferred by a Si shell is to bind the multiple magneto-plasmonic (FeAg) cores together and prevent them from aggregation at the same time. Further, we demonstrate that the size of the NPs and number of cores in each NP can be modulated over a wide range by tuning the experimental parameters.
NASA Astrophysics Data System (ADS)
Bahtiar, A.; Rahmanita, S.; Inayatie, Y. D.
2017-05-01
Morphology of perovskite film is a key important for achieving high performance perovskite solar cells. Perovskite films are commonly prepared by two-step spin-coating method. However, pin-holes are frequently formed in perovskite films due to incomplete conversion of lead-iodide (PbI2) into perovskite CH3NH3PbI3. Pin-holes in perovskite film cause large hysteresis in current-voltage curve of solar cells due to large series resistance between perovskite layer-hole transport material. Moreover, crystal structure and grain size of perovskite crystal are also other important parameters for achieving high performance solar cells, which are significantly affected by preparation of perovskite film. We studied the effect of preparation of perovskite film using controlled spin-coating parameters on crystal structure and morphological properties of perovskite film. We used two-step spin-coating method for preparation of perovskite film with varied spinning speed, spinning time and temperature of spin-coating process to control growth of perovskite crystal aimed to produce high quality perovskite crystal with pin-hole free and large grain size. All experiment was performed in air with high humidity (larger than 80%). The best crystal structure, pin-hole free with large grain crystal size of perovskite film was obtained from film prepared at room temperature with spinning speed 1000 rpm for 20 seconds and annealed at 100°C for 300 seconds.
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains. It's Dr. Laura Kinnaman, not sure where to put the Prefix.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, have been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem, they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Rare events in stochastic populations under bursty reproduction
NASA Astrophysics Data System (ADS)
Be'er, Shay; Assaf, Michael
2016-11-01
Recently, a first step was made by the authors towards a systematic investigation of the effect of reaction-step-size noise—uncertainty in the step size of the reaction—on the dynamics of stochastic populations. This was done by investigating the effect of bursty influx on the switching dynamics of stochastic populations. Here we extend this formalism to account for bursty reproduction processes, and improve the accuracy of the formalism to include subleading-order corrections. Bursty reproduction appears in various contexts, where notable examples include bursty viral production from infected cells, and reproduction of mammals involving varying number of offspring. The main question we quantitatively address is how bursty reproduction affects the overall fate of the population. We consider two complementary scenarios: population extinction and population survival; in the former a population gets extinct after maintaining a long-lived metastable state, whereas in the latter a population proliferates despite undergoing a deterministic drift towards extinction. In both models reproduction occurs in bursts, sampled from an arbitrary distribution. Using the WKB approach, we show in the extinction problem that bursty reproduction broadens the quasi-stationary distribution of population sizes in the metastable state, which results in a drastic reduction of the mean time to extinction compared to the non-bursty case. In the survival problem, it is shown that bursty reproduction drastically increases the survival probability of the population. Close to the bifurcation limit our analytical results simplify considerably and are shown to depend solely on the mean and variance of the burst-size distribution. Our formalism is demonstrated on several realistic distributions which all compare well with numerical Monte-Carlo simulations.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2003-01-01
A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step, which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem.© 2014. The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direction solution of the Volterra Integral, Zienkiewicz's linear Prony series method, and a new method called Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra Integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance function. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
How many steps/day are enough? For older adults and special populations
2011-01-01
Older adults and special populations (living with disability and/or chronic illness that may limit mobility and/or physical endurance) can benefit from practicing a more physically active lifestyle, typically by increasing ambulatory activity. Step counting devices (accelerometers and pedometers) offer an opportunity to monitor daily ambulatory activity; however, an appropriate translation of public health guidelines in terms of steps/day is unknown. Therefore this review was conducted to translate public health recommendations in terms of steps/day. Normative data indicates that 1) healthy older adults average 2,000-9,000 steps/day, and 2) special populations average 1,200-8,800 steps/day. Pedometer-based interventions in older adults and special populations elicit a weighted increase of approximately 775 steps/day (or an effect size of 0.26) and 2,215 steps/day (or an effect size of 0.67), respectively. There is no evidence to inform a moderate intensity cadence (i.e., steps/minute) in older adults at this time. However, using the adult cadence of 100 steps/minute to demark the lower end of an absolutely-defined moderate intensity (i.e., 3 METs), and multiplying this by 30 minutes produces a reasonable heuristic (i.e., guiding) value of 3,000 steps. However, this cadence may be unattainable in some frail/diseased populations. Regardless, to truly translate public health guidelines, these steps should be taken over and above activities performed in the course of daily living, be of at least moderate intensity accumulated in minimally 10 minute bouts, and add up to at least 150 minutes over the week. Considering a daily background of 5,000 steps/day (which may actually be too high for some older adults and/or special populations), a computed translation approximates 8,000 steps on days that include a target of achieving 30 minutes of moderate-to-vigorous physical activity (MVPA), and approximately 7,100 steps/day if averaged over a week. Measured directly and including these background activities, the evidence suggests that 30 minutes of daily MVPA accumulated in addition to habitual daily activities in healthy older adults is equivalent to taking approximately 7,000-10,000 steps/day. Those living with disability and/or chronic illness (that limits mobility and or/physical endurance) display lower levels of background daily activity, and this will affect whole-day estimates of recommended physical activity. PMID:21798044
Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja
2016-11-15
We present a biosensing platform for the detection of proteins based on agglutination of aptamer coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25pM with the same sample-to-answer time (15min 30s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments as well as integration of the assay on a low-cost disc are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.
Ali, Ghafar; Ahmad, Maqsood; Akhter, Javed Iqbal; Maqbool, Muhammad; Cho, Sung Oh
2010-08-01
A simple approach for the growth of long-range highly ordered nanoporous anodic alumina film in H(2)SO(4) electrolyte through a single step anodization without any additional pre-anodizing procedure is reported. Free-standing porous anodic alumina film of 180 microm thickness with through hole morphology was obtained. A simple and single step process was used for the detachment of alumina from aluminum substrate. The effect of anodizing conditions, such as anodizing voltage and time on the pore diameter and pore ordering is discussed. The metal/oxide and oxide/electrolyte interfaces were examined by high resolution scanning transmission electron microscope. The arrangement of pores on metal/oxide interface was well ordered with smaller diameters than that of the oxide/electrolyte interface. The inter-pore distance was larger in metal/oxide interface as compared to the oxide/electrolyte interface. The size of the ordered domain was found to depend strongly upon anodizing voltage and time. (c) 2010 Elsevier Ltd. All rights reserved.
A diffusive information preservation method for small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2013-06-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Similar to the DSMC method, the downside of that approach suffers from computationally statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ˜ 10-3-10-4 have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make possible using a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
Ultrafast learning in a hard-limited neural network pattern recognizer
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
1996-03-01
As we published in the last five years, the supervised learning in a hard-limited perceptron system can be accomplished in a noniterative manner if the input-output mapping to be learned satisfies a certain positive-linear-independency (or PLI) condition. When this condition is satisfied (for most practical pattern recognition applications, this condition should be satisfied,) the connection matrix required to meet this mapping can be obtained noniteratively in one step. Generally, there exist infinitively many solutions for the connection matrix when the PLI condition is satisfied. We can then select an optimum solution such that the recognition of any untrained patterns will become optimally robust in the recognition mode. The learning speed is very fast and close to real-time because the learning process is noniterative and one-step. This paper reports the theoretical analysis and the design of a practical charter recognition system for recognizing hand-written alphabets. The experimental result is recorded in real-time on an unedited video tape for demonstration purposes. It is seen from this real-time movie that the recognition of the untrained hand-written alphabets is invariant to size, location, orientation, and writing sequence, even the training is done with standard size, standard orientation, central location and standard writing sequence.
NASA Astrophysics Data System (ADS)
Taurino, Irene; Sanzó, Gabriella; Mazzei, Franco; Favero, Gabriele; de Micheli, Giovanni; Carrara, Sandro
2015-10-01
Novel methods to obtain Pt nanostructured electrodes have raised particular interest due to their high performance in electrochemistry. Several nanostructuration methods proposed in the literature use costly and bulky equipment or are time-consuming due to the numerous steps they involve. Here, Pt nanostructures were produced for the first time by one-step template-free electrodeposition on Pt bare electrodes. The change in size and shape of the nanostructures is proven to be dependent on the deposition parameters and on the ratio between sulphuric acid and chloride-complexes (i.e., hexachloroplatinate or tetrachloroplatinate). To further improve the electrochemical properties of electrodes, depositions of Pt nanostructures on previously synthesised Pt nanostructures are also performed. The electroactive surface areas exhibit a two order of magnitude improvement when Pt nanostructures with the smallest size are used. All the biosensors based on Pt nanostructures and immobilised glucose oxidase display higher sensitivity as compared to bare Pt electrodes. Pt nanostructures retained an excellent electrocatalytic activity towards the direct oxidation of glucose. Finally, the nanodeposits were proven to be an excellent solid contact for ion measurements, significantly improving the time-stability of the potential. The use of these new nanostructured coatings in electrochemical sensors opens new perspectives for multipanel monitoring of human metabolism.
Finite size effects in epidemic spreading: the problem of overpopulated systems
NASA Astrophysics Data System (ADS)
Ganczarek, Wojciech
2013-12-01
In this paper we analyze the impact of network size on the dynamics of epidemic spreading. In particular, we investigate the pace of infection in overpopulated systems. In order to do that, we design a model for epidemic spreading on a finite complex network with a restriction to at most one contamination per time step, which can serve as a model for sexually transmitted diseases spreading in some student communes. Because of the highly discrete character of the process, the analysis cannot use the continuous approximation widely exploited for most models. Using a discrete approach, we investigate the epidemic threshold and the quasi-stationary distribution. The main results are two theorems about the mixing time for the process: it scales like the logarithm of the network size and it is proportional to the inverse of the distance from the epidemic threshold.
Rapid prototyping of compliant human aortic roots for assessment of valved stents.
Kalejs, Martins; von Segesser, Ludwig Karl
2009-02-01
Adequate in-vitro training in valved stents deployment as well as testing of the latter devices requires compliant real-size models of the human aortic root. The casting methods utilized up to now are multi-step, time consuming and complicated. We pursued a goal of building a flexible 3D model in a single-step procedure. We created a precise 3D CAD model of a human aortic root using previously published anatomical and geometrical data and printed it using a novel rapid prototyping system developed by the Fab@Home project. As a material for 3D fabrication we used common house-hold silicone and afterwards dip-coated several models with dispersion silicone one or two times. To assess the production precision we compared the size of the final product with the CAD model. Compliance of the models was measured and compared with native porcine aortic root. Total fabrication time was 3 h and 20 min. Dip-coating one or two times with dispersion silicone if applied took one or two extra days, respectively. The error in dimensions of non-coated aortic root model compared to the CAD design was <3.0% along X, Y-axes and 4.1% along Z-axis. Compliance of a non-coated model as judged by the changes of radius values in the radial direction by 16.39% is significantly different (P<0.001) from native aortic tissue--23.54% at the pressure of 80-100 mmHg. Rapid prototyping of compliant, life-size anatomical models with the Fab@Home 3D printer is feasible--it is very quick compared to previous casting methods.
Detection limits for nanoparticles in solution with classical turbidity spectra
NASA Astrophysics Data System (ADS)
Le Blevennec, G.
2013-09-01
Detection of nanoparticles in solution is required to manage safety and environmental problems. Spectral transmission turbidity method has now been known for a long time. It is derived from the Mie Theory and can be applied to any number of spheres, randomly distributed and separated by large distance compared to wavelength. Here, we describe a method for determination of size, distribution and concentration of nanoparticles in solution using UV-Vis transmission measurements. The method combines Mie and Beer Lambert computation integrated in a best fit approximation. In a first step, a validation of the approach is completed on silver nanoparticles solution. Verification of results is realized with Transmission Electronic Microscopy measurements for size distribution and an Inductively Coupled Plasma Mass Spectrometry for concentration. In view of the good agreement obtained, a second step of work focuses on how to manage the concentration to be the most accurate on the size distribution. Those efficient conditions are determined by simple computation. As we are dealing with nanoparticles, one of the key points is to know what the size limits reachable are with that kind of approach based on classical electromagnetism. In taking into account the transmission spectrometer accuracy limit we determine for several types of materials, metals, dielectrics, semiconductors the particle size limit detectable by such a turbidity method. These surprising results are situated at the quantum physics frontier.
Lim, Jun Yeul; Lim, Dae Gon; Kim, Ki Hyun; Park, Sang-Koo; Jeong, Seong Hoon
2018-02-01
Effects of annealing steps during the freeze drying process on etanercept, model protein, were evaluated using various analytical methods. The annealing was introduced in three different ways depending on time and temperature. Residual water contents of dried cakes varied from 2.91% to 6.39% and decreased when the annealing step was adopted, suggesting that they are directly affected by the freeze drying methods Moreover, the samples were more homogenous when annealing was adopted. Transition temperatures of the excipients (sucrose, mannitol, and glycine) were dependent on the freeze drying steps. Size exclusion chromatography showed that monomer contents were high when annealing was adopted and also they decreased less after thermal storage at 60°C. Dynamic light scattering results exhibited that annealing can be helpful in inhibiting aggregation and that thermal storage of freeze-dried samples preferably induced fragmentation over aggregation. Shift of circular dichroism spectrum and of the contents of etanercept secondary structure was observed with different freeze drying steps and thermal storage conditions. All analytical results suggest that the physicochemical properties of etanercept formulation can differ in response to different freeze drying steps and that annealing is beneficial for maintaining stability of protein and reducing the time of freeze drying process. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
Audiovisual integration increases the intentional step synchronization of side-by-side walkers.
Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A
2017-12-01
When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
On geological interpretations of crystal size distributions: Constant vs. proportionate growth
Eberl, D.D.; Kile, D.E.; Drits, V.A.
2002-01-01
Geological interpretations of crystal size distributions (CSDs) depend on understanding the crystal growth laws that generated the distributions. Most descriptions of crystal growth, including a population-balance modeling equation that is widely used in petrology, assume that crystal growth rates at any particular time are identical for all crystals, and, therefore, independent of crystal size. This type of growth under constant conditions can be modeled by adding a constant length to the diameter of each crystal for each time step. This growth equation is unlikely to be correct for most mineral systems because it neither generates nor maintains the shapes of lognormal CSDs, which are among the most common types of CSDs observed in rocks. In an alternative approach, size-dependent (proportionate) growth is modeled approximately by multiplying the size of each crystal by a factor, an operation that maintains CSD shape and variance, and which is in accord with calcite growth experiments. The latter growth law can be obtained during supply controlled growth using a modified version of the Law of Proportionate Effect (LPE), an equation that simulates the reaction path followed by a CSD shape as mean size increases.
High-Order Space-Time Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2013-01-01
Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one-spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p(exp 2)). Fourier analyses for the one and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown
Crystallization Physics in Biomacromolecular Systems
NASA Technical Reports Server (NTRS)
Chernov, A. A.
2003-01-01
The crystals are built of molecules of protein, nucleic acid and their complexes, like viruses, approx. 5x10(exp 3)+ 3x10(exp 6) Da in weight and 2 + 20 nm in effective diameter. This size strongly exceeds action range of molecular forces and makes a big difference with inorganic crystals. Intermolecular contacts form patches on the biomacromolecular surface. Each patch may occupy only a small percent of the whole surface and vary from polymorph to polymorph of the same protein. Thus, under different conditions (pH, solution chemistry, temperature, any area on the macromolecular surface may form a contact. The crystal Young moduli, E approx. equals 0.1 + 0.5 GPa are more than 10 times lower than that of inorganics and the biomolecules themselves. Water within biocrystals (30-70%) is unable to flow unless typical deformation time is longer than approx. 10(exp -5)s. This explains the discrepancy between light scattering and static measurements of E. Nucleation and Growth requires typically concentrations exceeding the equilibrium ones up to 100 times - because of the new size scale results in 10 - 10(exp 3) times lower kinetic coefficients than that needed for inorganic solution growth. All phenomena observed in the latter occur with protein crystallization and are even better studied by AFM. Crystals are typically facetted. Among unexpected findings of general significance are - net molecular exchange flux at kinks is much lower than that expected from supersaturation, steps with low (< approx. 10(exp -2)) kink density at steps follow Gibbs-Thomson law only at very low supersaturations, step segment growth rate may be independent of step energy. Crystal perfection is a must of biocrystallization to achieve the major goal to find 3-D atomic structure of biomacromolecules by x-ray diffraction. Poor diffraction resolution (> 3Angstrom) makes crystallization a bottleneck for structural biology. All defects typical of small molecule crystals are found in biocrystals, but the defects responsible for poor resolution are not identified. Conformational changes are one of them. Biocrystallization in microgravity reportedly results in 20% cases of better crystals. The mechanism of how lack of convection can do this is still not clear. Lower supersaturation, self-purification &om preferentially trapped homologous impurities and step bunching are viable hypotheses.
Analysis of stability for stochastic delay integro-differential equations.
Zhang, Yu; Li, Longsuo
2018-01-01
In this paper, we concern stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the mean-square stability is derived by the split-step backward Euler method without any restriction on step-size, while the Euler-Maruyama method could reproduce the mean-square stability under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
Repliscan: a tool for classifying replication timing regions.
Zynda, Gregory J; Song, Jawon; Concia, Lorenzo; Wear, Emily E; Hanley-Bowdoin, Linda; Thompson, William F; Vaughn, Matthew W
2017-08-07
Replication timing experiments that use label incorporation and high throughput sequencing produce peaked data similar to ChIP-Seq experiments. However, the differences in experimental design, coverage density, and possible results make traditional ChIP-Seq analysis methods inappropriate for use with replication timing. To accurately detect and classify regions of replication across the genome, we present Repliscan. Repliscan robustly normalizes, automatically removes outlying and uninformative data points, and classifies Repli-seq signals into discrete combinations of replication signatures. The quality control steps and self-fitting methods make Repliscan generally applicable and more robust than previous methods that classify regions based on thresholds. Repliscan is simple and effective to use on organisms with different genome sizes. Even with analysis window sizes as small as 1 kilobase, reliable profiles can be generated with as little as 2.4x coverage.
Control Software for Piezo Stepping Actuators
NASA Technical Reports Server (NTRS)
Shields, Joel F.
2013-01-01
A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for smalland medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie
2016-01-01
It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming in addition to properly selecting representative morphological samples.Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 x 105) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications and we split the sample not only by disk- and bulge-dominated galaxies but also in finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.
USDA-ARS?s Scientific Manuscript database
A first step in exploring population structure in crop plants and other organisms is to define the number of subpopulations that exist for a given data set. The genetic marker data sets being generated have become increasingly large over time and commonly are the high-dimension, low sample size (HDL...
ERIC Educational Resources Information Center
Szidon, Katherine; Ruppar, Andrea; Smith, Leann
2015-01-01
Lakeview High School is a medium sized high school in a rural farming community. The staff at Lakeview meets at the beginning of each school year to discuss building-level professional development plans. This year, Lakeview's special education team has requested to focus its professional development time on improving special education services for…
Evaluation of TOPLATS on three Mediterranean catchments
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-08-01
Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight on the behavior of TOPLATS and its applicability in different conditions a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasted methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied on three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. Both Morris and Sobol methods showed very similar results that identified Brooks-Corey Pore Size distribution Index (B), Bubbling pressure (ψc) and Hydraulic conductivity decay (f) as the three overall most influential parameters in TOPLATS. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of cal/val period selection was implemented, improving model results.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation, denoted by ([InlineEquation not available: see fulltext.],[InlineEquation not available: see fulltext.]), into a forgetting factor[InlineEquation not available: see fulltext.]. For the real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using the fuzzy-inference-controlled step-size[InlineEquation not available: see fulltext.]. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.
Ribeiro, Marcos Ausenka; Martins, Milton Arruda; Carvalho, Celso R F
2014-01-01
A four-group randomized controlled trial evaluated the impact of distinct workplace interventions to increase the physical activity (PA) and to reduce anthropometric parameters in middle-age women. One-hundred and ninety-five women age 40-50 yr who were employees from a university hospital and physically inactive at their leisure time were randomly assigned to one of four groups: minimal treatment comparator (MTC; n = 47), pedometer-based individual counseling (PedIC; n = 53), pedometer-based group counseling (PedGC; n = 48), and aerobic training (AT; n = 47). The outcomes were total number of steps (primary outcome), those performed at moderate intensity (≥ 110 steps per minute), and weight and waist circumference (secondary outcomes). Evaluations were performed at baseline, at the end of a 3-month intervention, and 3 months after that. Data were presented as delta [(after 3 months-baseline) or (after 6 months-baseline)] and 95% confidence interval. To detect the differences among the groups, a one-way ANOVA and a Holm-Sidak post hoc test was used (P < 0.05). The Cohen effect size was calculated, and an intention-to-treat approach was performed. Only groups using pedometers (PedIC and PedGC) increased the total number of steps after 3 months (P < 0.05); however, the increase observed in PedGC group (1475 steps per day) was even higher than that in PedIC (512 steps per day, P < 0.05) with larger effect size (1.4). The number of steps performed at moderate intensity also increased only in the PedGC group (845 steps per day, P < 0.05). No PA benefit was observed at 6 months. Women submitted to AT did not modify PA daily life activity but reduced anthropometric parameters after 3 and 6 months (P < 0.05). Our results show that in the workplace setting, pedometer-based PA intervention with counseling is effective increasing daily life number of steps, whereas AT is effective for weight loss.
Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H
2013-08-01
Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Jingfang, Huang
2008-01-01
In this paper, a new class of numerical methods for the accurate and efficient solutions of parabolic partial differential equations is presented. Unlike traditional method of lines (MoL), the new {\\bf \\it Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T)} first discretizes the temporal direction using Gaussian type nodes and spectral integration, and symbolically applies low-order time marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error by solving amore » decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.« less
Depicting Changes in Multiple Symptoms Over Time.
Muehrer, Rebecca J; Brown, Roger L; Lanuza, Dorothy M
2015-09-01
Ridit analysis, an acronym for Relative to an Identified Distribution, is a method for assessing change in ordinal data and can be used to show how individual symptoms change or remain the same over time. The purposes of this article are to (a) describe how to use ridit analysis to assess change in a symptom measure using data from a longitudinal study, (b) give a step-by-step example of ridit analysis, (c) show the clinical relevance of applying ridit analysis, and (d) display results in an innovative graphic. Mean ridit effect sizes were calculated for the frequency and distress of 64 symptoms in lung transplant patients before and after transplant. Results were displayed in a bubble graph. Ridit analysis allowed us to maintain the specificity of individual symptoms and to show how each symptom changed or remained the same over time. The bubble graph provides an efficient way for clinicians to identify changes in symptom frequency and distress over time. © The Author(s) 2014.
Process for preparation of large-particle-size monodisperse latexes
NASA Technical Reports Server (NTRS)
Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)
1981-01-01
Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarge particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding gravity related problems of creaming and settling, and flocculation induced by mechanical shear that have precluded their preparation in a normal gravity environment.
NASA Astrophysics Data System (ADS)
He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.
2013-12-01
Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies from literatures without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales . In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74 year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root-zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome the problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales assuming fw is unchanged from first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost-function, and the coefficient of determination (R2) and Root-mean-square deviation (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that the optimized fw is best correlated linearly to soil water content at 5 to 10 cm depth. We also found that both the temporal scale or window size and the priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax but not the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to the reality though the RMSE of GPP at this window is not the minimum. It also suggests that for the accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer because the rate of leaf nitrogen concentration change is different over the season. Parameter optimizations for more sites and multi-years are in progress.
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80, 000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
NASA Astrophysics Data System (ADS)
Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas
2018-04-01
Hybrid organometal perovskites have been emerged as promising solar cell material and have exhibited solar cell efficiency more than 20%. Thin films of Methylammonium lead iodide CH3NH3PbI3 perovskite materials have been synthesized by two different (one step and two steps) methods and their morphological properties have been studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters which affect solar cell efficiency. The morphology of the films revealed that two steps method provides better surface coverage than the one step method. However, the grain sizes were smaller in case of two steps method. The films prepared by two steps methods on different substrates revealed that the grain size also depend on the substrate where an increase of the grain size was found from glass substrate to FTO with TiO2 blocking layer to FTO without any change in the surface coverage area. Present study reveals that an improved quality of films can be obtained by two steps method by an optimization of synthesis processes.
Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes
NASA Astrophysics Data System (ADS)
Calvo, M.; González-Pinto, S.; Montijano, J. I.
2008-09-01
Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based in adaptive methods that, for a user supplied tolerance [delta], attempt to advance the integration selecting the size of each step so that some measure of the local error is [similar, equals][delta]. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of [delta]. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1=h(tn;[delta]) so that h(t;[delta]) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy that does not allow to change the size of the step if the step-size ratio between two consecutive steps is close to unity is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems,
Gamble, J; Gartside, I B; Christ, F
1993-01-01
1. We have used non-invasive mercury in a silastic strain gauge system to assess the effect of pressure step size, on the time course of the rapid volume response (RVR) to occlusion pressure. We also obtained values for hydraulic conductance (Kf), isovolumetric venous pressure (Pvi) and venous pressure (Pv) in thirty-five studies on the legs of twenty-three supine control subjects. 2. The initial rapid volume response to small (9.53 +/- 0.45 mmHg, mean +/- S.E.M.) stepped increases in venous pressure, the rapid volume response, could be described by a single exponential of time constant 15.54 +/- 1.14 s. 3. Increasing the size of the pressure step, to 49.8 +/- 1.1 mmHg, gave a larger value for the RVR time constant (mean 77.3 +/- 11.6 s). 4. We propose that the pressure-dependent difference in the duration of the rapid volume response, in these two situations, might be due to a vascular smooth muscle-based mechanism, e.g. the veni-arteriolar reflex. 5. The mean (+/- S.E.M.) values for Kf, Pvi and Pv were 4.27 +/- 0.18 (units, ml min-1 (100 g)-1 mmHg-1 x 10(-3), 21.50 +/- 0.81 (units, mmHg) and 9.11 +/- 0.94 (units, mmHg), respectively. 6. During simultaneous assessment of these parameters in arms and legs, it was found that they did not differ significantly from one another. 7. We propose that the mercury strain gauge system offers a useful, non-invasive means of studying the mechanisms governing fluid filtration in human limbs. Images Fig. 1 PMID:8229810
[Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].
Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen
2013-10-01
To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry compacted, cold isostatic pressed, and pre-sintered. The pre-sintered discs were cut processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and density and grain size of the samples were measured. Afterward, T1 and/or T2 of two-step sintering ranges were measured. Effects on microstructure of different routes, which consisted of two-step sintering and conventional sintering were discussed. The influence of T1 and/or T2 on density and grain size were analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, finer microstructure of higher density and smaller grain could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not much related with T1. However, density was dependent on T2, and grain size was minimally influenced. Two-step sintering could ensure a sintering body with high density and small grain, which is good for optimizing the microstructure of dental zirconia ceramics.
Optimal setups for forced-choice staircases with fixed step sizes.
García-Pérez, M A
2000-01-01
Forced-choice staircases with fixed step sizes are used in a variety of formats whose relative merits have never been studied. This paper presents a comparative study aimed at determining their optimal format. Factors included in the study were the up/down rule, the length (number of reversals), and the size of the steps. The study also addressed the issue of whether a protocol involving three staircases running for N reversals each (with a subsequent average of the estimates provided by each individual staircase) has better statistical properties than an alternative protocol involving a single staircase running for 3N reversals. In all cases the size of a step up was different from that of a step down, in the appropriate ratio determined by García-Pérez (Vision Research, 1998, 38, 1861 - 1881). The results of a simulation study indicate that a) there are no conditions in which the 1-down/1-up rule is advisable; b) different combinations of up/down rule and number of reversals appear equivalent in terms of precision and cost: c) using a single long staircase with 3N reversals is more efficient than running three staircases with N reversals each: d) to avoid bias and attain sufficient accuracy, threshold estimates should be based on at least 30 reversals: and e) to avoid excessive cost and imprecision, the size of the step up should be between 2/3 and 3/3 the (known or presumed) spread of the psychometric function. An empirical study with human subjects confirmed the major characteristics revealed by the simulations.
Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui
2018-01-01
Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for the large size HR satellite image registration, which is based on coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method scale restrict (SR) SIFT is implemented at low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method can overcome the memory problem. In geometric SIFT, with area constraints, it is beneficial for validating the candidate matches and decreasing searching complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate of reference image via Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
Flowability of lignocellusic biomass powders: influence of torrefaction intensity
NASA Astrophysics Data System (ADS)
Pachón-Morales, John; Colin, Julien; Pierre, Floran; Champavert, Thibaut; Puel, François; Perré, Patrick
2017-06-01
The poor flowability of powders produced from raw lignocellulosic biomass may be an economically issue for the production of second-generation biofuels. Torrefaction is a pre-treatment step of the gasification process that improves the physical characteristics of biomass by making it more coal-like. Particularly, the loss of resilience allows a reduction of the grinding energy consumption and is likely to improve the flow behaviour of woody powders. In this study, we investigated the effect of particle size and shape distribution on flow properties (unconfined yield stress and flowability factor) of powder from raw and torrefied biomass (Picea abies). Several intensities of torrefaction were tested, and its extent was quantified by the global mass loss, chosen as synthetic indicator of torrefaction intensity (its accounts for both the temperature level and the residence time). The intensity of torrefaction shifts the particle size distribution towards smaller sizes. An effect on the circularity and aspect ratio was also observed. A strong, positive correlation was obtained between the measured flowability of biomass powders at different consolidation stresses and the intensity of heat treatment. These results confirm the interest of torrefaction as a pre-treatment step and aim to provide new knowledge on rheological properties of biomass powders.
Performance implications from sizing a VM on multi-core systems: A Data analytic application s view
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Horey, James L; Begoli, Edmon
In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very rst step in hosting applications in virtualized environments, requires the user to con gure the number of virtual processors and the size of memory. To understand performance implications of this step, we benchmarked three Yahoo Cloudmore » Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identi ed a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.« less
Initial growth and topography of 4,4'-biphenyldicarboxylic acid on Cu(001)
NASA Astrophysics Data System (ADS)
Poelsema, Bene; Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.
2013-03-01
We have investigated nucleation and initial growth of BDA on Cu(001) at 300 - 410K, using LEEM and μLEED. BDA condenses in a 2D supramolecular c(8 ×8) network of lying molecules. The dehydrogenated molecules form hydrogen bonds with perpendicular adjacent ones. First, the adsorbed BDA molecules form a disordered dilute phase and at a sufficiently high density, the c(8 ×8) crystalline phase nucleates. From the equilibrium densities at different temperatures we obtain the 2D phase diagram. The phase coexistence line provides a cohesive energy of 0.35 eV. LEEM allows a detailed study of nucleation and growth of BDA on Cu(001) at low supersaturation. The real time microscopic information allows a direct visualization of near-critical nuclei. At 332 K and a deposition rate of 1.4 x 10-6ML/s we find a critical nucleus size about 600 nm2. The corresponding value obtained from classic nucleation theory corresponds nicely with this direct result. We estimate the Gibbs free energy for nucleation under these conditions at 4 eV. The size fluctuations are an order of magnitude stronger than expected. At 410 K the influence of steps on the growth process becomes evident: domain growth is terminated by steps even when they are permeable for individual molecules. This leads to a novel Mullins-Sekerka type of growth instability: the growth is very fast along the steps and less fast perpendicular to the steps. The large solid angle at the advancing edge of the condensate dictates the high growth rate along the step.
Double emulsion formation through hierarchical flow-focusing microchannel
NASA Astrophysics Data System (ADS)
Azarmanesh, Milad; Farhadi, Mousa; Azizian, Pooya
2016-03-01
A microfluidic device is presented for creating double emulsions, controlling their sizes and also manipulating encapsulation processes. As a result of three immiscible liquids' interaction using dripping instability, double emulsions can be produced elegantly. Effects of dimensionless numbers are investigated which are Weber number of the inner phase (Wein), Capillary number of the inner droplet (Cain), and Capillary number of the outer droplet (Caout). They affect the formation process, inner and outer droplet size, and separation frequency. Direct numerical simulation of governing equations was done using volume of fluid method and adaptive mesh refinement technique. Two kinds of double emulsion formation, the two-step and the one-step, were simulated in which the thickness of the sheath of double emulsions can be adjusted. Altering each dimensionless number will change detachment location, outer droplet size and droplet formation period. Moreover, the decussate regime of the double-emulsion/empty-droplet is observed in low Wein. This phenomenon can be obtained by adjusting the Wein in which the maximum size of the sheath is discovered. Also, the results show that Cain has significant influence on the outer droplet size in the two-step process, while Caout affects the sheath in the one-step formation considerably.
Jo, Min Sung; Sadasivam, Karthikeyan Giri; Tawfik, Wael Z; Yang, Seung Bea; Lee, Jung Ju; Ha, Jun Seok; Moon, Young Boo; Ryu, Sang Wan; Lee, June Key
2013-01-01
n-type GaN epitaxial layers were regrown on the patterned n-type GaN substrate (PNS) with different size of silicon dioxide (SiO2) nano dots to improve the crystal quality and optical properties. PNS with SiO2 nano dots promotes epitaxial lateral overgrowth (ELOG) for defect reduction and also acts as a light scattering point. Transmission electron microscopy (TEM) analysis suggested that PNS with SiO2 nano dots have superior crystalline properties. Hall measurements indicated that incrementing values in electron mobility were clear indication of reduction in threading dislocation and it was confirmed by TEM analysis. Photoluminescence (PL) intensity was enhanced by 2.0 times and 3.1 times for 1-step and 2-step PNS, respectively.
Multi-off-grid methods in multi-step integration of ordinary differential equations
NASA Technical Reports Server (NTRS)
Beaudet, P. R.
1974-01-01
Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementationmore » in standard softwares without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.« less
NASA Technical Reports Server (NTRS)
Bokhari, S. H.; Raza, A. D.
1984-01-01
Three methods of augmenting computer networks by adding at most one link per processor are discussed: (1) A tree of N nodes may be augmented such that the resulting graph has diameter no greater than 4log sub 2((N+2)/3)-2. Thi O(N(3)) algorithm can be applied to any spanning tree of a connected graph to reduce the diameter of that graph to O(log N); (2) Given a binary tree T and a chain C of N nodes each, C may be augmented to produce C so that T is a subgraph of C. This algorithm is O(N) and may be used to produce augmented chains or rings that have diameter no greater than 2log sub 2((N+2)/3) and are planar; (3) Any rectangular two-dimensional 4 (8) nearest neighbor array of size N = 2(k) may be augmented so that it can emulate a single step shuffle-exchange network of size N/2 in 3(t) time steps.
Vasiljevic, Milica; Cartwright, Emma; Pechey, Rachel; Hollands, Gareth J; Couturier, Dominique-Laurent; Jebb, Susan A; Marteau, Theresa M
2017-01-01
An estimated one third of energy is consumed in the workplace. The workplace is therefore an important context in which to reduce energy consumption to tackle the high rates of overweight and obesity in the general population. Altering environmental cues for food selection and consumption-physical micro-environment or 'choice architecture' interventions-has the potential to reduce energy intake. The first aim of this pilot trial is to estimate the potential impact upon energy purchased of three such environmental cues (size of portions, packages and tableware; availability of healthier vs. less healthy options; and energy labelling) in workplace cafeterias. A second aim of this pilot trial is to examine the feasibility of recruiting eligible worksites, and identify barriers to the feasibility and acceptability of implementing the interventions in preparation for a larger trial. Eighteen worksite cafeterias in England will be assigned to one of three intervention groups to assess the impact on energy purchased of altering (a) portion, package and tableware size ( n = 6); (b) availability of healthier options ( n = 6); and (c) energy (calorie) labelling ( n = 6). Using a stepped wedge design, sites will implement allocated interventions at different time periods, as randomised. This pilot trial will examine the feasibility of recruiting eligible worksites, and the feasibility and acceptability of implementing the interventions in preparation for a larger trial. In addition, a series of linear mixed models will be used to estimate the impact of each intervention on total energy (calories) purchased per time frame of analysis (daily or weekly) controlling for the total sales/transactions adjusted for calendar time and with random effects for worksite. These analyses will allow an estimate of an effect size of each of the three proposed interventions, which will form the basis of the sample size calculations necessary for a larger trial. ISRCTN52923504.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time to event endpoint. In fact, it is necessary to compute the number of events and in a second step the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time to event endpoints, but there are a few R functions implemented. The purpose of this paper is to describe features and use of the R function. plansurvct.func, which is an add-on function to the package gsDesign which permits in one run of the program to calculate the number of events, and required sample size but also boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Temperature controlled retinal photocoagulation
NASA Astrophysics Data System (ADS)
Schlott, Kerstin; Koinzer, Stefan; Baade, Alexander; Birngruber, Reginald; Roider, Johann; Brinkmann, Ralf
2013-06-01
Retinal photocoagulation lacks objective dosage in clinical use, thus the commonly applied lesions are too deep and strong, associated with pain reception and the risk of visual field defects and induction of choroidal neovascularisations. Optoacoustics allows real-time non-invasive temperature measurement in the fundus during photocoagulation by applying short probe laser pulses additionally to the treatment radiation, which excite the emission of ultrasonic waves. Due to the temperature dependence of the Grüneisen parameter, the amplitudes of the ultrasonic waves can be used to derive the temperature of the absorbing tissue. By measuring the temperatures in real-time and automatically controlling the irradiation by feedback to the treatment laser, the strength of the lesions can be defined. Different characteristic functions for the time and temperature dependent lesion sizes were used as rating curves for the treatment laser, stopping the irradiation automatically after a desired lesion size is achieved. The automatically produced lesion sizes are widely independent of the adjusted treatment laser power and individual absorption. This study was performed on anaesthetized rabbits and is a step towards a clinical trial with automatically controlled photocoagulation.
On the development of efficient algorithms for three dimensional fluid flow
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1988-01-01
The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exist between the severely time-step sized-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.
Common Bolted Joint Analysis Tool
NASA Technical Reports Server (NTRS)
Imtiaz, Kauser
2011-01-01
Common Bolted Joint Analysis Tool (comBAT) is an Excel/VB-based bolted joint analysis/optimization program that lays out a systematic foundation for an inexperienced or seasoned analyst to determine fastener size, material, and assembly torque for a given design. Analysts are able to perform numerous what-if scenarios within minutes to arrive at an optimal solution. The program evaluates input design parameters, performs joint assembly checks, and steps through numerous calculations to arrive at several key margins of safety for each member in a joint. It also checks for joint gapping, provides fatigue calculations, and generates joint diagrams for a visual reference. Optimum fastener size and material, as well as correct torque, can then be provided. Analysis methodology, equations, and guidelines are provided throughout the solution sequence so that this program does not become a "black box:" for the analyst. There are built-in databases that reduce the legwork required by the analyst. Each step is clearly identified and results are provided in number format, as well as color-coded spelled-out words to draw user attention. The three key features of the software are robust technical content, innovative and user friendly I/O, and a large database. The program addresses every aspect of bolted joint analysis and proves to be an instructional tool at the same time. It saves analysis time, has intelligent messaging features, and catches operator errors in real time.
Direct and continuous synthesis of VO2 nanoparticles
NASA Astrophysics Data System (ADS)
Powell, M. J.; Marchand, P.; Denis, C. J.; Bear, J. C.; Darr, J. A.; Parkin, I. P.
2015-11-01
Monoclinic VO2 nanoparticles are of interest due to the material's thermochromic properties, however, direct synthesis routes to VO2 nanoparticles are often inaccessible due to the high synthesis temperatures or long reaction times required. Herein, we present a two-step synthesis route for the preparation of monoclinic VO2 nanoparticles using Continuous Hydrothermal Flow Synthesis (CHFS) followed by a short post heat treatment step. A range of particle sizes, dependent on synthesis conditions, were produced from 50 to 200 nm by varying reaction temperatures and the residence times in the process. The nanoparticles were characterised by powder X-ray diffraction, Raman and UV/Vis spectroscopy, transmission electron microscopy (TEM), scanning electron microscopy (SEM) and differential scanning calorimetry (DSC). The nanoparticles were highly crystalline with rod and sphere-like morphologies present in TEM micrographs, with the size of both the rod and spherical particles being highly dependent on both reaction temperature and residence time. SEM micrographs showed the surface of the powders produced from the CHFS process to be highly uniform. The samples were given a short post synthesis heat treatment to ensure that they were phase pure monoclinic VO2, which led to them exhibiting a large and reversible switch in optical properties (at near-IR wavelengths), which suggests that if such materials can be incorporated into coatings or in composites, they could be used for fenestration in architectural applications.
Capturing Pressure Oscillations in Numerical Simulations of Internal Combustion Engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gubba, Sreenivasa Rao; Jupudi, Ravichandra S.; Pasunurthi, Shyam Sundar
In an earlier publication, the authors compared numerical predictions of the mean cylinder pressure of diesel and dual-fuel combustion, to that of measured pressure data from a medium-speed, large-bore engine. In these earlier comparisons, measured data from a flush-mounted in-cylinder pressure transducer showed notable and repeatable pressure oscillations which were not evident in the mean cylinder pressure predictions from computational fluid dynamics (CFD). In this paper, the authors present a methodology for predicting and reporting the local cylinder pressure consistent with that of a measurement location. Such predictions for large-bore, medium-speed engine operation demonstrate pressure oscillations in accordance with thosemore » measured. The temporal occurrences of notable pressure oscillations were during the start of combustion and around the time of maximum cylinder pressure. With appropriate resolutions in time steps and mesh sizes, the local cell static pressure predicted for the transducer location showed oscillations in both diesel and dual-fuel combustion modes which agreed with those observed in the experimental data. Fast Fourier transform (FFT) analysis on both experimental and calculated pressure traces revealed that the CFD predictions successfully captured both the amplitude and frequency range of the oscillations. Furthermore, resolving propagating pressure waves with the smaller time steps and grid sizes necessary to achieve these results required a significant increase in computer resources.« less
Direct and continuous synthesis of VO2 nanoparticles.
Powell, M J; Marchand, P; Denis, C J; Bear, J C; Darr, J A; Parkin, I P
2015-11-28
Monoclinic VO2 nanoparticles are of interest due to the material's thermochromic properties, however, direct synthesis routes to VO2 nanoparticles are often inaccessible due to the high synthesis temperatures or long reaction times required. Herein, we present a two-step synthesis route for the preparation of monoclinic VO2 nanoparticles using Continuous Hydrothermal Flow Synthesis (CHFS) followed by a short post heat treatment step. A range of particle sizes, dependent on synthesis conditions, were produced from 50 to 200 nm by varying reaction temperatures and the residence times in the process. The nanoparticles were characterised by powder X-ray diffraction, Raman and UV/Vis spectroscopy, transmission electron microscopy (TEM), scanning electron microscopy (SEM) and differential scanning calorimetry (DSC). The nanoparticles were highly crystalline with rod and sphere-like morphologies present in TEM micrographs, with the size of both the rod and spherical particles being highly dependent on both reaction temperature and residence time. SEM micrographs showed the surface of the powders produced from the CHFS process to be highly uniform. The samples were given a short post synthesis heat treatment to ensure that they were phase pure monoclinic VO2, which led to them exhibiting a large and reversible switch in optical properties (at near-IR wavelengths), which suggests that if such materials can be incorporated into coatings or in composites, they could be used for fenestration in architectural applications.
Capturing Pressure Oscillations in Numerical Simulations of Internal Combustion Engines
Gubba, Sreenivasa Rao; Jupudi, Ravichandra S.; Pasunurthi, Shyam Sundar; ...
2018-04-09
In an earlier publication, the authors compared numerical predictions of the mean cylinder pressure of diesel and dual-fuel combustion, to that of measured pressure data from a medium-speed, large-bore engine. In these earlier comparisons, measured data from a flush-mounted in-cylinder pressure transducer showed notable and repeatable pressure oscillations which were not evident in the mean cylinder pressure predictions from computational fluid dynamics (CFD). In this paper, the authors present a methodology for predicting and reporting the local cylinder pressure consistent with that of a measurement location. Such predictions for large-bore, medium-speed engine operation demonstrate pressure oscillations in accordance with thosemore » measured. The temporal occurrences of notable pressure oscillations were during the start of combustion and around the time of maximum cylinder pressure. With appropriate resolutions in time steps and mesh sizes, the local cell static pressure predicted for the transducer location showed oscillations in both diesel and dual-fuel combustion modes which agreed with those observed in the experimental data. Fast Fourier transform (FFT) analysis on both experimental and calculated pressure traces revealed that the CFD predictions successfully captured both the amplitude and frequency range of the oscillations. Furthermore, resolving propagating pressure waves with the smaller time steps and grid sizes necessary to achieve these results required a significant increase in computer resources.« less
NASA Astrophysics Data System (ADS)
Chahrour, Khaled M.; Ahmed, Naser M.; Hashim, M. R.; Elfadill, Nezar G.; Maryam, W.; Ahmad, M. A.; Bououdina, M.
2015-12-01
Highly-ordered and hexagonal-shaped nanoporous anodic aluminum oxide (AAO) of 1 μm thickness of Al pre-deposited onto Si substrate using two-step anodization was successfully fabricated. The growth mechanism of the porous AAO film was investigated by anodization current-time behavior for different anodizing voltages and by visualizing the microstructural procedure of the fabrication of AAO film by two-step anodization using cross-sectional and top view of FESEM imaging. Optimum conditions of the process variables such as annealing time of the as-deposited Al thin film and pore widening time of porous AAO film were experimentally determined to obtain AAO films with uniformly distributed and vertically aligned porous microstructure. Pores with diameter ranging from 50 nm to 110 nm and thicknesses between 250 nm and 1400 nm, were obtained by controlling two main influential anodization parameters: the anodizing voltage and time of the second-step anodization. X-ray diffraction analysis reveals amorphous-to-crystalline phase transformation after annealing at temperatures above 800 °C. AFM images show optimum ordering of the porous AAO film anodized under low voltage condition. AAO films may be exploited as templates with desired size distribution for the fabrication of CuO nanorod arrays. Such nanostructured materials exhibit unique properties and hold high potential for nanotechnology devices.
Stepped frequency ground penetrating radar
Vadnais, Kenneth G.; Bashforth, Michael B.; Lewallen, Tricia S.; Nammath, Sharyn R.
1994-01-01
A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
Automated detection and analysis of particle beams in laser-plasma accelerator simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela Mayumi; Geddes, C.G.; Cormier-Michel, E.
Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the process involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Understanding of such accelerators, and their development, offer high accelerating gradients, potentially reducing size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread inmore » momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense X-ray radiation to terahertz sources, and are used in many applications including medical radiotherapy and imaging. To extract information from the simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the adequate parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive data sets over many time steps using several plots, a routine that is unfeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1GB and 100GB in size, but simulations in 3D are of the order of TBs. The increase in the number of datasets and dataset sizes leads to a need for automatic routines to recognize particle patterns as particle bunches (beam of electrons) for subsequent analysis. Because of the growth in dataset size, the application of machine learning techniques for scientific data mining is increasingly considered. In plasma simulations, Bagherjeiran et al. presented a comprehensive report on applying graph-based techniques for orbit classification. They used the KAM classifier to label points and components in single and multiple orbits. Love et al. conducted an image space analysis of coherent structures in plasma simulations. They used a number of segmentation and region-growing techniques to isolate regions of interest in orbit plots. Both approaches analyzed particle accelerator data, targeting the system dynamics in terms of particle orbits. However, they did not address particle dynamics as a function of time or inspected the behavior of bunches of particles. Ruebel et al. addressed the visual analysis of massive laser wakefield acceleration (LWFA) simulation data using interactive procedures to query the data. Sophisticated visualization tools were provided to inspect the data manually. Ruebel et al. have integrated these tools to the visualization and analysis system VisIt, in addition to utilizing efficient data management based on HDF5, H5Part, and the index/query tool FastBit. In Ruebel et al. proposed automatic beam path analysis using a suite of methods to classify particles in simulation data and to analyze their temporal evolution. To enable researchers to accurately define particle beams, the method computes a set of measures based on the path of particles relative to the distance of the particles to a beam. 
To achieve good performance, this framework uses an analysis pipeline designed to quickly reduce the amount of data that needs to be considered in the actual path distance computation. As part of this process, region-growing methods are utilized to detect particle bunches at single time steps. Efficient data reduction is essential to enable automated analysis of large data sets as described in the next section, where data reduction methods are steered to the particular requirements of our clustering analysis. Previously, we have described the application of a set of algorithms to automate the data analysis and classification of particle beams in the LWFA simulation data, identifying locations with high density of high energy particles. These algorithms detected high density locations (nodes) in each time step, i.e. maximum points on the particle distribution for only one spatial variable. Each node was correlated to a node in previous or later time steps by linking these nodes according to a pruned minimum spanning tree (PMST). We call the PMST representation 'a lifetime diagram', which is a graphical tool to show temporal information of high dense groups of particles in the longitudinal direction for the time series. Electron bunch compactness was described by another step of the processing, designed to partition each time step, using fuzzy clustering, into a fixed number of clusters.« less
Instructional versus schedule control of humans' choices in situations of diminishing returns
Hackenberg, Timothy D.; Joker, Veronica R.
1994-01-01
Four adult humans chose repeatedly between a fixed-time schedule (of points later exchangeable for money) and a progressive-time schedule that began at 0 s and increased by a fixed number of seconds with each point delivered by that schedule. Each point delivered by the fixed-time schedule reset the requirements of the progressive-time schedule to its minimum value. Subjects were provided with instructions that specified a particular sequence of choices. Under the initial conditions, the instructions accurately specified the optimal choice sequence. Thus, control by instructions and optimal control by the programmed contingencies both supported the same performance. To distinguish the effects of instructions from schedule sensitivity, the correspondence between the instructed and optimal choice patterns was gradually altered across conditions by varying the step size of the progressive-time schedule while maintaining the same instructions. Step size was manipulated, typically in 1-s units, first in an ascending and then in a descending sequence of conditions. Instructions quickly established control in all 4 subjects but, by narrowing the range of choice patterns, they reduced subsequent sensitivity to schedule changes. Instructional control was maintained across the ascending sequence of progressive-time values for each subject, but eventually diminished, giving way to more schedule-appropriate patterns. The transition from instruction-appropriate to schedule-appropriate behavior was characterized by an increase in the variability of choice patterns and local increases in point density. On the descending sequence of progressive-time values, behavior appeared to be schedule sensitive, sometimes even optimally sensitive, but it did not always change systematically with the contingencies, suggesting the involvement of other factors. PMID:16812747
Analysis Techniques for Microwave Dosimetric Data.
1985-10-01
the number of steps in the frequency list . 0062 C ----------------------------------------------------------------------- 0063 CALL FILE2() 0064...starting frequency, 0061 C the step size, and the number of steps in the frequency list . 0062 C
NASA Astrophysics Data System (ADS)
Zhu, Minjie; Scott, Michael H.
2017-07-01
Accurate and efficient response sensitivities for fluid-structure interaction (FSI) simulations are important for assessing the uncertain response of coastal and off-shore structures to hydrodynamic loading. To compute gradients efficiently via the direct differentiation method (DDM) for the fully incompressible fluid formulation, approximations of the sensitivity equations are necessary, leading to inaccuracies of the computed gradients when the geometry of the fluid mesh changes rapidly between successive time steps or the fluid viscosity is nonzero. To maintain accuracy of the sensitivity computations, a quasi-incompressible fluid is assumed for the response analysis of FSI using the particle finite element method and DDM is applied to this formulation, resulting in linearized equations for the response sensitivity that are consistent with those used to compute the response. Both the response and the response sensitivity can be solved using the same unified fractional step method. FSI simulations show that although the response using the quasi-incompressible and incompressible fluid formulations is similar, only the quasi-incompressible approach gives accurate response sensitivity for viscous, turbulent flows regardless of time step size.
Testing electroexplosive devices by programmed pulsing techniques
NASA Technical Reports Server (NTRS)
Rosenthal, L. A.; Menichelli, V. J.
1976-01-01
A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.
Lee, Sang-Jin; Jung, Choong-Hwan
2012-01-01
Nano-sized yttria (Y2O3) powders were successfully synthesized at a low temperature of 400 degrees C by a simple polymer solution route. PVA polymer, as an organic carrier, contributed to an atom-scale homogeneous precursor gel and it resulted in fully crystallized, nano-sized yttria powder with high specific surface area through the low temperature calcination. In this process, the content of PVA, calcination temperature and heating time affected the microstructure and crystallization behavior of the powders. The development of crystalline phase and the final particle size were strongly dependant on the oxidation reaction from the polymer burn-out step and the PVA content. In this paper, the PVA solution technique for the fabrication of nano-sized yttria powders is introduced. The effects of PVA content and holding time on the powder morphology and powder specific surface area are also studied. The characterization of the synthesized powders is examined by using XRD, DTA/TG, SEM, TEM and nitrogen gas adsorption. The yttria powder synthesized from the PVA content of 3:1 ratio and calcined at 400 degrees C had a crystallite size of about 20 nm or less with a high surface areas of 93.95-120.76 m2 g(-1).
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
Scanning tunneling microscope with a rotary piezoelectric stepping motor
NASA Astrophysics Data System (ADS)
Yakimov, V. N.
1996-02-01
A compact scanning tunneling microscope (STM) with a novel rotary piezoelectric stepping motor for coarse positioning has been developed. An inertial method for rotating of the rotor by the pair of piezoplates has been used in the piezomotor. Minimal angular step size was about several arcsec with the spindle working torque up to 1 N×cm. Design of the STM was noticeably simplified by utilization of the piezomotor with such small step size. A shaft eccentrically attached to the piezomotor spindle made it possible to push and pull back the cylindrical bush with the tubular piezoscanner. A linear step of coarse positioning was about 50 nm. STM resolution in vertical direction was better than 0.1 nm without an external vibration isolation.
Lee, Yoo-Jung; Seo, Tae Hoon; Lee, Seula; Jang, Wonhee; Kim, Myung Jong; Sung, Jung-Suk
2018-01-01
Graphene is a noncytotoxic monolayer platform with unique physical, chemical, and biological properties. It has been demonstrated that graphene substrate may provide a promising biocompatible scaffold for stem cell therapy. Because chemical vapor deposited graphene has a two dimensional polycrystalline structure, it is important to control the individual domain size to obtain desirable properties for nano-material. However, the biological effects mediated by differences in domain size of graphene have not yet been reported. On the basis of the control of graphene domain achieved by one-step growth (1step-G, small domain) and two-step growth (2step-G, large domain) process, we found that the neuronal differentiation of bone marrow-derived human mesenchymal stem cells (hMSCs) highly depended on the graphene domain size. The defects at the domain boundaries in 1step-G graphene was higher (×8.5) and had a relatively low (13% lower) contact angle of water droplet than 2step-G graphene, leading to enhanced cell-substrate adhesion and upregulated neuronal differentiation of hMSCs. We confirmed that the strong interactions between cells and defects at the domain boundaries in 1step-G graphene can be obtained due to their relatively high surface energy, which is stronger than interactions between cells and graphene surfaces. Our results may provide valuable information on the development of graphene-based scaffold by understanding which properties of graphene domain influence cell adhesion efficacy and stem cell differentiation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 106A: 43-51, 2018. © 2017 Wiley Periodicals, Inc.
Multipinhole SPECT helical scan parameters and imaging volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang
Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluatedmore » by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show optimal resolution is achieved when the axial step size is the half, and the angular step size is about twice the corresponding values derived from the Nyquist theorem. The SCP results agree in general with that of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.« less
Sedimentary Geothermal Feasibility Study: October 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad; Zerpa, Luis
The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed, and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step and reservoir areal dimensions; the latter related to boundarymore » effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.« less
Faria, Eliney F; Caputo, Peter A; Wood, Christopher G; Karam, Jose A; Nogueras-González, Graciela M; Matin, Surena F
2014-02-01
Laparoscopic and robotic partial nephrectomy (LPN and RPN) are strongly related to influence of tumor complexity and learning curve. We analyzed a consecutive experience between RPN and LPN to discern if warm ischemia time (WIT) is in fact improved while accounting for these two confounding variables and if so by which particular aspect of WIT. This is a retrospective analysis of consecutive procedures performed by a single surgeon between 2002-2008 (LPN) and 2008-2012 (RPN). Specifically, individual steps, including tumor excision, suturing of intrarenal defect, and parenchyma, were recorded at the time of surgery. Multivariate and univariate analyzes were used to evaluate influence of learning curve, tumor complexity, and time kinetics of individual steps during WIT, to determine their influence in WIT. Additionally, we considered the effect of RPN on the learning curve. A total of 146 LPNs and 137 RPNs were included. Considering renal function, WIT, suturing time, renorrhaphy time were found statistically significant differences in favor of RPN (p < 0.05). In the univariate analysis, surgical procedure, learning curve, clinical tumor size, and RENAL nephrometry score were statistically significant predictors for WIT (p < 0.05). RPN decreased the WIT on average by approximately 7 min compared to LPN even when adjusting for learning curve, tumor complexity, and both together (p < 0.001). We found RPN was associated with a shorter WIT when controlling for influence of the learning curve and tumor complexity. The time required for tumor excision was not shortened but the time required for suturing steps was significantly shortened.
Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models
Miller, Craig R.; Joyce, Paul; Wichman, Holly A.
2011-01-01
Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (104 and 106), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-10-01
This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-01-01
BACKGROUND This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:25506115
Putnick, Diane L.; Bornstein, Marc H.
2016-01-01
Measurement invariance assesses the psychometric equivalence of a construct across groups or across time. Measurement noninvariance suggests that a construct has a different structure or meaning to different groups or on different measurement occasions in the same group, and so the construct cannot be meaningfully tested or construed across groups or across time. Hence, prior to testing mean differences across groups or measurement occasions (e.g., boys and girls, pretest and posttest), or differential relations of the construct across groups, it is essential to assess the invariance of the construct. Conventions and reporting on measurement invariance are still in flux, and researchers are often left with limited understanding and inconsistent advice. Measurement invariance is tested and established in different steps. This report surveys the state of measurement invariance testing and reporting, and details the results of a literature review of studies that tested invariance. Most tests of measurement invariance include configural, metric, and scalar steps; a residual invariance step is reported for fewer tests. Alternative fit indices (AFIs) are reported as model fit criteria for the vast majority of tests; χ2 is reported as the single index in a minority of invariance tests. Reporting AFIs is associated with higher levels of achieved invariance. Partial invariance is reported for about one-third of tests. In general, sample size, number of groups compared, and model size are unrelated to the level of invariance achieved. Implications for the future of measurement invariance testing, reporting, and best practices are discussed. PMID:27942093
In vivo myosin step-size from zebrafish skeletal muscle
Ajtai, Katalin; Sun, Xiaojing; Takubo, Naoko; Wang, Yihua
2016-01-01
Muscle myosins transduce ATP free energy into actin displacement to power contraction. In vivo, myosin side chains are modified post-translationally under native conditions, potentially impacting function. Single myosin detection provides the ‘bottom-up’ myosin characterization probing basic mechanisms without ambiguities inherent to ensemble observation. Macroscopic muscle physiological experimentation provides the definitive ‘top-down’ phenotype characterizations that are the concerns in translational medicine. In vivo single myosin detection in muscle from zebrafish embryo models for human muscle fulfils ambitions for both bottom-up and top-down experimentation. A photoactivatable green fluorescent protein (GFP)-tagged myosin light chain expressed in transgenic zebrafish skeletal muscle specifically modifies the myosin lever-arm. Strychnine induces the simultaneous contraction of the bilateral tail muscles in a live embryo, causing them to be isometric while active. Highly inclined thin illumination excites the GFP tag of single lever-arms and its super-resolution orientation is measured from an active isometric muscle over a time sequence covering many transduction cycles. Consecutive frame lever-arm angular displacement converts to step-size by its product with the estimated lever-arm length. About 17% of the active myosin steps that fall between 2 and 7 nm are implicated as powerstrokes because they are beyond displacements detected from either relaxed or ATP-depleted (rigor) muscle. PMID:27249818
Muscle segmentation in time series images of Drosophila metamorphosis.
Yadav, Kuleesha; Lin, Feng; Wasser, Martin
2015-01-01
In order to study genes associated with muscular disorders, we characterize the phenotypic changes in Drosophila muscle cells during metamorphosis caused by genetic perturbations. We collect in vivo images of muscle fibers during remodeling of larval to adult muscles. In this paper, we focus on the new image processing pipeline designed to quantify the changes in shape and size of muscles. We propose a new two-step approach to muscle segmentation in time series images. First, we implement a watershed algorithm to divide the image into edge-preserving regions, and then, we classify these regions into muscle and non-muscle classes on the basis of shape and intensity. The advantage of our method is two-fold: First, better results are obtained because classification of regions is constrained by the shape of muscle cell from previous time point; and secondly, minimal user intervention results in faster processing time. The segmentation results are used to compare the changes in cell size between controls and reduction of the autophagy related gene Atg 9 during Drosophila metamorphosis.
NASA Astrophysics Data System (ADS)
Wolfram, Markus; König, Stephan; Bandelow, Steffi; Fischer, Paul; Jankowski, Alexander; Marx, Gerrit; Schweikhard, Lutz
2018-02-01
Lead clusters {{{{Pb}}}{n}}+/- in the size range between about n = 15 and 40 have recently shown to exhibit complex dissociation spectra due to sequential and competing decays. In order to disentangle the pathways the exemplary {{{{Pb}}}31}+ clusters have been stored and size selected in a Penning trap and irradiated by nanosecond laser pulses. We present time-resolved measurements at time scales from several tens of microseconds to several hundreds of milliseconds. The study results in strong evidence that {{{{Pb}}}31}+ decays not only by neutral monomer evaporation but also by neutral heptamers breaking off. In addition, the decays are further followed to smaller products. The corresponding decay and growth times show that {{{{Pb}}}30}+ also dissociates by either monomer evaporation or heptamer break-off. Furthermore, the product {{{{Pb}}}17}+ may well be a result of heptamer break-off from {{{{Pb}}}24}+—as the second step of a sequential heptamer decay.
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1978-01-01
Procedures for gathering ground truth information for a supervised approach to a computer-implemented land cover classification of LANDSAT acquired multispectral scanner data are provided in a step by step manner. Criteria for determining size, number, uniformity, and predominant land cover of training sample sites are established. Suggestions are made for the organization and orientation of field team personnel, the procedures used in the field, and the format of the forms to be used. Estimates are made of the probable expenditures in time and costs. Examples of ground truth forms and definitions and criteria of major land cover categories are provided in appendixes.
Data Processing for Atmospheric Phase Interferometers
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Nessel, James A.; Morabito, David D.
2009-01-01
This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
Solvable continuous-time random walk model of the motion of tracer particles through porous media.
Fouxon, Itzhak; Holzner, Markus
2016-08-01
We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015)PLEEE81539-375510.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random and the walker's velocity at consecutive steps of the walk is conserved with finite probability, mimicking that at the turning point there could be no abrupt change of velocity. We provide the Laplace transform of the characteristic function of the walker's position and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, which was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
Controlling the anodizing conditions in preparation of an nanoporous anodic aluminium oxide template
NASA Astrophysics Data System (ADS)
Nazemi, Azadeh; Abolfazl, Seyed; Sadjadi, Seyed
2014-12-01
Porous anodic aluminium oxide (AAO) template is commonly used in the synthesis of one-dimensional nanostructures, such as nanowires and nanorods, due to its simple fabrication process. Controlling the anodizing conditions is important because of their direct influence on the size of AAO template pores; it affects the size of nanostructures that are fabricated in AAO template. In present study, several alumina templates were fabricated by a two-step electrochemical anodization in different conditions, such as the time of first process, its voltage, and electrolyte concentration. The effect of these factors on pore diameters of AAO templates was investigated using scanning electron microscopy (SEM).
Simulating galactic dust grain evolution on a moving mesh
NASA Astrophysics Data System (ADS)
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
Convergance experiments with a hydrodynamic model of Port Royal Sound, South Carolina
Lee, J.K.; Schaffranek, R.W.; Baltzer, R.A.
1989-01-01
A two-demensional, depth-averaged, finite-difference, flow/transport model, SIM2D, is being used to simulate tidal circulation and transport in the Port Royal Sound, South Carolina, estuarine system. Models of a subregion of the Port Royal Sound system have been derived from an earlier-developed model of the entire system having a grid size of 600 ft. The submodels were implemented with grid sizes of 600, 300, and 150 ft in order to determine the effects of changes in grid size on computed flows in the subregion, which is characterized by narrow channels and extensive tidal flats that flood and dewater with each rise and fall of the tide. Tidal amplitudes changes less than 5 percent as the grid size was decreased. Simulations were performed with the 300-foot submodel for time steps of 60, 30, and 15 s. Study results are discussed.
Damianos, Konstantina; Ferrando, Riccardo
2012-02-21
The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms, and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate for smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface in which vertical leaflets prevail. With increasing cluster size pyramidal hollow cages begin to compete against leaflet structures. Cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer. This tetrahedron is however quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) show the same qualitative trends. This journal is © The Royal Society of Chemistry 2012
Mouse Liver Mitochondria Isolation, Size Fractionation, and Real-time MOMP Measurement.
Renault, Thibaud T; Luna-Vargas, Mark P A; Chipuk, Jerry E
2016-08-05
The mitochondrial pathway of apoptosis involves a complex interplay between dozens of proteins and lipids, and is also dependent on the shape and size of mitochondria. The use of cellular models in past studies has not been ideal for investigating how the complex multi-factor interplay regulates the molecular mechanisms of mitochondrial outer membrane permeabilization (MOMP). Isolated systems have proven to be a paradigm to deconstruct MOMP into individual steps and to study the behavior of each subset of MOMP regulators. In particular, isolated mitochondria are key to in vitro studies of the BCL-2 family proteins, a complex family of pro-survival and pro-apoptotic proteins that directly control the mitochondrial pathway of apoptosis (Renault et al ., 2013). In this protocol, we describe three complementary procedures for investigating in real-time the effects of MOMP regulators using isolated mitochondria. The first procedure is "Liver mitochondria isolation" in which the liver is dissected from mice to obtain mitochondria. "Mitochondria labeling with JC-1 and size fractionation" is the second procedure that describes a method to label, fractionate by size and standardize subpopulations of mitochondria. Finally, the "Real-time MOMP measurements" protocol allows to follow MOMP in real-time on isolated mitochondria. The aforementioned procedures were used to determine in vitro the role of mitochondrial membrane shape at the level of isolated cells and isolated mitochondria (Renault et al ., 2015).
Mouse Liver Mitochondria Isolation, Size Fractionation, and Real-time MOMP Measurement
Renault, Thibaud T.; Luna-Vargas, Mark P.A.; Chipuk, Jerry E.
2016-01-01
The mitochondrial pathway of apoptosis involves a complex interplay between dozens of proteins and lipids, and is also dependent on the shape and size of mitochondria. The use of cellular models in past studies has not been ideal for investigating how the complex multi-factor interplay regulates the molecular mechanisms of mitochondrial outer membrane permeabilization (MOMP). Isolated systems have proven to be a paradigm to deconstruct MOMP into individual steps and to study the behavior of each subset of MOMP regulators. In particular, isolated mitochondria are key to in vitro studies of the BCL-2 family proteins, a complex family of pro-survival and pro-apoptotic proteins that directly control the mitochondrial pathway of apoptosis (Renault et al., 2013). In this protocol, we describe three complementary procedures for investigating in real-time the effects of MOMP regulators using isolated mitochondria. The first procedure is “Liver mitochondria isolation” in which the liver is dissected from mice to obtain mitochondria. “Mitochondria labeling with JC-1 and size fractionation” is the second procedure that describes a method to label, fractionate by size and standardize subpopulations of mitochondria. Finally, the “Real-time MOMP measurements” protocol allows to follow MOMP in real-time on isolated mitochondria. The aforementioned procedures were used to determine in vitro the role of mitochondrial membrane shape at the level of isolated cells and isolated mitochondria (Renault et al., 2015). PMID:28093578
NASA Astrophysics Data System (ADS)
Subara, Deni; Jaswir, Irwandi; Alkhatib, Maan Fahmi Rashid; Noorbatcha, Ibrahim Ali
2018-01-01
The aim of this experiment is to screen and to understand the process variables on the fabrication of fish gelatin nanoparticles by using quality-design approach. The most influencing process variables were screened by using Plackett-Burman design. Mean particles size, size distribution, and zeta potential were found in the range 240±9.76 nm, 0.3, and -9 mV, respectively. Statistical results explained that concentration of acetone, pH of solution during precipitation step and volume of cross linker had a most significant effect on particles size of fish gelatin nanoparticles. It was found that, time and chemical consuming is lower than previous research. This study revealed the potential of quality-by design in understanding the effects of process variables on the fish gelatin nanoparticles production.
Melting behavior of nanometer sized gold isomers
NASA Astrophysics Data System (ADS)
Liu, H. B.; Ascencio, J. A.; Perez-Alvarez, M.; Yacaman, M. J.
2001-09-01
In the present work, the melting behavior of nanometer sized gold isomers was studied using a tight-binding potential with a second momentum approximation. The cases of cuboctahedra, icosahedra, Bagley decahedra, Marks decahedra and star-like decahedra were considered. We calculated the temperature dependence of the total energy and volume during melting and the melting point for different types and sizes of clusters. In addition, the structural evolutions of the nanosized clusters during the melting transition were monitored and revealed. It is found that the melting process has three characteristic time periods for the intermediate nanosized clusters. The whole process includes surface disordering and reordering, followed by surface melting and a final rapid overall melting. This is a new observation, which it is in contrast with previous reports where surface melting is the dominant step.
Rock sampling. [method for controlling particle size distribution
NASA Technical Reports Server (NTRS)
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
On coupling fluid plasma and kinetic neutral physics models
Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...
2017-03-01
The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that theymore » scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step and methods that better precondition the coupled system are under investigation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, C; Robertson, A; Hullinger, P
2006-10-24
The epidemiological and economic modeling of livestock diseases requires knowing the size, location, and operational type of each livestock facility within the US. At the present time, the only national database of livestock facilities that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data.' The NASS data provides facility data at the county level for various livestock types (i.e., beef cows, milk cows, cattle on feed, other cattle, total hogs and pigs, sheep and lambs, milk goats, and angora goats). However, themore » number and sizes of facilities for the various livestock types are not independent since some facilities have more than one type of livestock, and some livestock are of more than one type (e.g., 'other cattle' that are being fed for slaughter are also 'cattle on feed'). In addition, any data tabulated by NASS that could identify numbers of animals or other data reported by an individual respondent is suppressed by NASS and coded with a 'D.'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or animals. At the same time, it must account for all the animals, including those for which the data has been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual livestock operations. This technical report documents data processing steps that were used to convert the NASS data into a national livestock facility database with twenty-eight facility types. The process involves two major steps. The first step defines the rules used to estimate the data that is suppressed within the NASS database. The second step converts the NASS livestock types into the operational facility types used by the epidemiological and economic model. Comparison of the resulting database with an independent survey of farms in central California shows excellent agreement between the numbers of farms for the various facility types. This suggests that the NASS data are well suited for providing a consistent set of county-level information on facility numbers and sizes that can be used in epidemiological and economic models.« less
SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Defoor, D; Kabat, C; Papanikolaou, N
Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 secondsmore » while the mean treatment time for step & shoot was 256 ± 149 seconds. Patient’s treatment times showed a variation of 0.4% over there treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm2 and 26 cm2 for VMAT and step & shoot respectively. VMAT beams contained and average overall leaf travel of 34.17 meters and step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47. Step & shoot plans showed an average gamma passing percentage of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patients treatment with only slight variations in patient treatment times and a low rate of MLC errors.« less
Unified gas-kinetic scheme with multigrid convergence for rarefied flow study
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2017-09-01
The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from the kinetic particle transport to the hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, convergent speed has been greatly improved in all flow regimes from rarefied to the continuum ones. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflow, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high speed one, such as supersonic flow over a square cylinder. The MIUGKS shows 5-9 times efficiency increase over the previous implicit scheme. For the low speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than the DSMC. Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method for obtaining a convergent steady state solution.
A 2D Model of Hydraulic Fracturing, Damage and Microseismicity
NASA Astrophysics Data System (ADS)
Wangen, Magnus
2018-03-01
We present a model for hydraulic fracturing and damage of low-permeable rock. It computes the intermittent propagation of rock damage, microseismic event locations, microseismic frequency-magnitude distributions, stimulated rock volume and the injection pressure. The model uses a regular 2D grid and is based on ideas from invasion percolation. All damaged and connected cells during a time step constitute a microseismic event, where the size of the event is the number of cells in the cluster. The magnitude of the event is the log _{10} of the event size. The model produces events with a magnitude-frequency distribution having a b value that is approximately 0.8. The model is studied with respect to the physical parameters: permeability of damaged rock and the rock strength. "High" permeabilities of the damaged rock give the same b value ≈ 0.8, but "moderate" permeabilities give higher b values. Another difference is that "high" permeabilities produce a percolation-like fracture network, while "moderate" permeabilities result in damage zones that expand circularly away from the injection point. In the latter case of "moderate" permeabilities, the injection pressure increases substantially beyond the fracturing level. The rock strength and the time step do not change the observed b value of the model for moderate changes.
Milling assisted synthesis of calcium zirconate СаZrО3
NASA Astrophysics Data System (ADS)
Kalinkin, A. M.; Nevedomskii, V. N.; Kalinkina, E. V.; Balyakin, K. V.
2014-08-01
Monophase calcium zirconate (CaZrO3) has been prepared from the equimolar ZrO2 + CaCO3 mixture by two-step synthesis process. In the first step, mechanical treatment of the mixture is performed in an AGO-2 planetary ball mill. In the second step, the milled mixture is annealed to form calcium zirconate. High-energy ball milling of the (ZrO2+CaCO3) mixture results in decrease in the temperature of CaZrO3 formation during annealing at 950 °C. The enhancement of CaZrO3 synthesis is due to accumulation of excess energy by the reagents, decreasing the particle size and notable increase in the interphase area because of “smearing” of CaCO3 on ZrO2 particles during milling. Nanocrystalline calcium zirconate has been produced by controlling the annealing temperature and time.
Bezodis, Ian N; Kerwin, David G; Cooper, Stephen-Mark; Salo, Aki I T
2017-11-15
To understand how training periodization influences sprint performance and key step characteristics over an extended training period in an elite sprint training group. Four sprinters were studied during five months of training. Step velocities, step lengths and step frequencies were measured from video of the maximum velocity phase of training sprints. Bootstrapped mean values were calculated for each athlete for each session and 139 within-athlete, between-session comparisons were made with a repeated measures ANOVA. As training progressed, a link in the changes in velocity and step frequency was maintained. There were 71 between-session comparisons with a change in step velocity yielding at least a large effect size (>1.2), of which 73% had a correspondingly large change in step frequency in the same direction. Within-athlete mean session step length remained relatively constant throughout. Reductions in step velocity and frequency occurred during training phases of high volume lifting and running, with subsequent increases in step velocity and frequency happening during phases of low volume lifting and high intensity sprint work. The importance of step frequency over step length to the changes in performance within a training year was clearly evident for the sprinters studied. Understanding the magnitudes and timings of these changes in relation to the training program is important for coaches and athletes. The underpinning neuro-muscular mechanisms require further investigation, but are likely explained by an increase in force producing capability followed by an increase in the ability to produce that force rapidly.
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k ) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110 〉 and 〈112 〉 steps on the {111 } surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ±1 ) meV/A ˚ for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, C
2007-12-05
The epidemiological and economic modeling of poultry diseases requires knowing the size, location, and operational type of each poultry type operation within the US. At the present time, the only national database of poultry operations that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data'. The NASS data provides census data at the county level on poultry operations for various operation types (i.e., layers, broilers, turkeys, ducks, geese). However, the number of farms and sizes of farms for the various types aremore » not independent since some facilities have more than one type of operation. Furthermore, some data on the number of birds represents the number sold, which does not represent the number of birds present at any given time. In addition, any data tabulated by NASS that could identify numbers of birds or other data reported by an individual respondent is suppressed by NASS and coded with a 'D'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or birds. At the same time, it must account for all the birds, including those for which the data has been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual poultry operations. This technical report documents data processing steps that were used to convert the NASS data into a national poultry facility database with twenty-six facility types (7 egg-laying, 6 broiler, 1 backyard, 3 turkey, and 9 others, representing ducks, geese, ostriches, emus, pigeons, pheasants, quail, game fowl breeders and 'other'). The process involves two major steps. The first step defines the rules used to estimate the data that is suppressed within the NASS database. The first step is similar to the first step used to estimate suppressed data for livestock [Melius et al (2006)]. The second step converts the NASS poultry types into the operational facility types used by the epidemiological and economic model. We also define two additional facility types for high and low risk poultry backyards, and an additional two facility types for live bird markets and swap meets. The distribution of these additional facility types among counties is based on US population census data. The algorithm defining the number of premises and the corresponding distribution among counties and the resulting premises density plots for the continental US are provided.« less
Two-step growth of two-dimensional WSe 2/MoSe 2 heterostructures
Gong, Yongji; Lei, Sidong; Lou, Jun; ...
2015-08-03
Two dimensional (2D) materials have attracted great attention due to their unique properties and atomic thickness. Although various 2D materials have been successfully synthesized with different optical and electrical properties, a strategy for fabricating 2D heterostructures must be developed in order to construct more complicated devices for practical applications. Here we demonstrate for the first time a two-step chemical vapor deposition (CVD) method for growing transition-metal dichalcogenide (TMD) heterostructures, where MoSe 2 was synthesized first and followed by an epitaxial growth of WSe 2 on the edge and on the top surface of MoSe 2. Compared to previously reported one-stepmore » growth methods, this two-step growth has the capability of spatial and size control of each 2D component, leading to much larger (up to 169 μm) heterostructure size, and cross-contamination can be effectively minimized. Furthermore, this two-step growth produces well-defined 2H and 3R stacking in the WSe 2/MoSe 2 bilayer regions and much sharper in-plane interfaces than the previously reported MoSe 2/WSe 2 heterojunctions obtained from one-step growth methods. The resultant heterostructures with WSe 2/MoSe 2 bilayer and the exposed MoSe 2 monolayer display rectification characteristics of a p-n junction, as revealed by optoelectronic tests, and an internal quantum efficiency of 91% when functioning as a photodetector. As a result, a photovoltaic effect without any external gates was observed, showing incident photon to converted electron (IPCE) efficiencies of approximately 0.12%, providing application potential in electronics and energy harvesting.« less
Improving the Elevated-Temperature Properties by Two-Step Heat Treatments in Al-Mn-Mg 3004 Alloys
NASA Astrophysics Data System (ADS)
Liu, K.; Ma, H.; Chen, X. Grant
2018-05-01
In the present work, two-step heat treatments with preheating at different temperatures (175 °C, 250 °C, and 330 °C) as the first step followed by the peak precipitation treatment (375 °C/48 h) as the second step were performed in Al-Mn-Mg 3004 alloys to study their effects on the formation of dispersoids and the evolution of the elevated-temperature strength and creep resistance. During the two-step heat treatments, the microhardness is gradually increased with increasing time to a plateau after 24 hours when first treated at 250 °C and 330 °C, while there is a minor decrease with time when first treated at 175 °C. Results show that both the yield strength (YS) and creep resistance at 300 °C reach the peak values after the two-step treatment of 250 °C/24 h + 375 °C/48 h. The formation of dispersoids is greatly related to the type and size of pre-existing Mg2Si precipitated during the preheating treatments. It was found that coarse rodlike β ' -Mg2Si strongly promotes the nucleation of dispersoids, while fine needle like β ″-Mg2Si has less influence. Under optimized two-step heat treatment and modified alloying elements, the YS at 300 °C can reach as high as 97 MPa with the minimum creep rate of 2.2 × 10-9 s-1 at 300 °C in Al-Mn-Mg 3004 alloys, enabling them as one of the most promising candidates in lightweight aluminum alloys for elevated-temperature applications.
Dondzila, Christopher J; Swartz, Ann M; Keenan, Kevin G; Harley, Amy E; Azen, Razia; Strath, Scott J
2016-12-01
The purpose of this study is to investigate whether an in-home, individually tailored intervention is efficacious in promoting increases in physical activity (PA) and improvements in physical functioning (PF) in low-active older adults. Participants were randomized to two groups for the 8-week intervention. The enhanced physical activity (EPA) group received individualized exercise programming, including personalized step goals and a resistance band training program, and the standard of care (SoC) group received a general activity goal. Pre- and post-intervention PF measures included choice step reaction time, knee extension/flexion strength, hand grip strength, and 8 ft up and go test completion time. Thirty-nine subjects completed this study (74.6 ± 6.4 years). Significant increases in steps/day were observed for both the EPA and SoC groups, although the improvements in the EPA group were significantly higher when including only those who adhered to weekly step goals. Both groups experienced significant PF improvements, albeit greater in the EPA group for the 8 ft up and go test and knee extension strength. A low cost, in-home intervention elicited improvements in both PA and PF. Future research is warranted to expand upon the size and scope of this study, exploring dose thresholds (and time frames) for PA to improve PF and strategies to further bolster adherence rates to maximize intervention benefits.
NASA Astrophysics Data System (ADS)
Tódor, István Sz.; Szabó, László; Marişca, Oana T.; Chiş, Vasile; Leopold, Nicolae
2014-12-01
Colloidal nanoparticle assemblies (NPAs) were obtained in a one-step procedure, by reduction of HAuCl4 by hydroxylamine hydrochloride, at room temperature, without the use of any additional nucleating agent. By changing the order of the reactants, NPAs with mean size of 20 and 120 nm were obtained. Because of their size and irregular popcorn like shape, the larger size NPAs show absorption in the NIR spectral region. The building blocks of the resulted nanoassemblies are spherical nanoparticles with diameters of 4-8 and 10-30 nm, respectively. Moreover, by stabilizing the colloid with bovine serum albumin at different time moments after synthesis, NPAs of controlled size between 20 and 120 nm, could be obtained. The NPAs were characterized using UV-Vis spectroscopy, TEM and SEM electron microscopies. In addition, the possibility of using the here proposed NPAs as surface-enhanced Raman scattering (SERS) substrate was assessed and found to provide a higher enhancement compared to conventional citrate-reduced nanoparticles.
Temperature dependence of the size distribution function of InAs quantum dots on GaAs(001)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arciprete, F.; Fanfoni, M.; Patella, F.
2010-04-15
We present a detailed atomic-force-microscopy study of the effect of annealing on InAs/GaAs(001) quantum dots grown by molecular-beam epitaxy. Samples were grown at a low growth rate at 500 deg. C with an InAs coverage slightly greater than critical thickness and subsequently annealed at several temperatures. We find that immediately quenched samples exhibit a bimodal size distribution with a high density of small dots (<50 nm{sup 3}) while annealing at temperatures greater than 420 deg. C leads to a unimodal size distribution. This result indicates a coarsening process governing the evolution of the island size distribution function which is limitedmore » by the attachment-detachment of the adatoms at the island boundary. At higher temperatures one cannot ascribe a single rate-determining step for coarsening because of the increased role of adatom diffusion. However, for long annealing times at 500 deg. C the island size distribution is strongly affected by In desorption.« less
Influence of the size reduction of organic waste on their anaerobic digestion.
Palmowski, L M; Müller, J A
2000-01-01
The rate-limiting step in anaerobic digestion of organic solid waste is generally their hydrolysis. A size reduction of the particles and the resulting enlargement of the available specific surface can support the biological process in two ways. Firstly, in case of substrates with a high content of fibres and a low xegradability, their comminution yields to an improved digester gas production. This leads to a decreased amount of residues to be disposed of and to an increased quantity of useful digester gas. The second effect of the particle size reduction observed with all the substrates but particularly with those of low degradability is a reduction of the technical digestion time. Furthermore, the particle size of organic waste has an influence on the dewaterability after codigestion with sewage sludge. The presence of organic waste residues improves the dewaterability measured as specific resistance to filtration but this positive effect is attenuated if the particle size of the solids is reduced.
Beamline 10.3.2 at ALS: a hard X-ray microprobe for environmental and materials sciences.
Marcus, Matthew A; MacDowell, Alastair A; Celestre, Richard; Manceau, Alain; Miller, Tom; Padmore, Howard A; Sublett, Robert E
2004-05-01
Beamline 10.3.2 at the ALS is a bend-magnet line designed mostly for work on environmental problems involving heavy-metal speciation and location. It offers a unique combination of X-ray fluorescence mapping, X-ray microspectroscopy and micro-X-ray diffraction. The optics allow the user to trade spot size for flux in a size range of 5-17 microm in an energy range of 3-17 keV. The focusing uses a Kirkpatrick-Baez mirror pair to image a variable-size virtual source onto the sample. Thus, the user can reduce the effective size of the source, thereby reducing the spot size on the sample, at the cost of flux. This decoupling from the actual source also allows for some independence from source motion. The X-ray fluorescence mapping is performed with a continuously scanning stage which avoids the time overhead incurred by step-and-repeat mapping schemes. The special features of this beamline are described, and some scientific results shown.
Surface treated carbon catalysts produced from waste tires for fatty acids to biofuel conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hood, Zachary D.; Adhikari, Shiba P.; Wright, Marcus W.
A method of making solid acid catalysts includes the step of sulfonating waste tire pieces in a first sulfonation step. The sulfonated waste tire pieces are pyrolyzed to produce carbon composite pieces having a pore size less than 10 nm. The carbon composite pieces are then ground to produce carbon composite powders having a size less than 50 .mu.m. The carbon composite particles are sulfonated in a second sulfonation step to produce sulfonated solid acid catalysts. A method of making biofuels and solid acid catalysts are also disclosed.
Method and apparatus for sizing and separating warp yarns using acoustical energy
Sheen, Shuh-Haw; Chien, Hual-Te; Raptis, Apostolos C.; Kupperman, David S.
1998-01-01
A slashing process for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns.
Langevin dynamics in inhomogeneous media: Re-examining the Itô-Stratonovich dilemma
NASA Astrophysics Data System (ADS)
Farago, Oded; Grønbech-Jensen, Niels
2014-01-01
The diffusive dynamics of a particle in a medium with space-dependent friction coefficient is studied within the framework of the inertial Langevin equation. In this description, the ambiguous interpretation of the stochastic integral, known as the Itô-Stratonovich dilemma, is avoided since all interpretations converge to the same solution in the limit of small time steps. We use a newly developed method for Langevin simulations to measure the probability distribution of a particle diffusing in a flat potential. Our results reveal that both the Itô and Stratonovich interpretations converge very slowly to the uniform equilibrium distribution for vanishing time step sizes. Three other conventions exhibit significantly improved accuracy: (i) the "isothermal" (Hänggi) convention, (ii) the Stratonovich convention corrected by a drift term, and (iii) a newly proposed convention employing two different effective friction coefficients representing two different averages of the friction function during the time step. We argue that the most physically accurate dynamical description is provided by the third convention, in which the particle experiences a drift originating from the dissipation instead of the fluctuation term. This feature is directly related to the fact that the drift is a result of an inertial effect that cannot be well understood in the Brownian, overdamped limit of the Langevin equation.
NASA Astrophysics Data System (ADS)
Silveira, Ana J.; Abreu, Charlles R. A.
2017-09-01
Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.
NASA Astrophysics Data System (ADS)
Xiao, Bin; Tang, Yu; Ma, Guodong; Ma, Ning; Du, Piyi
2015-06-01
The microstructure-property relation in ferroelectric/ferromagnetic composite is investigated in detail, exemplified by typical sol-gel-derived 0.3BTO/0.7NZFO ceramic composite. The effect of microstructural factors including intergrain connectivity, grain size and interfaces on the dielectric and magnetic properties of the composite prepared by conventional ceramic method and three-step sintering method is discussed both experimentally and theoretically. It reveals that the dielectric behavior of the composite is controlled by a hybrid dielectric process that combines the contribution of Debye-like dipoles and Maxwell-Wagner (M-W or interfacial) polarization. Enhanced dielectric, magnetic and conductive behaviors appear in the composite with better intergrain connectivity and larger grain size derived by sol-gel route and three-step sintering method. The effective permittivity contributed by Debye-like dipoles exhibits a value of ~130,000 in three-step sintered composite, which is almost the same as that in conventionally sintered one, but that contributed by M-W response is much smaller in the former. Compared with conventionally prepared samples, the relaxation time ( τ) is 3.476 × 10-6 s, about one order of magnitude smaller, and the dc electrical conductivity is 3.890 × 10-3 S/m, one order of magnitude higher in three-step sintered composite. The minimum dielectric loss reveals almost the same (~0.2) for all samples, but shows distinguishable difference in low-frequency region. Meanwhile, an initial permeability of 84, twice as large as that of conventionally prepared composite and 56 % the value of single-phased NZFO ferrite (~150), and a saturation magnetization of 63.5 emu/g, 32 % higher than that of conventional one and approximately 84 % the value of single-phased NZFO ferrite (~76 emu/g), appear simultaneously in three-step sintered composite with larger grain size and better intergrain connectivity. It is clear that the discovery is helpful for establishing a more explicit view on the physics of multi-functional composite materials, while the composite with optimized microstructure is beneficial to be used as a high-performance material.
Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H
2014-05-28
Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N(3)) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMPI implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
Seidel, Dirk; Gray, Lyle S; Heron, Gordon
2005-04-01
Decreased blur-sensitivity found in myopia has been linked with reduced accommodation responses and myopigenesis. Although the mechanism for myopia progression remains unclear, it is commonly known that myopic patients rarely report near visual symptoms and are generally very sensitive to small changes in their distance prescription. This experiment investigated the effect of monocular and binocular viewing on static and dynamic accommodation in emmetropes and myopes for real targets to monitor whether inaccuracies in the myopic accommodation response are maintained when a full set of visual cues, including size and disparity, is available. Monocular and binocular steady-state accommodation responses were measured with a Canon R1 autorefractor for target vergences ranging from 0-5 D in emmetropes (EMM), late-onset myopes (LOM), and early-onset myopes (EOM). Dynamic closed-loop accommodation responses for a stationary target at 0.25 m and step stimuli of two different magnitudes were recorded for both monocular and binocular viewing. All refractive groups showed similar accommodation stimulus response curves consistent with previously published data. Viewing a stationary near target monocularly, LOMs demonstrated slightly larger accommodation microfluctuations compared with EMMs and EOMs; however, this difference was absent under binocular viewing conditions. Dynamic accommodation step responses revealed significantly (p < 0.05) longer response times for the myopic subject groups for a number of step stimuli. No significant difference in either reaction time or the number of correct responses for a given number of step-vergence changes was found between the myopic groups and EMMs. When viewing real targets with size and disparity cues available, no significant differences in the accuracy of static and dynamic accommodation responses were found among EMM, EOM, and LOM. The results suggest that corrected myopes do not experience dioptric blur levels that are substantially different from emmetropes when they view free space targets.
Comparison of pedometer and accelerometer measures of free-living physical activity.
Tudor-Locke, Catrine; Ainsworth, Barbara E; Thompson, Raymond W; Matthews, Charles E
2002-12-01
The purpose of this investigation was 1) to evaluate agreement between dual-mode CSA accelerometer outputs and Yamax pedometer outputs assessed concurrently under free-living conditions; 2) to determine the relationship between pedometer-steps per day and CSA-time spent in inactivity and in light-, moderate-, and vigorous-intensity activities; and 3) to identify a value of pedometer-steps per day that corresponds with a minimum of 30 CSA-min x d(-1) of moderate ambulatory activity. Data were analyzed from 52 participants (27 men, 25 women; mean age = 38.2 +/- 12.0 yr; mean BMI = 26.4 +/- 4.5 kg x m(-2)) who were enrolled in the International Physical Activity Questionnaire study and wore both motion sensors during waking hours for 7 consecutive days. Participants averaged 415.0+/-159.5 CSA-counts x min(-1) x d(-1), 357,601 +/- 138,425 CSA-counts x d(-1), 11,483 +/- 3,856 CSA-steps x d(-1), and 9,638 +/- 4,030 pedometer-steps x d(-1). There was a strong relationship between all CSA outputs and pedometer outputs (r = 0.74-0.86). The mean difference in steps detected between instruments was 1845+/-2116 steps x d(-1) (CSA > pedometer; t = 6.29, P < 0.0001). There were distinct differences (effect sizes >0.80) in mean CSA-time (min x d(-1)) in moderate and vigorous activity with increasing pedometer-determined activity quartiles; no differences were noted for inactivity or light activity. Approximately 33 CSA-min x d(-1) of moderate activity corresponded with 8000 pedometer-steps x d(-1). Differences in mean steps per day detected may be due to differences in set instrument sensitivity thresholds and/or attachment. Additional studies with different populations are needed to confirm a recommended number of steps per day associated with the duration and intensity of public health recommendations for ambulatory activity.
The dispersion of particles in a separated backward-facing step flow
NASA Astrophysics Data System (ADS)
Ruck, B.; Makiola, B.
1991-05-01
Flows in technical and natural circuits often involve a particulate phase. To measure the dynamics of suspended, naturally resident or artificially seeded particles in the flow, optical measuring techniques, e.g., laser Doppler anemometry (LDA) can be used advantageously. In this paper the dispersion of particles in a single-sided backward-facing step flow is investigated by LDA. The investigation is of relevance for both, two-phase flow problems in separated flows with the associated particle diameter range of 1-70 μm and the accuracy of LDA with tracer particles of different sizes. The latter is of interest for all LDA applications to measure continuous phase properties, where interest for experimental restraints require tracer diameters in the upper micrometer range, e.g., flame resistant particles for measurements inside reactors, cylinders, etc. For the experiments, a closed-loop wind tunnel with a step expansion was used. Part of this tunnel, the test section, was made of glass. The step had a height H=25 mm (channel height before the step 25 mm, after 50 mm, i.e., an expansion ratio of 2). The width of the channel was 500 mm. The length of the glass test section was chosen as 116 step heights. The wind tunnel, driven by a radial fan, allowed flow velocities up to 50 m/sec which is equivalent to ReH=105. Seeding was performed with particles of well-known size: 1, 15, 30, and 70 μm in diameter. As 1 μm tracers oil droplets were used, whereas for the upper micron range starch particles (density 1.500 kg/m3) were chosen. Starch particles have a spherical shape and are not soluble in cold water. Particle velocities were measured locally using a conventional 1-D LDA system. The measurements deliver the resultant ``flow'' field information stemming from different particle size classes. Thus, the particle behavior in the separated flow field can be resolved. The results show that with increasing particle size, the particle velocity field differs increasingly from the flow field of the continuous phase (inferred from the smallest tracers used). The velocity fluctuations successively decrease with increasing particle diameter. In separation zones, bigger particles have a lower mean velocity than smaller ones. The opposite holds for the streamwise portions of the particle velocity field, where bigger particles show a higher velocity. The measurements give detailed insight into the particle dynamics in separated flow regions. LDA-measured dividing streamlines and lines of zero velocity of different particle classes in the recirculation region have been plotted and compared. In LDA the use of tracer particles in the upper micrometer size range leads to erroneous determinations of continuous phase flow characteristics. It turned out that the dimensions of the measured recirculation zones are reduced with increasing particle diameter. The physical reasons for these findings (relaxation time of particles, Stokes numbers, etc.) are explained in detail.
Earthquake models using rate and state friction and fast multipoles
NASA Astrophysics Data System (ADS)
Tullis, T.
2003-04-01
The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearly of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. In order to model significant sized earthquakes this requires millions of elements. Modeling methods like the boundary element method that involve Green's functions normally require computation times that increase with the number N of elements squared, so using large N becomes impossible. We have adapted the Fast Multipole method to this problem in which the influence of sufficiently remote elements are grouped together and the elements are indexed such that the computations more efficient when run on parallel computers. Compute time varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problematic for sample preparation in biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure an optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing, when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which perform at the same time the encapsulation of DLD sorted particles and the balance of output pressures. The optimized pressures to apply on DLD modules and on T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490
Sample Size Calculations for Micro-randomized Trials in mHealth
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.
2015-01-01
The use and development of mobile interventions are experiencing rapid growth. In “just-in-time” mobile interventions, treatments are provided via a mobile device and they are intended to help an individual make healthy decisions “in the moment,” and thus have a proximal, near future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a “micro-randomized” trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831
NASA Astrophysics Data System (ADS)
Sathya, Ayyappan; Kalyani, S.; Ranoo, Surojit; Philip, John
2017-10-01
To realize magnetic hyperthermia as an alternate stand-alone therapeutic procedure for cancer treatment, magnetic nanoparticles with optimal performance, within the biologically safe limits, are to be produced using simple, reproducible and scalable techniques. Herein, we present a simple, one-step approach for synthesis of water-dispersible magnetic nanoclusters (MNCs) of superparamagnetic iron oxide by reducing of Fe2(SO4)3 in sodium acetate (alkali), poly ethylene glycol (capping ligand), and ethylene glycol (solvent and reductant) in a microwave reactor. The average size and saturation magnetization of the MNC's are tuned from 27 to 52 nm and 32 to 58 emu/g by increasing the reaction time from 10 to 600 s. Transmission electron microscopy images reveal that each MNC composed of large number of primary Fe3O4 nanoparticles. The synthesised MNCs show excellent colloidal stability in aqueous phase due to the adsorbed PEG layer. The highest SAR value of 215 ± 10 W/gFe observed in 52 nm size MNC at a frequency of 126 kHz and field of 63 kA/m suggest the potential use of these MNC in hyperthermia applications. This study further opens up the possibilities to develop metal ion-doped MNCs with tunable sizes suitable for various biomedical applications using microwave assisted synthesis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Applicability of corrosion control treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper...
Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E
2018-01-01
Abstract We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083
Liu, Zhi-Hua; Chen, Hong-Zhang
2017-01-01
The simultaneous saccharification and fermentation (SSF) of corn stover biomass for ethanol production was performed by integrating steam explosion (SE) pretreatment, hydrolysis and fermentation. Higher SE pretreatment severity and two-step size reduction increased the specific surface area, swollen volume and water holding capacity of steam exploded corn stover (SECS) and hence facilitated the efficiency of hydrolysis and fermentation. The ethanol production and yield in SSF increased with the decrease of particle size and post-washing of SECS prior to fermentation to remove the inhibitors. Under the SE conditions of 1.5MPa and 9min using 2.0cm particle size, glucan recovery and conversion to glucose by enzymes were 86.2% and 87.2%, respectively. The ethanol concentration and yield were 45.0g/L and 85.6%, respectively. With this two-step size reduction and post-washing strategy, the water utilization efficiency, sugar recovery and conversion, and ethanol concentration and yield by the SSF process were improved. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nanobiological studies on drug design using molecular mechanic method.
Ghaheh, Hooria Seyedhosseini; Mousavi, Maryam; Araghi, Mahmood; Rasoolzadeh, Reza; Hosseini, Zahra
2015-01-01
Influenza H1N1 is very important worldwide and point mutations that occur in the virus gene are a threat for the World Health Organization (WHO) and druggists, since they could make this virus resistant to the existing antibiotics. Influenza epidemics cause severe respiratory illness in 30 to 50 million people and kill 250,000 to 500,000 people worldwide every year. Nowadays, drug design is not done through trial and error because of its cost and waste of time; therefore bioinformatics studies is essential for designing drugs. This paper, infolds a study on binding site of Neuraminidase (NA) enzyme, (that is very important in drug design) in 310K temperature and different dielectrics, for the best drug design. Information of NA enzyme was extracted from Protein Data Bank (PDB) and National Center for Biotechnology Information (NCBI) websites. The new sequences of N1 were downloaded from the NCBI influenza virus sequence database. Drug binding sites were assimilated and homologized modeling using Argus lab 4.0, HyperChem 6.0 and Chem. D3 softwares. Their stability was assessed in different dielectrics and temperatures. Measurements of potential energy (Kcal/mol) of binding sites of NA in different dielectrics and 310K temperature revealed that at time step size = 0 pSec drug binding sites have maximum energy level and at time step size = 100 pSec have maximum stability and minimum energy. Drug binding sites are more dependent on dielectric constants rather than on temperature and the optimum dielectric constant is 39/78.
Temperature controlled formation of lead/acid batteries
NASA Astrophysics Data System (ADS)
Bungardt, M.
At present, standard formation programs have to accommodate the worst case. This is important, especially in respect of variations in climatic conditions. The standard must be set so that during the hottest weather periods the maximum electrolyte temperature is not exceeded. As this value is defined not only by the desired properties and the recipe of the active mass, but also by type and size of the separators and by the dimensions of the plates, general rules cannot be formulated. It is considered to be advantageous to introduce limiting data for the maximum temperature into a general formation program. The latter is defined so that under normal to good ambient conditions the shortest formation time is achieved. If required, the temperature control will reduce the currents employed in the different steps, according to need, and will extend the formation time accordingly. With computer-controlled formation, these parameters can be readily adjusted to suit each type of battery and can also be reset according to modifications in the preceding processing steps. Such a procedure ensures that: (i) the formation time is minimum under the given ambient conditions; (ii) in the event of malpractice ( e.g. actual program not fitting to size) the batteries will not be destroyed; (iii) the energy consumption is minimized (note, high electrolyte temperature leads to excess gassing). These features are incorporated in the BA/FOS-500 battery formation system developed by Digatron. The operational characteristics of this system are listed in Table 1.
Milner, Phillip J; Martell, Jeffrey D; Siegelman, Rebecca L; Gygi, David; Weston, Simon C; Long, Jeffrey R
2018-01-07
Alkyldiamine-functionalized variants of the metal-organic framework Mg 2 (dobpdc) (dobpdc 4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO 2 capture applications owing to their unique step-shaped CO 2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary , secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO 2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO 2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg 2 (dobpdc) and leads to decreased CO 2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg 2 (dotpdc) (dotpdc 4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg 2 (pc-dobpdc) (pc-dobpdc 4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para -carboxylate), which, in contrast to Mg 2 (dobpdc), possesses uniformally hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO 2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg 2 (pc-dobpdc) with large diamines such as N -( n -heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.
Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.; ...
2017-10-26
Alkyldiamine-functionalized variants of the metal–organic framework Mg 2(dobpdc) (dobpdc 4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO 2 capture applications owing to their unique step-shaped CO 2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary,secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO 2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO 2 adsorption/desorption profiles. This two-step behaviormore » likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg 2(dobpdc) and leads to decreased CO 2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg 2(dotpdc) (dotpdc 4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg 2(pc-dobpdc) (pc-dobpdc 4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg 2(dobpdc), possesses uniformally hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO 2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg 2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.« less
You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.
McShane, Blakeley B; Böckenholt, Ulf
2014-11-01
Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. It means that for FDTD simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented our code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. From results, we found the use of previous generation of CPU/GPU speeds computations by an order of magnitude over a parallel CPU only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
Temporal variability and memory in sediment transport in an experimental step-pool channel
NASA Astrophysics Data System (ADS)
Saletti, Matteo; Molnar, Peter; Zimmermann, André; Hassan, Marwan A.; Church, Michael
2015-11-01
Temporal dynamics of sediment transport in steep channels using two experiments performed in a steep flume (8%) with natural sediment composed of 12 grain sizes are studied. High-resolution (1 s) time series of sediment transport were measured for individual grain-size classes at the outlet of the flume for different combinations of sediment input rates and flow discharges. Our aim in this paper is to quantify (a) the relation of discharge and sediment transport and (b) the nature and strength of memory in grain-size-dependent transport. None of the simple statistical descriptors of sediment transport (mean, extreme values, and quantiles) display a clear relation with water discharge, in fact a large variability between discharge and sediment transport is observed. Instantaneous transport rates have probability density functions with heavy tails. Bed load bursts have a coarser grain-size distribution than that of the entire experiment. We quantify the strength and nature of memory in sediment transport rates by estimating the Hurst exponent and the autocorrelation coefficient of the time series for different grain sizes. Our results show the presence of the Hurst phenomenon in transport rates, indicating long-term memory which is grain-size dependent. The short-term memory in coarse grain transport increases with temporal aggregation and this reveals the importance of the sampling duration of bed load transport rates in natural streams, especially for large fractions.
AMOEBA clustering revisited. [cluster analysis, classification, and image display program
NASA Technical Reports Server (NTRS)
Bryant, Jack
1990-01-01
A description of the clustering, classification, and image display program AMOEBA is presented. Using a difficult high resolution aircraft-acquired MSS image, the steps the program takes in forming clusters are traced. A number of new features are described here for the first time. Usage of the program is discussed. The theoretical foundation (the underlying mathematical model) is briefly presented. The program can handle images of any size and dimensionality.
Screening of cyanobacterial extracts for synthesis of silver nanoparticles.
Husain, Shaheen; Sardar, Meryam; Fatma, Tasneem
2015-08-01
Improvement of reliable and eco-friendly process for synthesis of metallic nanoparticles is a significant step in the field of application nanotechnology. One approach that shows vast potential is based on the biosynthesis of nanoparticles using micro-organisms. In this study, biosynthesis of silver nanoparticles (AgNP) using 30 cyanobacteria were investigated. Cyanobacterial aqueous extracts were subjected to AgNP synthesis at 30 °C. Scanning of these aqueous extracts containing AgNP in UV-Visible range showed single peak. The λ max for different extracts varied and ranged between 440 and 490 nm that correspond to the "plasmon absorbance" of AgNP. Micrographs from scanning electron microscope of AgNP from cyanobacterial extracts showed that though synthesis of nanoparticles occurred in all strains but their reaction time, shape and size varied. Majority of the nanoparticles were spherical. Time taken for induction of nanoparticles synthesis by cyanobacterial extracts ranged from 30 to 360 h and their size from 38 to 88 nm. In terms of size Cylindrospermum stagnale NCCU-104 was the best organism with 38 and 40 nm. But in terms of time Microcheate sp. NCCU-342 was the best organism as it took 30 h for AgNP synthesis.
Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popeski-Dimovski, Riste
Calcium-alginate micro particles have been used extensively in drug delivery systems. Therefore we establish a one-step method for preparation of internally gelated micro particles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratio and molar weight. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that with this method, micro particles with size distribution around 4 micrometers can be prepared, and SEM imaging showed that those particles are spherical in shape.
Shear Melting of a Colloidal Glass
NASA Astrophysics Data System (ADS)
Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.
2010-01-01
We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ˜0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ˜3 particles.
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Hydrothermal pre-treatment of oil palm empty fruit bunch into fermentable sugars
NASA Astrophysics Data System (ADS)
Muhd Ali, M. D.; Tamunaidu, P.; Nor Aslan, A. K. H.; Morad, N. A.; Sugiura, N.; Goto, M.; Zhang, Z.
2016-06-01
Presently oil palm empty fruit bunch (OPEFB) is one of the solid waste which is produced daily whereby it is usually left at plantation site to act as organic fertilizer for the plants to ensure the sustainability of fresh fruit bunch. The major drawback in biomass conversion technology is the difficulty of degrading the material in a short period of time. A pre-treatment step is required to break the lignocellulosic biomass to easily accessible carbon sources for further use in the production of fuels and fine chemicals. Therefore, this study investigated the effect of hydrothermal pre-treatment under different reaction temperatures (100 - 250°C), reaction time (10 - 40 min), solid to solvent ratio of (1:10 - 1:20 w/v) and particle size (0.15 - 1.00 mm) on the solubilization of OPEFB to produce soluble fermentable sugars. The maximum soluble sugars of 68.18 mg glucose per gram of OPEFB were achieved at 175°C of reaction temperature, 20 min of reaction time, 1:15 w/v of solid to solvent ratio for 30 mm of particle size. Results suggest that reaction temperature, reaction time, the amount of solid to solvent ratio and size of the particle are crucial parameters for hydrothermal pretreatment, in achieving a high yield of soluble fermentable sugars.
Weakly superconducting, thin-film structures as radiation detectors.
NASA Technical Reports Server (NTRS)
Kirschman, R. K.
1972-01-01
Measurements were taken with weakly superconducting quantum structures of the Notarys-Mercereau type, representing a thin superconductor film with a short region that is weakened in the sense that its transition temperature is lower than in the remaining portion of the film. The structure acts as a superconducting relaxation oscillator in which the supercurrent increases with time until the critical current of the weakened section is attained, at which moment the supercurrent decays and the cycle repeats. Under applied radiation, a series of constant-voltage steps appears in the current-voltage curve, and the size of the steps varies periodically with the amplitude of applied radiation. Measurements of the response characteristics were made in the frequency range of 10 to 450 MHz.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
[A focused sound field measurement system by LabVIEW].
Jiang, Zhan; Bai, Jingfeng; Yu, Ying
2014-05-01
In this paper, according to the requirement of the focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search the focus position of the sound field, and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time reduced from 888 hours in uniform step to 9.25 hours in variable step. The efficiency of the focused sound field measurement was improved. There is a certain deviation between measurement results and theoretical calculation results. Focal plane--6 dB width difference rate was 3.691%, the beam axis--6 dB length differences rate was 12.937%.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-01-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
Modeling snail breeding in a bioregenerative life support system
NASA Astrophysics Data System (ADS)
Kovalev, V. S.; Manukovsky, N. S.; Tikhomirov, A. A.; Kolmakova, A. A.
2015-07-01
The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split up into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via the stoichiometric equation; the second submodel is used to optimize the age structure and the size of the snail population. Daily intake of snail meat by crewmen is a guideline which specifies the population productivity. The mass exchange of the snail unit inhabited by land snails of Achatina fulica is given as an outcome of step-by-step modeling. All simulations are performed using Solver Add-In of Excel 2007.
Variation of nanopore diameter along porous anodic alumina channels by multi-step anodization.
Lee, Kwang Hong; Lim, Xin Yuan; Wai, Kah Wing; Romanato, Filippo; Wong, Chee Cheong
2011-02-01
In order to form tapered nanocapillaries, we investigated a method to vary the nanopore diameter along the porous anodic alumina (PAA) channels using multi-step anodization. By anodizing the aluminum in either single acid (H3PO4) or multi-acid (H2SO4, oxalic acid and H3PO4) with increasing or decreasing voltage, the diameter of the nanopore along the PAA channel can be varied systematically corresponding to the applied voltages. The pore size along the channel can be enlarged or shrunken in the range of 20 nm to 200 nm. Structural engineering of the template along the film growth direction can be achieved by deliberately designing a suitable voltage and electrolyte together with anodization time.
NASA Astrophysics Data System (ADS)
Ebrahimi-Kahrizsangi, Reza; Nasiri-Tabrizi, Bahman; Chami, Akbar
2010-09-01
In this paper, synthesis of bionanocomposite of fluorapatite-titania (FAp-TiO 2) was studied by using one step mechanochemical process. Characterization of the products was accomplished by X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy, energy dispersive X-ray spectroscopy (EDX), scanning electron microscopy (SEM), and transmission electron microscopy (TEM) techniques. Based on XRD patterns and FT-IR spectroscopy, correlation between the structural features of the nanostructured FAp-TiO 2 and the process conditions was discussed. Variations in crystallite size, lattice strain, and volume fraction of grain boundary were investigated during milling and the following heat treatment. Crystallization of the nanocomposite occurred after thermal treatment at 650 °C. Morphological features of powders were influenced by the milling time. The resulting FAp-20 wt.%TiO 2 nanocomposite powder exhibited an average particle size of 15 nm after 20 h of milling. The results show that the one step mechanosynthesis technique is an effective route to prepare FAp-based nanocomposites with excellent morphological and structural features.
Automated image segmentation-assisted flattening of atomic force microscopy images.
Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin
2018-01-01
Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally manually excluded using rectangular masks in image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from raw images to get the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme were presented, followed by the investigation of the influence of a sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening on the morphological characterization and segmentation of AFM images were verified with the proposed method.
Formation of enriched black tea extract loaded chitosan nanoparticles via electrospraying
NASA Astrophysics Data System (ADS)
Hammond, Samuel James
Creating nanoparticles of beneficial nutraceuticals and pharmaceuticals has had a large surge of research due to the enhancement of absorption and bioavailability by decreasing their size. One of these ways is by electrohydrodynamic atomization, also known as electrospraying. In general, this novel process is done by forcing a liquid through a capillary nozzle and which is subjected to an electrical field. While there are different ways to create nanoparticles, the novel method of electrospraying can be beneficial over other types of nanoparticle formation. Reasons include high control over particle size and distribution by altering electrospray parameters (voltage, flow rate, distance, and time), higher encapsulation efficiency than other methods, and also it is a one step process without exposure to extreme conditions (Gomez-Estaca et. al. 2012, Jaworek and Sobcyzk 2008). The current study aimed to create a chitosan encapsulated theaflavin-2 enriched black tea extract (BTE) nanoparticles via electrospraying. The first step of this process was to create the smallest chitosan nanoparticles possible by altering the electrospray parameters and the chitosan-acetic acid solution parameters. The solution properties altered include chitosan molecular weight, acetic acid concentration, and chitosan concentration. Specifically, the electrospray parameters such as voltage, flow rate and distance from syringe to collector are the most important in determining particle size. After creating the smallest chitosan particles, the TF-2 enriched black tea extract was added to the chitosan-acetic acid solution to be electrosprayed. The particles were assessed with the following procedures: Atomic force microscopy (AFM) and scanning electron microscopy (SEM) for particle morphology and size, and loading efficiency with ultraviolet--visible spectrophotometer (UV-VIS). Chitosan-BTE nanoparticles were successfully created in a one step process. Diameter of the particles on average ranged from 255 nm to 560 nm. Encapsulation efficiency was above 95% for all but one sample set. Future work includes MTT assay and cellular uptake.
Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery
NASA Technical Reports Server (NTRS)
Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj
1994-01-01
Compression of Advanced Very high Resolution Radiometers (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
Study of CdTe quantum dots grown using a two-step annealing method
NASA Astrophysics Data System (ADS)
Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2006-02-01
High size dispersion, large average radius of quantum dot and low-volume ratio has been a major hurdle in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and the theoretical model of absorption spectra have shown that quantum dots grown using two-step annealing have lower average radius, lesser size dispersion, higher volume ratio and higher decrease in bulk free energy as compared to quantum dots grown conventionally.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
NASA Astrophysics Data System (ADS)
Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.
2014-06-01
In this study the effect of multi-passes warm rolling of AZ31 magnesium alloy on texture, microstructure, grain size variation and hardness of as cast sample (A) and two rolled samples (B & C) taken from different locations of the as-cast ingot was investigated. The purpose was to enhance the formability of AZ31 alloy in order to help manufacturability. It was observed that multi-passes warm rolling (250°C to 350°C) of samples B & C with initial thickness 7.76mm and 7.73 mm was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. The step numbers 1 to 4 consist of 5, 2, 11 and 3 passes respectively, the remaining steps 5 to 10 were single pass rolls. In each discrete step a fixed roll gap is used in a way that true strain per step increases very slowly from 0.0067 in the first step to 0.7118 in the 26th step. Both samples B & C showed very similar behavior after 26th pass and were successfully rolled up to 85% thickness reduction. However, during 10th step (27th pass) with a true strain value of 0.772 the sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction. The equiaxed grain structure of sample C may be due to the effective involvement of dynamic recrystallization (DRX) which led to formation of these grains with relatively low misorientations with respect to the parent as cast grains. The sample B on the other hand showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90 % total reduction and DRX could not effectively play its role due to heavy strain and lack of plastic deformation systems. The microstructure of as cast sample showed a near-random texture (mrd 4.3), with average grain size of 44 & micro-hardness of 52 Hv. The grain size of sample B and C was 14μm and 27μm respectively and mrd intensity of basal texture was 5.34 and 5.46 respectively. The hardness of sample B and C came out to be 91 and 66 Hv respectively due to reduction in grain size and followed the well known Hall-Petch relationship.
Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine
2018-01-01
Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the more robust model. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patients slope variability is high. Two real data sets on neurodegenerative disease and on obesity illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. The application of the clustering of longitudinal data by using an extended baseline method in exploratory analysis as the first stage before setting up stratified designs can provide a better estimation of treatment effect in future clinical trials.
Woods, H Arthur; Dillon, Michael E; Pincebourde, Sylvain
2015-12-01
We analyze the effects of changing patterns of thermal availability, in space and time, on the performance of small ectotherms. We approach this problem by breaking it into a series of smaller steps, focusing on: (1) how macroclimates interact with living and nonliving objects in the environment to produce a mosaic of thermal microclimates and (2) how mobile ectotherms filter those microclimates into realized body temperatures by moving around in them. Although the first step (generation of mosaics) is conceptually straightforward, there still exists no general framework for predicting spatial and temporal patterns of microclimatic variation. We organize potential variation along three axes-the nature of the objects producing the microclimates (abiotic versus biotic), how microclimates translate macroclimatic variation (amplify versus buffer), and the temporal and spatial scales over which microclimatic conditions vary (long versus short). From this organization, we propose several general rules about patterns of microclimatic diversity. To examine the second step (behavioral sampling of locally available microclimates), we construct a set of models that simulate ectotherms moving on a thermal landscape according to simple sets of diffusion-based rules. The models explore the effects of both changes in body size (which affect the time scale over which organisms integrate operative body temperatures) and increases in the mean and variance of temperature on the thermal landscape. Collectively, the models indicate that both simple behavioral rules and interactions between body size and spatial patterns of thermal variation can profoundly affect the distribution of realized body temperatures experienced by ectotherms. These analyses emphasize the rich set of problems still to solve before arriving at a general, predictive theory of the biological consequences of climate change. Copyright © 2014 Elsevier Ltd. All rights reserved.
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images with a resolution up to 4,800x2,400 pixels. Because of the size of the database, a high reliability as well as a high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance critical in both the detection rate as the overall processing time. The second stage locates suggestions for traffic signs using recently published feature processing. The third stage contains a validation step to enhance reliability of the algorithm. During this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms, with respect to both detection rate and throughput
Roch, Samuel; Brinker, Alexander
2017-04-18
The rising evidence of microplastic pollution impacts on aquatic organisms in both marine and freshwater ecosystems highlights a pressing need for adequate and comparable detection methods. Available tissue digestion protocols are time-consuming (>10 h) and/or require several procedural steps, during which materials can be lost and contaminants introduced. This novel approach comprises an accelerated digestion step using sodium hydroxide and nitric acid in combination to digest all organic material within 1 h plus an additional separation step using sodium iodide which can be used to reduce mineral residues in samples where necessary. This method yielded a microplastic recovery rate of ≥95%, and all tested polymer types were recovered with only minor changes in weight, size, and color with the exception of polyamide. The method was also shown to be effective on field samples from two benthic freshwater fish species, revealing a microplastic burden comparable to that indicated in the literature. As a consequence, the present method saves time, minimizes the loss of material and the risk of contamination, and facilitates the identification of plastic particles and fibers, thus providing an efficient method to detect and quantify microplastics in the gastrointestinal tract of fishes.
Multiple stage miniature stepping motor
Niven, William A.; Shikany, S. David; Shira, Michael L.
1981-01-01
A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.
Kurz, Ilan; Gimmon, Yoav; Shapiro, Amir; Debi, Ronen; Snir, Yoram; Melzer, Itshak
2016-03-04
Falls are common among elderly, most of them occur while slipping or tripping during walking. We aimed to explore whether a training program that incorporates unexpected loss of balance during walking able to improve risk factors for falls. In a double-blind randomized controlled trial 53 community dwelling older adults (age 80.1±5.6 years), were recruited and randomly allocated to an intervention group (n = 27) or a control group (n = 26). The intervention group received 24 training sessions over 3 months that included unexpected perturbation of balance exercises during treadmill walking. The control group performed treadmill walking with no perturbations. The primary outcome measures were the voluntary step execution times, traditional postural sway parameters and Stabilogram-Diffusion Analysis. The secondary outcome measures were the fall efficacy Scale (FES), self-reported late life function (LLFDI), and Performance-Oriented Mobility Assessment (POMA). Compared to control, participation in intervention program that includes unexpected loss of balance during walking led to faster Voluntary Step Execution Times under single (p = 0.002; effect size [ES] =0.75) and dual task (p = 0.003; [ES] = 0.89) conditions; intervention group subjects showed improvement in Short-term Effective diffusion coefficients in the mediolateral direction of the Stabilogram-Diffusion Analysis under eyes closed conditions (p = 0.012, [ES] = 0.92). Compared to control there were no significant changes in FES, LLFDI, and POMA. An intervention program that includes unexpected loss of balance during walking can improve voluntary stepping times and balance control, both previously reported as risk factors for falls. This however, did not transferred to a change self-reported function and FES. ClinicalTrials.gov NCT01439451 .
Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations
Casulli, V.; Cheng, R.T.
1990-01-01
In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. ?? 1990.
The value of shoe size for prediction of the timing of the pubertal growth spurt
2011-01-01
Background Knowing the timing of the pubertal growth spurt of the spine, represented by sitting height, is essential for the prognosis and therapy of adolescent idiopathic scoliosis. There are several indicators that reflect growth or remaining growth of the patient. For example, distal body parts have their growth spurt earlier in adolescence, and therefore the growth of the foot can be an early indicator for the growth spurt of sitting height. Shoe size is a good alternative for foot length, since patients can remember when they bought new shoes and what size these shoes were. Therefore the clinician already has access to some longitudinal data at the first visit of the patient to the outpatient clinic. The aim of this study was to describe the increase in shoe size during adolescence and to determine whether the timing of the peak increase could be an early indicator for the timing of the peak growth velocity of sitting height. Methods Data concerning shoe sizes of girls and boys were acquired from two large shoe shops from 1991 to 2008. The longitudinal series of 242 girls and 104 boys were analysed for the age of the "peak increase" in shoe size, as well as the age of cessation of foot growth based on shoe size. Results The average peak increase in shoe size occurred at 10.4 years (SD 1.1) in girls and 11.5 years (SD 1.5) in boys. This was on average 1.3 years earlier than the average peak growth velocity of sitting height in girls, and 2.5 years earlier in boys. The increase in shoe size diminishes when the average peak growth velocity of sitting height takes place at respectively 12.0 (SD 0.8) years in girls, and 13.7 (SD 1.0) years in boys. Conclusions Present data suggest that the course of the shoe size of children visiting the outpatient clinic can be a useful first tool for predicting the timing of the pubertal growth spurt of sitting height, as a representative for spinal length. This claim needs verification by direct comparison of individual shoe size and sitting height data and than a step forward can be made in clinical decision making regarding adolescent idiopathic scoliosis. PMID:21251310
A glitch in the millisecond pulsar J0613-0200
NASA Astrophysics Data System (ADS)
McKee, J. W.; Janssen, G. H.; Stappers, B. W.; Lyne, A. G.; Caballero, R. N.; Lentati, L.; Desvignes, G.; Jessner, A.; Jordan, C. A.; Karuppusamy, R.; Kramer, M.; Cognard, I.; Champion, D. J.; Graikou, E.; Lazarus, P.; Osłowski, S.; Perrodin, D.; Shaifullah, G.; Tiburzi, C.; Verbiest, J. P. W.
2016-09-01
We present evidence for a small glitch in the spin evolution of the millisecond pulsar J0613-0200, using the EPTA Data Release 1.0, combined with Jodrell Bank analogue filterbank times of arrival (TOAs) recorded with the Lovell telescope and Effelsberg Pulsar Observing System TOAs. A spin frequency step of 0.82(3) nHz and frequency derivative step of -1.6(39) × 10-19 Hz s-1 are measured at the epoch of MJD 50888(30). After PSR B1821-24A, this is only the second glitch ever observed in a millisecond pulsar, with a fractional size in frequency of Δν/ν = 2.5(1) × 10-12, which is several times smaller than the previous smallest glitch. PSR J0613-0200 is used in gravitational wave searches with pulsar timing arrays, and is to date only the second such pulsar to have experienced a glitch in a combined 886 pulsar-years of observations. We find that accurately modelling the glitch does not impact the timing precision for pulsar timing array applications. We estimate that for the current set of millisecond pulsars included in the International Pulsar Timing Array, there is a probability of ˜50 per cent that another glitch will be observed in a timing array pulsar within 10 years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yang; Liu, Xiao Wei; Zhang, Hai Feng, E-mail: wy3121685@163.com
In this work, we present a method of fabricating super-hydrophobic surface on aluminum alloy substrate. The etching of aluminum surfaces has been performed using Beck's dislocation etchant for different time to create micrometer-sized irregular steps. An optimised etching time of 50 s is found to be essential before polytetrafluoroethylene (PTFE) coating, to obtain a highest water contact angle of 165±2° with a lowest contact angle hysteresis as low as 5±2°. The presence of patterned microstructure as revealed by scanning electron microscopy (SEM) together with the low surface energy ultrathin RF-sputtered PTFE films renders the aluminum alloy surfaces highly super-hydrophobic.
The exact fundamental solution for the Benes tracking problem
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam
2009-05-01
The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact fundamental solution for the FPKfe is derived for the state model of arbitrary dimension with Benes drift that requires only the computation of elementary transcendental functions and standard linear algebra techniques- no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary, discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are included, demonstrating its utility in practical implementation.
Cervera, R P; Garcia-Ximénez, F
2003-10-01
The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 micro m; II: diameter 200-299 micro m; III: diameter >/==" BORDER="0">300 micro m). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate buffered saline. For the two-step procedure, prior to vitrification, blastocysts were pre- equilibrated in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These result demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified by use of a two-step procedure. The similarity of vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolutions levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
Durham, W.B.; McKinnon, W.B.; Stern, L.A.
2005-01-01
Hydrostatic compaction of granulated water ice was measured in laboratory experiments at temperatures 77 K to 120 K. We performed step-wise hydrostatic pressurization tests on 5 samples to maximum pressures P of 150 MPa, using relatively tight (0.18-0.25 mm) and broad (0.25-2.0 mm) starting grain-size distributions. Compaction change of volume is highly nonlinear in P, typical for brittle, granular materials. No time-dependent creep occurred on the lab time scale. Significant residual porosity (???0.10) remains even at highest P. Examination by scanning electron microscopy (SEM) reveals a random configuration of fractures and broad distribution of grain sizes, again consistent with brittle behavior. Residual porosity appears as smaller, well-supported micropores between ice fragments. Over the interior pressures found in smaller midsize icy satellites and Kuiper Belt objects (KBOs), substantial porosity can be sustained over solar system history in the absence of significant heating and resultant sintering. Copyright 2005 by the American Geophysical Union.
Temporal compressive imaging for video
NASA Astrophysics Data System (ADS)
Zhou, Qun; Zhang, Linxia; Ke, Jun
2018-01-01
In many situations, imagers are required to have higher imaging speed, such as gunpowder blasting analysis and observing high-speed biology phenomena. However, measuring high-speed video is a challenge to camera design, especially, in infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that, 8 unique high-speed temporal frames will be obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rates is increased by 8 times. Two methods, two-step iterative shrinkage/threshold (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded mask to reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
Voter dynamics on an adaptive network with finite average connectivity
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Abhishek; Schmittmann, Beate
2009-03-01
We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.
NASA Astrophysics Data System (ADS)
Lee, Ki Bang
2006-11-01
Two-step activation of paper batteries has been successfully demonstrated to provide quick activation and to supply high power to credit card-sized biosystems on a plastic chip. A stack of a magnesium layer (an anode), a fluid guide (absorbent paper), a highly doped filter paper with copper chloride (a cathode) and a copper layer as a current collector is laminated between two transparent plastic films into a high power biofluid- and water-activated battery. The battery is activated by two-step activation: (1) after placing a drop of biofluid/water-based solution on the fluid inlet, the surface tension first drives the fluid to soak the fluid guide; (2) the fluid in the fluid guide then penetrates into the heavily doped filter paper with copper chloride to start the battery reaction. The fabricated half credit card-sized battery was activated by saliva, urine and tap water and delivered a maximum voltage of 1.56 V within 10 s after activation and a maximum power of 15.6 mW. When 10 kΩ and 1 KΩ loads are used, the service time with water, urine and saliva is measured as more than 2 h. An in-series battery of 3 V has been successfully tested to power two LEDs (light emitting diodes) and an electric driving circuit. As such, this high power paper battery could be integrated with on-demand credit card-sized biosystems such as healthcare test kits, biochips, lab-on-a-chip, DNA chips, protein chips or even test chips for water quality checking or chemical checking.
NASA Astrophysics Data System (ADS)
Bordiga, M.; Henderiks, J.; Tori, F.; Monechi, S.; Fenero, R.; Legarda-Lisarri, A.; Thomas, E.
2015-09-01
The biotic response of calcareous nannoplankton to environmental and climatic changes during the Eocene-Oligocene transition was investigated at a high resolution at Ocean Drilling Program (ODP) Site 1263 (Walvis Ridge, southeast Atlantic Ocean) and compared with a lower-resolution benthic foraminiferal record. During this time interval, global climate, which had been warm under high levels of atmospheric CO2 (pCO2) during the Eocene, transitioned into the cooler climate of the Oligocene, at overall lower pCO2. At Site 1263, the absolute nannofossil abundance (coccoliths per gram of sediment; N g-1) and the mean coccolith size decreased distinctly after the E-O boundary (EOB; 33.89 Ma), mainly due to a sharp decline in abundance of large-sized Reticulofenestra and Dictyococcites, occurring within a time span of ~ 47 kyr. Carbonate dissolution did not vary much across the EOB; thus, the decrease in abundance and size of nannofossils may reflect an overall decrease in their export production, which could have led to variations in the food availability for benthic foraminifers. The benthic foraminiferal assemblage data are consistent with a global decline in abundance of rectilinear species with complex apertures in the latest Eocene (~ 34.5 Ma), potentially reflecting changes in the food source, i.e., phytoplankton. This was followed by a transient increased abundance of species indicative of seasonal delivery of food to the sea floor (Epistominella spp.; ~ 33.9-33.4 Ma), with a short peak in overall food delivery at the EOB (buliminid taxa; ~ 33.8 Ma). Increased abundance of Nuttallides umbonifera (at ~ 33.3 Ma) indicates the presence of more corrosive bottom waters and possibly the combined arrival of less food at the sea floor after the second step of cooling (Step 2). The most important changes in the calcareous nannofossil and benthic communities occurred ~ 120 kyr after the EOB. There was no major change in nannofossil abundance or assemblage composition at Site 1263 after Step 2 although benthic foraminifera indicate more corrosive bottom waters during this time. During the onset of latest-Eocene-earliest-Oligocene climate change, marine phytoplankton thus showed high sensitivity to fast-changing conditions as well as to a possibly enhanced, pulsed nutrient supply and to the crossing of a climatic threshold (e.g., pCO2 decline, high-latitude cooling and changes in ocean circulation).
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither the parameter selection and thresholds estimation nor the process of cost function selection and complex gradient computation. Computer simulations have been carried out to suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and then the proposed algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.
More realistic power estimation for new user, active comparator studies: an empirical example.
Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til
2016-04-01
Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is sequential loss of power with implementation of study design options minimizing bias. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of same drug, (iii) exclude prevalent cancers, (iv) exclude patients age <66 years and (v) censor for treatment changes during follow-up. Power to detect hazard ratios (effect measure strongly driven by the number of events) ≥ 2.0 estimated after step 5 was compared with the naïve power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, outcomes dropped most after excluding patients with prevalent cancer and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user active-comparator studies, one should be mindful how steps minimizing bias affect sample-size, number of outcomes and person-time. While actual numbers will depend on specific settings, application of generic losses in percentages will improve estimates of power compared with the naive approach mostly ignoring steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
Method and apparatus for sizing and separating warp yarns using acoustical energy
Sheen, S.H.; Chien, H.T.; Raptis, A.C.; Kupperman, D.S.
1998-05-19
A slashing process is disclosed for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns. 2 figs.
Combinative Particle Size Reduction Technologies for the Production of Drug Nanocrystals
Salazar, Jaime; Müller, Rainer H.; Möschwitzer, Jan P.
2014-01-01
Nanosizing is a suitable method to enhance the dissolution rate and therefore the bioavailability of poorly soluble drugs. The success of the particle size reduction processes depends on critical factors such as the employed technology, equipment, and drug physicochemical properties. High pressure homogenization and wet bead milling are standard comminution techniques that have been already employed to successfully formulate poorly soluble drugs and bring them to market. However, these techniques have limitations in their particle size reduction performance, such as long production times and the necessity of employing a micronized drug as the starting material. This review article discusses the development of combinative methods, such as the NANOEDGE, H 96, H 69, H 42, and CT technologies. These processes were developed to improve the particle size reduction effectiveness of the standard techniques. These novel technologies can combine bottom-up and/or top-down techniques in a two-step process. The combinative processes lead in general to improved particle size reduction effectiveness. Faster production of drug nanocrystals and smaller final mean particle sizes are among the main advantages. The combinative particle size reduction technologies are very useful formulation tools, and they will continue acquiring importance for the production of drug nanocrystals. PMID:26556191
Quantification of the evolution of firm size distributions due to mergers and acquisitions.
Lera, Sandro Claudio; Sornette, Didier
2017-01-01
The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.
MCNP Output Data Analysis with ROOT (MODAR)
NASA Astrophysics Data System (ADS)
Carasco, C.
2010-06-01
MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data issued by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows to take into account the detection system time resolution (which is not possible with MCNP) as well as detectors energy response function and counting statistics in a straightforward way. Program summaryProgram title: MODAR Catalogue identifier: AEGA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 155 373 No. of bytes in distributed program, including test data, etc.: 14 815 461 Distribution format: tar.gz Programming language: C++ Computer: Most Unix workstations and PC Operating system: Most Unix systems, Linux and windows, provided the ROOT package has been installed. Examples where tested under Suse Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740 bins histograms, allocates about 60 MB. These data are running under ROOT and include consumption by ROOT itself. Classification: 17.6 External routines: ROOT version 5.24.00 ( http://root.cern.ch/drupal/) Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small but is not efficient when the size of the simulated data is large, for example when time-energy correlations are studied in detail such as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposit in detectors is calculated with MCNP in a two-step process involving type-5 then type-8 tallies. In the first step, the photon flux energy spectrum associated to a time region is selected and serves as a source energy distribution for the second step. Thus, several files must be manipulated before getting the result, which can be time consuming if one needs to study several time regions or different detectors performances. In the same way, modeling counting statistics obtained in a limited acquisition time requires several steps and can also be time consuming. Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two-dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep a user friendly analysis tool, all processing and data display can be done by means of ROOT Graphical User Interface. Specific routines have been written to include detectors finite time resolution and energy response function as well as counting statistics in a straightforward way. 
Additional comments: The possibility of adding tallies has also been incorporated in MODAR in order to describe systems in which the signal from several detectors can be summed. Moreover, MODAR can be adapted to handle other problems involving two-dimensional data. Running time: The CPU time needed to smear a two-dimensional histogram depends on the size of the histogram. In the presented example, the time-energy smearing of one of the 139×740 two-dimensional histograms takes 3 minutes with a DELL computer equipped with INTEL Core 2.
A flow-free droplet-based device for high throughput polymorphic crystallization.
Yang, Shih-Mo; Zhang, Dapeng; Chen, Wang; Chen, Shih-Chi
2015-06-21
Crystallization is one of the most crucial steps in the process of pharmaceutical formulation. In recent years, emulsion-based platforms have been developed and broadly adopted to generate high quality products. However, these conventional approaches such as stirring are still limited in several aspects, e.g., unstable crystallization conditions and broad size distribution; besides, only simple crystal forms can be produced. In this paper, we present a new flow-free droplet-based formation process for producing highly controlled crystallization with two examples: (1) NaCl crystallization reveals the ability to package saturated solution into nanoliter droplets, and (2) glycine crystallization demonstrates the ability to produce polymorphic crystallization forms by controlling the droplet size and temperature. In our process, the saturated solution automatically fills the microwell array powered by degassed bulk PDMS. A critical oil covering step is then introduced to isolate the saturated solution and control the water dissolution rate. Utilizing surface tension, the solution is uniformly packaged in the form of thousands of isolating droplets at the bottom of each microwell of 50-300 μm diameter. After water dissolution, individual crystal structures are automatically formed inside the microwell array. This approach facilitates the study of different glycine growth processes: α-form generated inside the droplets and γ-form generated at the edge of the droplets. With precise temperature control over nanoliter-sized droplets, the growth of ellipsoidal crystalline agglomerates of glycine was achieved for the first time. Optical and SEM images illustrate that the ellipsoidal agglomerates consist of 2-5 μm glycine clusters with inner spiral structures of ~35 μm screw pitch. Lastly, the size distribution of spherical crystalline agglomerates (SAs) produced from microwells of different sizes was measured to have a coefficient variation (CV) of less than 5%, showing crystal sizes can be precisely controlled by microwell sizes with high uniformity. This new method can be used to reliably fabricate monodispersed crystals for pharmaceutical applications.
Single cardiac ventricular myosins are autonomous motors
Wang, Yihua; Yuan, Chen-Ching; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta
2018-01-01
Myosin transduces ATP free energy into mechanical work in muscle. Cardiac muscle has dynamically wide-ranging power demands on the motor as the muscle changes modes in a heartbeat from relaxation, via auxotonic shortening, to isometric contraction. The cardiac power output modulation mechanism is explored in vitro by assessing single cardiac myosin step-size selection versus load. Transgenic mice express human ventricular essential light chain (ELC) in wild- type (WT), or hypertrophic cardiomyopathy-linked mutant forms, A57G or E143K, in a background of mouse α-cardiac myosin heavy chain. Ensemble motility and single myosin mechanical characteristics are consistent with an A57G that impairs ELC N-terminus actin binding and an E143K that impairs lever-arm stability, while both species down-shift average step-size with increasing load. Cardiac myosin in vivo down-shifts velocity/force ratio with increasing load by changed unitary step-size selections. Here, the loaded in vitro single myosin assay indicates quantitative complementarity with the in vivo mechanism. Both have two embedded regulatory transitions, one inhibiting ADP release and a second novel mechanism inhibiting actin detachment via strain on the actin-bound ELC N-terminus. Competing regulators filter unitary step-size selection to control force-velocity modulation without myosin integration into muscle. Cardiac myosin is muscle in a molecule. PMID:29669825
Effect of Microstructure on Time Dependent Fatigue Crack Growth Behavior In a P/M Turbine Disk Alloy
NASA Technical Reports Server (NTRS)
Telesman, Ignacy J.; Gabb, T. P.; Bonacuse, P.; Gayda, J.
2008-01-01
A study was conducted to determine the processes which govern hold time crack growth behavior in the LSHR disk P/M superalloy. Nineteen different heat treatments of this alloy were evaluated by systematically controlling the cooling rate from the supersolvus solutioning step and applying various single and double step aging treatments. The resulting hold time crack growth rates varied by more than two orders of magnitude. It was shown that the associated stress relaxation behavior for these heat treatments was closely correlated with the crack growth behavior. As stress relaxation increased, the hold time crack growth resistance was also increased. The size of the tertiary gamma' in the general microstructure was found to be the key microstructural variable controlling both the hold time crack growth behavior and stress relaxation. No relationship between the presence of grain boundary M23C6 carbides and hold time crack growth was identified which further brings into question the importance of the grain boundary phases in determining hold time crack growth behavior. The linear elastic fracture mechanics parameter, Kmax, is unable to account for visco-plastic redistribution of the crack tip stress field during hold times and thus is inadequate for correlating time dependent crack growth data. A novel methodology was developed which captures the intrinsic crack driving force and was able to collapse hold time crack growth data onto a single curve.
Quantifying in-stream nitrate reaction rates using continuously-collected water quality data
Matthew Miller; Anthony Tesoriero; Paul Capel
2016-01-01
High frequency in situ nitrate data from three streams of varying hydrologic condition, land use, and watershed size were used to quantify the mass loading of nitrate to streams from two sources â groundwater discharge and event flow â at a daily time step for one year. These estimated loadings were used to quantify temporally-variable in-stream nitrate processing ...
Social Media: More Than Just a Communications Medium
2012-03-14
video-hosting web services with the recognition that “Internet-based capabilities are integral to operations across the Department of Defense.”10...as DoD and the government as a whole, the U.S. Army’s recognition of social media’s unique relationship to time and speed is a step forward toward...populated size of social media entities, Alexa , the leader in free global web analytics, provides an updated list of the top 500 websites on the Internet
Development of 3D Oxide Fuel Mechanics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, B. W.; Casagranda, A.; Pitts, S. A.
This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.
Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth
NASA Astrophysics Data System (ADS)
Bertino, Giulia; Gura, Anna; Dawber, Matthew
We performed a systematic study of SrRuO3 thin films grown on TiO2 terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated the step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. Also, we clearly observe a stronger influence of the step size of the substrate on the evolution of the SRO film surface with respect to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform ``fish skin'' structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for efficient implementation of Hodgkin-Huxley-based (H-H) model of a neural network on FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is H-H model complexity that puts limits on the network size and on the execution speed. However, basics of the original model cannot be compromised when effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed different techniques such as sharing resources to preserve the details of model as well as increasing the network size in addition to keeping the network execution speed close to real time while having high precision. Implementation of a two mini-columns network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristic of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. Additional to inherent properties of FPGA, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for study on neural control of cognitive robots and systems as well.
Wester, T; Borg, H; Naji, H; Stenström, P; Westbacke, G; Lilja, H E
2014-09-01
Serial transverse enteroplasty (STEP) was first described in 2003 as a method for lengthening and tapering of the bowel in short bowel syndrome. The aim of this multicentre study was to review the outcome of a Swedish cohort of children who underwent STEP. All children who had a STEP procedure at one of the four centres of paediatric surgery in Sweden between September 2005 and January 2013 were included in this observational cohort study. Demographic details, and data from the time of STEP and at follow-up were collected from the case records and analysed. Twelve patients had a total of 16 STEP procedures; four children underwent a second STEP. The first STEP was performed at a median age of 5·8 (range 0·9-19·0) months. There was no death at a median follow-up of 37·2 (range 3·0-87·5) months and no child had small bowel transplantation. Seven of the 12 children were weaned from parenteral nutrition at a median of 19·5 (range 2·3-42·9) months after STEP. STEP is a useful procedure for selected patients with short bowel syndrome and seems to facilitate weaning from parenteral nutrition. At mid-term follow-up a majority of the children had achieved enteral autonomy. The study is limited by the small sample size and lack of a control group. © 2014 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...
2017-06-09
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps / cpu-time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order tomore » achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events -- allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.« less
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-10-01
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events-allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
In situ formation deposited ZnO nanoparticles on silk fabrics under ultrasound irradiation.
Khanjani, Somayeh; Morsali, Ali; Joo, Sang W
2013-03-01
Deposition of zinc(II) oxide (ZnO) nanoparticles on the surface of silk fabrics was prepared by sequential dipping steps in alternating bath of potassium hydroxide and zinc nitrate under ultrasound irradiation. This coating involves in situ generation and deposition of ZnO in a one step. The effects of ultrasound irradiation, concentration and sequential dipping steps on growth of the ZnO nanoparticles have been studied. Results show a decrease in the particles size as increasing power of ultrasound irradiation. Also, increasing of the concentration and sequential dipping steps increase particle size. The physicochemical properties of the nanoparticles were determined by powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and wavelength dispersive X-ray (WDX). Copyright © 2012 Elsevier B.V. All rights reserved.
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
2017-09-17
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 themore » factor depends on the second power of the time step, while for method 2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 themore » factor depends on the second power of the time step, while for method 2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.« less
Unique characteristics of motor adaptation during walking in young children.
Musselman, Kristin E; Patrick, Susan K; Vasudevan, Erin V L; Bastian, Amy J; Yang, Jaynie F
2011-05-01
Children show precocious ability in the learning of languages; is this the case with motor learning? We used split-belt walking to probe motor adaptation (a form of motor learning) in children. Data from 27 children (ages 8-36 mo) were compared with those from 10 adults. Children walked with the treadmill belts at the same speed (tied belt), followed by walking with the belts moving at different speeds (split belt) for 8-10 min, followed again by tied-belt walking (postsplit). Initial asymmetries in temporal coordination (i.e., double support time) induced by split-belt walking were slowly reduced, with most children showing an aftereffect (i.e., asymmetry in the opposite direction to the initial) in the early postsplit period, indicative of learning. In contrast, asymmetries in spatial coordination (i.e., center of oscillation) persisted during split-belt walking and no aftereffect was seen. Step length, a measure of both spatial and temporal coordination, showed intermediate effects. The time course of learning in double support and step length was slower in children than in adults. Moreover, there was a significant negative correlation between the size of the initial asymmetry during early split-belt walking (called error) and the aftereffect for step length. Hence, children may have more difficulty learning when the errors are large. The findings further suggest that the mechanisms controlling temporal and spatial adaptation are different and mature at different times.
Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B
2017-01-01
Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken account of in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
Steps in the open space planning process
Stephanie B. Kelly; Melissa M. Ryan
1995-01-01
This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied various size communities. The intent is to provide a framework to develop an open space plan that meets Massachusetts requirements for funding of open space acquisition.
Variable-mesh method of solving differential equations
NASA Technical Reports Server (NTRS)
Van Wyk, R.
1969-01-01
Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
A simple, compact, and rigid piezoelectric step motor with large step size.
Wang, Qi; Lu, Qingyou
2009-08-01
We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and any direction operability. Although tested in room temperature, it is believed to work in low temperatures, owing to its loose operation conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamp holds a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.
A simple, compact, and rigid piezoelectric step motor with large step size
NASA Astrophysics Data System (ADS)
Wang, Qi; Lu, Qingyou
2009-08-01
We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and any direction operability. Although tested in room temperature, it is believed to work in low temperatures, owing to its loose operation conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamp holds a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.
An improved affine projection algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
Affine projection algorithm is a signal reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithm. There are two factors that affect the performance of the algorithm, which are step factor and the projection length. In the paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it can get smaller steady-state error and faster convergence speed. Simulation results can prove that its performance is superior to the traditional affine projection algorithm and in the active noise control (ANC) applications, the new algorithm can get very good results.
Exciton size and binding energy limitations in one-dimensional organic materials.
Kraner, S; Scholz, R; Plasser, F; Koerner, C; Leo, K
2015-12-28
In current organic photovoltaic devices, the loss in energy caused by the charge transfer step necessary for exciton dissociation leads to a low open circuit voltage, being one of the main reasons for rather low power conversion efficiencies. A possible approach to avoid these losses is to tune the exciton binding energy to a value of the order of thermal energy, which would lead to free charges upon absorption of a photon, and therefore increase the power conversion efficiency towards the Shockley-Queisser limit. We determine the size of the excitons for different organic molecules and polymers by time dependent density functional theory calculations. For optically relevant transitions, the exciton size saturates around 0.7 nm for one-dimensional molecules with a size longer than about 4 nm. For the ladder-type polymer poly(benzimidazobenzophenanthroline), we obtain an exciton binding energy of about 0.3 eV, serving as a lower limit of the exciton binding energy for the organic materials investigated. Furthermore, we show that charge transfer transitions increase the exciton size and thus identify possible routes towards a further decrease of the exciton binding energy.
Role of initial correlation in coarsening of a ferromagnet
NASA Astrophysics Data System (ADS)
Chakraborty, Saikat; Das, Subir K.
2015-06-01
We study the dynamics of ordering in ferromagnets via Monte Carlo simulations of the Ising model, employing the Glauber spin-flip mechanism, in space dimensions d = 2 and 3, on square and simple cubic lattices. Results for the persistence probability and the domain growth are discussed for quenches to various temperatures (Tf) below the critical one (Tc), from different initial temperatures Ti ≥ Tc. In long time limit, for Ti>Tc, the persistence probability exhibits power-law decay with exponents θ ≃ 0.22 and ≃ 0.18 in d = 2 and 3, respectively. For finite Ti, the early time behavior is a different power-law whose life-time diverges and exponent decreases as Ti → Tc. The two steps are connected via power-law as a function of domain length and the crossover to the second step occurs when this characteristic length exceeds the equilibrium correlation length at T = Ti. Ti = Tc is expected to provide a new universality class for which we obtain θ ≡ θc ≃ 0.035 in d = 2 and ≃0.105 in d = 3. The time dependence of the average domain size ℓ, however, is observed to be rather insensitive to the choice of Ti.
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. The reasons why researchers have studied are different; to use grammar induction as the first stage in building large treebanks or to make up better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally. They refine the grammar while keeping their computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, our algorithm uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speeds. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, our algorithm reduces required time for learning. This constant-time learning considerably affects learning time for larger grammars. We also reports results of evaluation of criteria to choose nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria to decide which nonterminal is the best, we evaluate them by learning experiments.
Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing
NASA Technical Reports Server (NTRS)
Batina, John T.
1991-01-01
Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements have been developed recently to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.
Dynamics of upper mantle rocks decompression melting above hot spots under continental plates
NASA Astrophysics Data System (ADS)
Perepechko, Yury; Sorokin, Konstantin; Sharapov, Victor
2014-05-01
Numeric 2D simulation of the decompression melting above the hot spots (HS) was accomplished under the following conditions: initial temperature within crust mantle section was postulated; thickness of the metasomatized lithospheric mantle is determined by the mantle rheology and position of upper asthenosphere boundary; upper and lower boundaries were postulated to be not permeable and the condition for adhesion and the distribution of temperature (1400-2050°C); lateral boundaries imitated infinity of layer. Sizes and distribution of lateral points, their symmetry, and maximum temperature varied between the thermodynamic condition for existences of perovskite - majorite transition and its excess above transition temperature. Problem was solved numerically a cell-vertex finite volume method for thermo hydrodynamic problems. For increasing convergence of iterative process the method of lower relaxation with different value of relaxation parameter for each equation was used. The method of through calculation was used for the increase in the computing rate for the two-layered upper mantle - lithosphere system. Calculated region was selected as 700 x (2100-4900) km. The time step for the study of the asthenosphere dynamics composed 0.15-0.65 Ma. The following factors controlling the sizes and melting degree of the convective upper mantle, are shown: a) the initial temperature distribution along the section of upper mantleb) sizes and the symmetry of HS, c) temperature excess within the HS above the temperature on the upper and lower mantle border TB=1500-2000oC with 5-15% deviation but not exceed 2350oC. It is found, that appearance of decompression melting with HS presence initiate primitive mantle melting at TB > of 1600oC. Initial upper mantle heating influence on asthenolens dimensions with a constant HS size is controlled mainly by decompression melting degree. Thus, with lateral sizes of HS = 400 km the decompression melting appears at TB > 1600oC and HS temperature (THS) > 1900oC asthenolens size ~700 km. When THS = of 2000oC the maximum melting degree of the primitive mantle is near 40%. An increase in the TB > 1900oC the maximum degree of melting could rich 100% with the same size of decompression melting zone (700 km). We examined decompression melting above the HS having LHS = 100 km - 780 km at a TB 1850- 2100oC with the thickness of lithosphere = 100 km.It is shown that asthenolens size (Lln) does not change substantially: Lln=700 km at LHS = of 100 km; Lln= 800 km at LHS = of 780 km. In presence of asymmetry of large HS the region of advection is developed above the HS maximum with the formation of asymmetrical cell. Influence of lithospheric plate thicknesses on appearance and evolution of asthenolens above the HS were investigated for the model stepped profile for the TB ≤ of 1750oS with Lhs = 100km and maximum of THS =2350oC. With an increase of TB the Lln difference beneath lithospheric steps is leveled with retention of a certain difference to melting degrees and time of the melting appearance a top of the HS. RFBR grant 12-05-00625.
NASA Astrophysics Data System (ADS)
Liu, Di; Mishra, Ashok K.; Yu, Zhongbo
2016-07-01
This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of SVM relies more on the initial length of training set than other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved to be efficient to improve SVM with observed data either at each time step or at a flexible time steps. The EnKF technique can reach its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for the multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
An atomistic simulation scheme for modeling crystal formation from solution.
Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk
2006-01-14
We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is based on assuming full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as prototypes for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
Test systems of the STS-XYTER2 ASIC: from wafer-level to in-system verification
NASA Astrophysics Data System (ADS)
Kasinski, Krzysztof; Zubrzycka, Weronika
2016-09-01
The STS/MUCH-XYTER2 ASIC is a full-size prototype chip for the Silicon Tracking System (STS) and Muon Chamber (MUCH) detectors in the new fixed-target experiment Compressed Baryonic Matter (CBM) at FAIR-center, Darmstadt, Germany. The STS assembly includes more than 14000 ASICs. The complicated, time-consuming, multi-step assembly process of the detector building blocks and tight quality assurance requirements impose several intermediate testing to be performed for verifying crucial assembly steps (e.g. custom microcable tab-bonding before wire-bonding to the PCB) and - if necessary - identifying channels or modules for rework. The chip supports the multi-level testing with different probing / contact methods (wafer probe-card, pogo-probes, in-system tests). A huge number of ASICs to be tested restricts the number and kind of tests possible to be performed within a reasonable time. The proposed architectures of test stand equipment and a brief summary of methodologies are presented in this paper.
Chemically etched ultrahigh-Q wedge-resonator on a silicon chip
NASA Astrophysics Data System (ADS)
Lee, Hansuek; Chen, Tong; Li, Jiang; Yang, Ki Youl; Jeon, Seokmin; Painter, Oskar; Vahala, Kerry J.
2012-06-01
Ultrahigh-Q optical resonators are being studied across a wide range of fields, including quantum information, nonlinear optics, cavity optomechanics and telecommunications. Here, we demonstrate a new resonator with a record Q-factor of 875 million for on-chip devices. The fabrication of our device avoids the requirement for a specialized processing step, which in microtoroid resonators has made it difficult to control their size and achieve millimetre- and centimetre-scale diameters. Attaining these sizes is important in applications such as microcombs and potentially also in rotation sensing. As an application of size control, stimulated Brillouin lasers incorporating our device are demonstrated. The resonators not only set a new benchmark for the Q-factor on a chip, but also provide, for the first time, full compatibility of this important device class with conventional semiconductor processing. This feature will greatly expand the range of possible `system on a chip' functions enabled by ultrahigh-Q devices.
NASA Astrophysics Data System (ADS)
Hartman, John; Kirby, Brian
2017-03-01
Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
ERIC Educational Resources Information Center
ROSEN, ELLEN F.; STOLUROW, LAWRENCE M.
IN ORDER TO FIND A GOOD PREDICTOR OF EMPIRICAL DIFFICULTY, AN OPERATIONAL DEFINITION OF STEP SIZE, TEN PROGRAMER-JUDGES RATED CHANGE IN COMPLEXITY IN TWO VERSIONS OF A MATHEMATICS PROGRAM, AND THESE RATINGS WERE THEN COMPARED WITH MEASURES OF EMPIRICAL DIFFICULTY OBTAINED FROM STUDENT RESPONSE DATA. THE TWO VERSIONS, A 54 FRAME BOOKLET AND A 35…
Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6
NASA Astrophysics Data System (ADS)
Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.
2017-01-01
This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ˜ 100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ˜ 10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.
Predict the fatigue life of crack based on extended finite element method and SVR
NASA Astrophysics Data System (ADS)
Song, Weizhen; Jiang, Zhansi; Jiang, Hui
2018-05-01
Using extended finite element method (XFEM) and support vector regression (SVR) to predict the fatigue life of plate crack. Firstly, the XFEM is employed to calculate the stress intensity factors (SIFs) with given crack sizes. Then predicetion model can be built based on the function relationship of the SIFs with the fatigue life or crack length. Finally, according to the prediction model predict the SIFs at different crack sizes or different cycles. Because of the accuracy of the forward Euler method only ensured by the small step size, a new prediction method is presented to resolve the issue. The numerical examples were studied to demonstrate the proposed method allow a larger step size and have a high accuracy.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistic. Accordingly, optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms the filter coefficients are partially updated which reduce the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms have good convergence speed, low steady state mean square error (MSE), and low computational complexity features. We demonstrate the good performance of the proposed algorithms through several simulations in system identification scenario.
Huynh, Alexis K; Lee, Martin L; Farmer, Melissa M; Rubenstein, Lisa V
2016-10-21
Stepped wedge designs have gained recognition as a method for rigorously assessing implementation of evidence-based quality improvement interventions (QIIs) across multiple healthcare sites. In theory, this design uses random assignment of sites to successive QII implementation start dates based on a timeline determined by evaluators. However, in practice, QII timing is often controlled more by site readiness. We propose an alternate version of the stepped wedge design that does not assume the randomized timing of implementation while retaining the method's analytic advantages and applying to a broader set of evaluations. To test the feasibility of a nonrandomized stepped wedge design, we developed simulated data on patient care experiences and on QII implementation that had the structures and features of the expected data from a planned QII. We then applied the design in anticipation of performing an actual QII evaluation. We used simulated data on 108,000 patients to model nonrandomized stepped wedge results from QII implementation across nine primary care sites over 12 quarters. The outcome we simulated was change in a single self-administered question on access to care used by Veterans Health Administration (VA), based in the United States, as part of its quarterly patient ratings of quality of care. Our main predictors were QII exposure and time. Based on study hypotheses, we assigned values of 4 to 11 % for improvement in access when sites were first exposed to implementation and 1 to 3 % improvement in each ensuing time period thereafter when sites continued with implementation. We included site-level (practice size) and respondent-level (gender, race/ethnicity) characteristics that might account for nonrandomized timing in site implementation of the QII. We analyzed the resulting data as a repeated cross-sectional model using HLM 7 with a three-level hierarchical data structure and an ordinal outcome. Levels in the data structure included patient ratings, timing of adoption of the QII, and primary care site. We were able to demonstrate a statistically significant improvement in adoption of the QII, as postulated in our simulation. The linear time trend while sites were in the control state was not significant, also as expected in the real life scenario of the example QII. We concluded that the nonrandomized stepped wedge design was feasible within the parameters of our planned QII with its data structure and content. Our statistical approach may be applicable to similar evaluations.
Continuous-Flow In-Line Solvent-Swap Crystallization of Vitamin D3
2017-01-01
A continuous tandem in-line evaporation–crystallization is presented. The process includes an in-line solvent-swap step, suitable to be coupled to a capillary based cooler. As a proof of concept, this setup is tested in a direct in-line acetonitrile mediated crystallization of Vitamin D3. This configuration is suitable to be coupled to a new end-to-end continuous microflow synthesis of Vitamin D3. By this procedure, vitamin particles can be crystallized in continuous flow and isolated using an in-line continuous filtration step. In one run in just 1 min of cooling time, ∼50% (w/w) crystals of Vitamin D3 are directly obtained. Furthermore, the polymorphic form as well as crystals shape and size properties are described in this paper.
Experimental studies of systematic multiple-energy operation at HIMAC synchrotron
NASA Astrophysics Data System (ADS)
Mizushima, K.; Katagiri, K.; Iwata, Y.; Furukawa, T.; Fujimoto, T.; Sato, S.; Hara, Y.; Shirai, T.; Noda, K.
2014-07-01
Multiple-energy synchrotron operation providing carbon-ion beams with various energies has been used for scanned particle therapy at NIRS. An energy range from 430 to 56 MeV/u and about 200 steps within this range are required to vary the Bragg peak position for effective treatment. The treatment also demands the slow extraction of beam with highly reliable properties, such as spill, position and size, for all energies. We propose an approach to generating multiple-energy operation meeting these requirements within a short time. In this approach, the device settings at most energy steps are determined without manual adjustments by using systematic parameter tuning depending on the beam energy. Experimental verification was carried out at the HIMAC synchrotron, and its results proved that this approach can greatly reduce the adjustment period.
Bitwise efficiency in chaotic models
Düben, Peter; Palmer, Tim
2017-01-01
Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz’s prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit ‘double’ floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model. PMID:28989303
Bitwise efficiency in chaotic models
NASA Astrophysics Data System (ADS)
Jeffress, Stephen; Düben, Peter; Palmer, Tim
2017-09-01
Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz's prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit `double' floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model.
The potential role of real-time geodetic observations in tsunami early warning
NASA Astrophysics Data System (ADS)
Tinti, Stefano; Armigliato, Alberto
2016-04-01
Tsunami warning systems (TWS) have the final goal to launch a reliable alert of an incoming dangerous tsunami to coastal population early enough to allow people to flee from the shore and coastal areas according to some evacuation plans. In the last decade, especially after the catastrophic 2004 Boxing Day tsunami in the Indian Ocean, much attention has been given to filling gaps in the existing TWSs (only covering the Pacific Ocean at that time) and to establishing new TWSs in ocean regions that were uncovered. Typically, TWSs operating today work only on earthquake-induced tsunamis. TWSs estimate quickly earthquake location and size by real-time processing seismic signals; on the basis of some pre-defined "static" procedures (either based on decision matrices or on pre-archived tsunami simulations), assess the tsunami alert level on a large regional scale and issue specific bulletins to a pre-selected recipients audience. Not unfrequently these procedures result in generic alert messages with little value. What usually operative TWSs do not do, is to compute earthquake focal mechanism, to calculate the co-seismic sea-floor displacement, to assess the initial tsunami conditions, to input these data into tsunami simulation models and to compute tsunami propagation up to the threatened coastal districts. This series of steps is considered nowadays too time consuming to provide the required timely alert. An equivalent series of steps could start from the same premises (earthquake focal parameters) and reach the same result (tsunami height at target coastal areas) by replacing the intermediate steps of real-time tsunami simulations with proper selection from a large archive of pre-computed tsunami scenarios. The advantage of real-time simulations and of archived scenarios selection is that estimates are tailored to the specific occurring tsunami and alert can be more detailed (less generic) and appropriate for local needs. Both these procedures are still at an experimental or testing stage and haven't been implemented yet in any standard TWS operations. Nonetheless, this is seen to be the future and the natural TWS evolving enhancement. In this context, improvement of the real-time estimates of tsunamigenic earthquake focal mechanism is of fundamental importance to trigger the appropriate computational chain. Quick discrimination between strike-slip and thrust-fault earthquakes, and equally relevant, quick assessment of co-seismic on-fault slip distribution, are exemplary cases to which a real-time geodetic monitoring system can contribute significantly. Robust inversion of geodetic data can help to reconstruct the sea floor deformation pattern especially if two conditions are met: the source is not too far from network stations and is well covered azimuthally. These two conditions are sometimes hard to satisfy fully, but in certain regions, like the Mediterranean and the Caribbean sea, this is quite possible due to the limited size of the ocean basins. Close cooperation between the Global Geodetic Observing System (GGOS) community, seismologists, tsunami scientists and TWS operators is highly recommended to obtain significant progresses in the quick determination of the earthquake source, which can trigger a timely estimation of the ensuing tsunami and a more reliable and detailed assessment of the tsunami size at the coast.
Kajiwara, Kenji; Yamagami, Takuji; Ishikawa, Masaki; Yoshimatsu, Rika; Baba, Yasutaka; Nakamura, Yuko; Fukumoto, Wataru; Awai, Kazuo
2017-06-01
To evaluate the one step technique compared with the Seldinger technique in computed tomography (CT) fluoroscopy-guided percutaneous drainage of abdominal and pelvic abscess. Seventy-six consecutive patients (49 men, 27 women; mean age 63.5 years, range 19-87 years) with abdominal and pelvic abscess were included in this study. Drainages were performed with the one step (n = 46) and with the Seldinger (n = 48) technique between September 2012 and June 2014. The technical success and clinical success rates were 95.8% and 93.5%, respectively, for the one step group, and 97.8% and 95.7%, respectively, for the Seldinger group. The mean procedure time was significantly shorter with the one step than with the Seldinger method (15.0 ± 4.3 min, range 10-29 min vs. 21.0 ± 9.5 min, range 13-54 min, p < .01). The mean abscess size and depth were 73.4 ± 44.0 mm and 42.5 ± 19.3 mm, respectively, in the one step group, and 61.0 ± 22.8 mm and 35.0 ± 20.7 mm in the Seldinger group. The one step technique was easier and faster than the Seldinger technique. The effectiveness of both techniques was similar for the CT fluoroscopy-guided percutaneous drainage of abdominal and pelvic abscess.
Scanning near-field optical microscopy.
Vobornik, Dusan; Vobornik, Slavenka
2008-02-01
An average human eye can see details down to 0,07 mm in size. The ability to see smaller details of the matter is correlated with the development of the science and the comprehension of the nature. Today's science needs eyes for the nano-world. Examples are easily found in biology and medical sciences. There is a great need to determine shape, size, chemical composition, molecular structure and dynamic properties of nano-structures. To do this, microscopes with high spatial, spectral and temporal resolution are required. Scanning Near-field Optical Microscopy (SNOM) is a new step in the evolution of microscopy. The conventional, lens-based microscopes have their resolution limited by diffraction. SNOM is not subject to this limitation and can offer up to 70 times better resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetzel, D.; Shi, Y; Reffner, J
This reports the first detection of chemical heterogeneity in octenyl succinic anhydride modified single starch granules using a Fourier transform infrared (FT-IR) microspectroscopical technique that combines diffraction-limited infrared microspectroscopy with a step size that is less than the mask projected spot size focused on the plane of the sample. The high spatial resolution was achieved with the combination of the application of a synchrotron infrared source and the confocal image plane masking system of the double-pass single-mask Continuum{reg_sign} infrared microscope. Starch from grains such as corn and wheat exists in granules. The size of the granules depends on the plantmore » producing the starch. Granules used in this study typically had a median size of 15 {micro}m. In the production of modified starch, an acid anhydride typically is reacted with OH groups of the starch polymer. The resulting esterification adds the ester carbonyl (1723 cm{sup -1}) organic functional group to the polymer and the hydrocarbon chain of the ester contributes to the CH{sub 2} stretching vibration to enhance the intensity of the 2927 cm{sup -1} band. Detection of the relative modifying population on a single granule was accomplished by ratioing the baseline adjusted peak area of the carbonyl functional group to that of a carbohydrate band. By stepping a confocally defined infrared beam as small as 5 {micro}m x 5 {micro}m across a starch granule 1 {micro}m at a time in both the x and y directions, the heterogeneity is detected with the highest possible spatial resolution.« less
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
West, A. C.; Novakowski, K. S.
2005-12-01
Regional groundwater flow models are rife with uncertainty. The three-dimensional flux vector fields must generally be inferred using inverse modelling from sparse measurements of hydraulic head, from measurements of hydraulic parameters at a scale that is miniscule in comparison to that of the domain, and from none to a very few measurements of recharge or discharge rate. Despite the inherent uncertainty in these models they are routinely used to delineate steady-state or time-of-travel capture zones for the purpose of wellhead protection. The latter are defined as the volume of the aquifer within which released particles will arrive at the well within the specified time and their delineation requires the additional step of dividing the magnitudes of the flux vectors by the assumed porosity to arrive at the ``average linear groundwater velocity'' vector field. Since the porosity is usually assumed constant over the domain one could be forgiven for thinking that the uncertainty introduced at this step is minor in comparison to the flow model calibration step. We consider this question when the porosity in question is fracture porosity in flat-lying sedimentary bedrock. We also consider whether or not the diffusive uptake of solute into the rock matrix which lies between the source and the production well reduces or enhances the uncertainty. To evaluate the uncertainty an aquifer cross section is conceptualized as an array of horizontal, randomly-spaced, parallel-plate fractures of random aperture, with adjacent horizontal fractures connected by vertical fractures again of random spacing and aperture. The source is assumed to be a continuous concentration (i.e. a dirichlet boundary condition) representing a leaking tank or a DNAPL pool, and the receptor is a fully pentrating well located in the down-gradient direction. In this context the time-of-travel capture zone is defined as the separation distance required such that the source does not contaminate the well beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies to estimate the parameters of the horizontal fractures the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures the uptake of solute from fractures into matrix cannot be relied upon to protect the production well from contamination.
Method of Lines Transpose an Implicit Vlasov Maxwell Solver for Plasmas
2015-04-17
boundary crossings should be rare. Numerical results for the Bennett pinch are given in Figure 9. In order to resolve large gradients near the center of the...contributing to the large error at the center of the beam due to large gradients there) and with the finite beam cut-off radius and the outflow boundary...usable time step size can be limited by the numerical accuracy of the method when there are large gradients (high-frequency content) in the solution. We
Siboni, Renaud; Joseph, Etienne; Blasco, Laurent; Barbe, Coralie; Bajolet, Odile; Diallo, Saïdou; Ohl, Xavier
2018-06-07
Management of septic non-union of the tibia requires debridement and excision of all infected bone and soft tissues. Various surgical techniques have been described to fill the bone defect. The "Induced Membrane" technique, described by A. C. Masquelet in 1986, is a two-step procedure using a PMMA cement spacer around which an induced membrane develops, to be used in the second step as a bone graft holder for the bone graft. The purpose of this study was to assess our clinical and radiological results with this technique in a series managed in our department. Nineteen traumatic septic non-unions of the tibia were included in a retrospective single-center study between November 2007 and November 2014. All patients were followed up clinically and radiologically to assess bone union time. Multivariate analysis was used to identify factors influencing union. The series comprised 4 women and 14 men (19 legs); mean age was 53.9 years. Vascularized flap transfer was required in 26% of cases before the first stage of treatment. All patients underwent a two-step procedure, with a mean interval of 7.9 weeks. Mean bone defect after the first step was 52.4mm. The bone graft was harvested from the iliac crest in the majority of cases (18/19). The bone was stabilized with an external fixator, locking plate or plaster cast after the second step. Mean follow-up was 34 months. Bony union rate was 89% (17/19), at a mean 16 months after step 2. Eleven patients underwent one or more (mean 2.1) complementary procedures. Severity of index fracture skin opening was significantly correlated with union time (Gustilo III vs. Gustilo I or II, p=0.028). A trend was found for negative impact of smoking on union (p=0.06). Bone defect size did not correlate with union rate or time. The union rate was acceptable, at 89%, but with longer union time than reported in the literature. Many factors could explain this: lack of rigid fixation after step 2 (in case of plaster cast or external fixator), or failure to cease smoking. The results showed that the induced membrane technique is effective in treating tibial septic non-union, but could be improved by stable fixation after the second step and by cessation of smoking. IV, Retrospective study. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Conrad, Philipp; Weber, Wilhelm; Jung, Alexander
2017-04-01
Hydropower plants are indispensable to stabilize the grid by reacting quickly to changes of the energy demand. However, an extension of the operating range towards high and deep part load conditions without fatigue of the hydraulic components is desirable to increase their flexibility. In this paper a model sized Francis turbine at low discharge operating conditions (Q/QBEP = 0.27) is analyzed by means of computational fluid dynamics (CFD). Unsteady two-phase simulations for two Thoma-number conditions are conducted. Stochastic pressure oscillations, observed on the test rig at low discharge, require sophisticated numerical models together with small time steps, large grid sizes and long simulation times to cope with these fluctuations. In this paper the BSL-EARSM model (Explicit Algebraic Reynolds Stress) was applied as a compromise between scale resolving and two-equation turbulence models with respect to computational effort and accuracy. Simulation results are compared to pressure measurements showing reasonable agreement in resolving the frequency spectra and amplitude. Inner blade vortices were predicted successfully in shape and size. Surface streamlines in blade-to-blade view are presented, giving insights to the formation of the inner blade vortices. The acquired time dependent pressure fields can be used for quasi-static structural analysis (FEA) for fatigue calculations in the future.
Stochastic 3D modeling of Ostwald ripening at ultra-high volume fractions of the coarsening phase
NASA Astrophysics Data System (ADS)
Spettl, A.; Wimmer, R.; Werz, T.; Heinze, M.; Odenbach, S.; Krill, C. E., III; Schmidt, V.
2015-09-01
We present a (dynamic) stochastic simulation model for 3D grain morphologies undergoing a grain coarsening phenomenon known as Ostwald ripening. For low volume fractions of the coarsening phase, the classical LSW theory predicts a power-law evolution of the mean particle size and convergence toward self-similarity of the particle size distribution; experiments suggest that this behavior holds also for high volume fractions. In the present work, we have analyzed 3D images that were recorded in situ over time in semisolid Al-Cu alloys manifesting ultra-high volume fractions of the coarsening (solid) phase. Using this information we developed a stochastic simulation model for the 3D morphology of the coarsening grains at arbitrary time steps. Our stochastic model is based on random Laguerre tessellations and is by definition self-similar—i.e. it depends only on the mean particle diameter, which in turn can be estimated at each point in time. For a given mean diameter, the stochastic model requires only three additional scalar parameters, which influence the distribution of particle sizes and their shapes. An evaluation shows that even with this minimal information the stochastic model yields an excellent representation of the statistical properties of the experimental data.
Kraan, Casper; Aarts, Geert; Van der Meer, Jaap; Piersma, Theunis
2010-06-01
Ongoing statistical sophistication allows a shift from describing species' spatial distributions toward statistically disentangling the possible roles of environmental variables in shaping species distributions. Based on a landscape-scale benthic survey in the Dutch Wadden Sea, we show the merits of spatially explicit generalized estimating equations (GEE). The intertidal macrozoobenthic species, Macoma balthica, Cerastoderma edule, Marenzelleria viridis, Scoloplos armiger, Corophium volutator, and Urothoe poseidonis served as test cases, with median grain-size and inundation time as typical environmental explanatory variables. GEEs outperformed spatially naive generalized linear models (GLMs), and removed much residual spatial structure, indicating the importance of median grain-size and inundation time in shaping landscape-scale species distributions in the intertidal. GEE regression coefficients were smaller than those attained with GLM, and GEE standard errors were larger. The best fitting GEE for each species was used to predict species' density in relation to median grain-size and inundation time. Although no drastic changes were noted compared to previous work that described habitat suitability for benthic fauna in the Wadden Sea, our predictions provided more detailed and unbiased estimates of the determinants of species-environment relationships. We conclude that spatial GEEs offer the necessary methodological advances to further steps toward linking pattern to process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.
Alkyldiamine-functionalized variants of the metal–organic framework Mg 2(dobpdc) (dobpdc 4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO 2 capture applications owing to their unique step-shaped CO 2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary,secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO 2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO 2 adsorption/desorption profiles. This two-step behaviormore » likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg 2(dobpdc) and leads to decreased CO 2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg 2(dotpdc) (dotpdc 4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg 2(pc-dobpdc) (pc-dobpdc 4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg 2(dobpdc), possesses uniformally hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO 2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg 2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.« less
3D Numerical Simulation on the Rockslide Generated Tsunamis
NASA Astrophysics Data System (ADS)
Chuang, M.; Wu, T.; Wang, C.; Chu, C.
2013-12-01
The rockslide generated tsunami is one of the most devastating nature hazards. However, the involvement of the moving obstacle and dynamic free-surface movement makes the numerical simulation a difficult task. To describe both the fluid motion and solid movement at the same time, we newly developed a two-way fully-coupled moving solid algorithm with 3D LES turbulent model. The free-surface movement is tracked by volume of fluid (VOF) method. The two-step projection method is adopted to solve the Navier-Stokes type government equations. In the new moving solid algorithm, a fictitious body force is implicitly prescribed in MAC correction step to make the cell-center velocity satisfied with the obstacle velocity. We called this method the implicit velocity method (IVM). Because no extra terms are added to the pressure Poission correction, the pressure field of the fluid part is stable, which is the key of the two-way fluid-solid coupling. Because no real solid material is presented in the IVM, the time marching step is not restricted to the smallest effective grid size. Also, because the fictitious force is implicitly added to the correction step, the resulting velocity is accurate and fully coupled with the resulting pressure field. We validated the IVM by simulating a floating box moving up and down on the free-surface. We presented the time-history obstacle trajectory and compared it with the experimental data. Very accurate result can be seen in terms of the oscillating amplitude and the period (Fig. 1). We also presented the free-surface comparison with the high-speed snapshots. At the end, the IVM was used to study the rock-slide generated tsunamis (Liu et al., 2005). Good validations on the slide trajectory and the free-surface movement will be presented in the full paper. From the simulation results (Fig. 2), we observed that the rockslide generated waves are manly caused by the rebounding waves from two sides of the sliding rock after the water is dragging down by the solid downward motion. We also found that the turbulence has minor effect to the main flow field. The rock size, rock density, and the steepness of the slope were analyzed to understand their effects to the maximum runup height. The detailed algorithm of IVM, the validation, the simulation and analysis of rockslide tsunami will be presented in the full paper. Figure 1. Time-history trajectory of obstacle for the floating obstacle simulation. Figure 2. Snapshots of the free-surface elevation with streamlines for the rockslide tsunami simulation.
Physical pretreatment – woody biomass size reduction – for forest biorefinery
J.Y. Zhu
2011-01-01
Physical pretreatment of woody biomass or wood size reduction is a prerequisite step for further chemical or biochemical processing in forest biorefinery. However, wood size reduction is very energy intensive which differentiates woody biomass from herbaceous biomass for biorefinery. This chapter discusses several critical issues related to wood size reduction: (1)...
Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states
NASA Astrophysics Data System (ADS)
Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.
1999-02-01
The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff and system size have been studied and shown to be small compared to the computational precision except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
Study on characteristics of printed circuit board liberation and its crushed products.
Quan, Cui; Li, Aimin; Gao, Ningbo
2012-11-01
Recycling printed circuit board waste (PCBW) waste is a hot issue of environmental protection and resource recycling. Mechanical and thermo-chemical methods are two traditional recycling processes for PCBW. In the present research, a two-step crushing process combined with a coarse-crushing step and a fine-pulverizing step was adopted, and then the crushed products were classified into seven different fractions with a standard sieve. The liberation situation and particle shape in different size fractions were observed. Properties of different size fractions, such as heating value, thermogravimetric, proximate, ultimate and chemical analysis were determined. The Rosin-Rammler model was applied to analyze the particle size distribution of crushed material. The results indicated that complete liberation of metals from the PCBW was achieved at a size less than 0.59 mm, but the nonmetal particle in the smaller-than-0.15 mm fraction is liable to aggregate. Copper was the most prominent metal in PCBW and mainly enriched in the 0.42-0.25 mm particle size. The Rosin-Rammler equation adequately fit particle size distribution data of crushed PCBW with a correlation coefficient of 0.9810. The results of heating value and proximate analysis revealed that the PCBW had a low heating value and high ash content. The combustion and pyrolysis process of PCBW was different and there was an obvious oxidation peak of Cu in combustion runs.
Rostgaard Eltzholtz, Jakob; Tyrsted, Christoffer; Ørnsbjerg Jensen, Kirsten Marie; Bremholm, Martin; Christensen, Mogens; Becker-Christensen, Jacob; Brummerstedt Iversen, Bo
2013-03-21
A new step in supercritical nanoparticle synthesis, the pulsed supercritical synthesis reactor, is investigated in situ using synchrotron powder X-ray diffraction (PXRD) to understand the formation of nanoparticles in real time. This eliminates the common problem of transferring information gained during in situ studies to subsequent laboratory reactor conditions. As a proof of principle, anatase titania nanoparticles were synthesized in a 50/50 mixture of water and isopropanol near and above the critical point of water (P = 250 bar, T = 300, 350, 400, 450, 500 and 550 °C). The evolution of the reaction product was followed by sequentially recording PXRD patterns with a time resolution of less than two seconds. The crystallite size of titania is found to depend on both temperature and residence time, and increasing either parameter leads to larger crystallites. A simple adjustment of either temperature or residence time provides a direct method for gram scale production of anatase nanoparticles of average crystallite sizes between 7 and 35 nm, thus giving the option of synthesizing tailor-made nanoparticles. Modeling of the in situ growth curves using an Avrami growth model gave an activation energy of 66(19) kJ mol(-1) for the initial crystallization. The in situ PXRD data also provide direct information about the size dependent macrostrain in the nanoparticles and with decreasing crystallite size the unit cell contracts, especially along the c-direction. This agrees well with previous ex situ results obtained for hydrothermal synthesis of titania nanoparticles.
Workshop II On Unsteady Separated Flow Proceedings
1988-07-28
was static stall angle of 12 ° . achieved by injecting diluted food coloring at the apex through a 1.5 mm diameter tube placed The response of the wing...differences with uniform step size in q, and trailing -. 75 three- pront differences with uniform step size in ,, ,,as used The nonlinearity of the...flow prop- "Kutta condition." erties for slender 3D wings are addressed. To begin the The present paper emphasizes recent progress in the de- study
NASA Technical Reports Server (NTRS)
Justus, C. G.
1987-01-01
The Global Reference Atmosphere Model (GRAM) is under continuous development and improvement. GRAM data were compared with Middle Atmosphere Program (MAP) predictions and with shuttle data. An important note: Users should employ only step sizes in altitude that give vertical density gradients consistent with shuttle-derived density data. Using too small a vertical step size (finer then 1 km) will result in what appears to be unreasonably high values of density shears but what in reality is noise in the model.
A Computer Model Predicting the Thermal Response to Microwave Radiation
1982-12-01
While each of these represents the result of a triple integration, the total running time is still only between 3 and 4 main on an IBM 360 for K...T/Tp]Tp + (i-1)T p (3.4.9) M(TN) min(([T-[T/Tp] iTp )/T ],Np) (3.4.10) t(TNTT)= MTN) +rTT 1T (3.4.11) t(,ppPT, ’(,p )T L"’PJ-’P x =x(NJ~1~T~ = 0 if...TRM - SRM1)/TRM 101 .: S . .-. . . . . . . . .. . . . . 4.6. Program Size and Running Time The program requires 252K on the ’GO’ step for an IBM 360
Near real time vapor detection and enhancement using aerosol adsorption
Novick, Vincent J.; Johnson, Stanley A.
1999-01-01
A vapor sample detection method where the vapor sample contains vapor and ambient air and surrounding natural background particles. The vapor sample detection method includes the steps of generating a supply of aerosol that have a particular effective median particle size, mixing the aerosol with the vapor sample forming aerosol and adsorbed vapor suspended in an air stream, impacting the suspended aerosol and adsorbed vapor upon a reflecting element, alternatively directing infrared light to the impacted aerosol and adsorbed vapor, detecting and analyzing the alternatively directed infrared light in essentially real time using a spectrometer and a microcomputer and identifying the vapor sample.
Near real time vapor detection and enhancement using aerosol adsorption
Novick, V.J.; Johnson, S.A.
1999-08-03
A vapor sample detection method is described where the vapor sample contains vapor and ambient air and surrounding natural background particles. The vapor sample detection method includes the steps of generating a supply of aerosol that have a particular effective median particle size, mixing the aerosol with the vapor sample forming aerosol and adsorbed vapor suspended in an air stream, impacting the suspended aerosol and adsorbed vapor upon a reflecting element, alternatively directing infrared light to the impacted aerosol and adsorbed vapor, detecting and analyzing the alternatively directed infrared light in essentially real time using a spectrometer and a microcomputer and identifying the vapor sample. 13 figs.
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.
1983-01-01
A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform sequential, instead of random, substrate scan. The randomicity is introduced not in the site selection but in the choice of the rule to be followed in each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
Pomeroy, Jeremy; Brage, Søren; Curtis, Jeffrey M; Swan, Pamela D; Knowler, William C; Franks, Paul W
2011-04-27
The quantification of the relationships between walking and health requires that walking is measured accurately. We correlated different measures of step accumulation to body size, overall physical activity level, and glucose regulation. Participants were 25 men and 25 women American Indians without diabetes (Age: 20-34 years) in Phoenix, Arizona, USA. We assessed steps/day during 7 days of free living, simultaneously with three different monitors (Accusplit-AX120, MTI-ActiGraph, and Dynastream-AMP). We assessed total physical activity during free-living with doubly labeled water combined with resting metabolic rate measured by expired gas indirect calorimetry. Glucose tolerance was determined during an oral glucose tolerance test. Based on observed counts in the laboratory, the AMP was the most accurate device, followed by the MTI and the AX120, respectively. The estimated energy cost of 1000 steps per day was lower in the AX120 than the MTI or AMP. The correlation between AX120-assessed steps/day and waist circumference was significantly higher than the correlation between AMP steps and waist circumference. The difference in steps per day between the AX120 and both the AMP and the MTI were significantly related to waist circumference. Between-monitor differences in step counts influence the observed relationship between walking and obesity-related traits.
Past and future challenges from a display mask writer perspective
NASA Astrophysics Data System (ADS)
Ekberg, Peter; von Sydow, Axel
2012-06-01
Since its breakthrough, the liquid crystal technology has continued to gain momentum and the LCD is today the dominating display type used in desktop monitors, television sets, mobile phones as well as other mobile devices. To improve production efficiency and enable larger screen sizes, the LCD industry has step by step increased the size of the mother glass used in the LCD manufacturing process. Initially the mother glass was only around 0.1 m2 large, but with each generation the size has increased and with generation 10 the area reaches close to 10 m2. The increase in mother glass size has in turn led to an increase in the size of the photomasks used - currently the largest masks are around 1.6 × 1.8 meters. A key mask performance criterion is the absence of "mura" - small systematic errors captured only by the very sensitive human eye. To eliminate such systematic errors, special techniques have been developed by Micronic Mydata. Some mura suppressing techniques are described in this paper. Today, the race towards larger glass sizes has come to a halt and a new race - towards higher resolution and better image quality - is ongoing. The display mask is therefore going through a change that resembles what the semiconductor mask went through some time ago: OPC features are introduced, CD requirements are increasing sharply and multi tone masks (MTMs) are widely used. Supporting this development, Micronic Mydata has introduced a number of compensation methods in the writer, such as Z-correction, CD map and distortion control. In addition, Micronic Mydata MMS15000, the world's most precise large area metrology tool, has played an important role in improving mask placement quality and is briefly described in this paper. Furthermore, proposed specifications and system architecture concept for a new generation mask writers - able to fulfill future image quality requirements - is presented in this paper. This new system would use an AOD/AOM writing engine and be capable of resolving 0.6 micron features.
Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?
Kerr, Deborah A.; Wright, Janine L.; Dhaliwal, Satvinder S.; Boushey, Carol J.
2015-01-01
The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24), were Chinese-American youths aged between 11 and 15 years and lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05 quick list; p < 0.01 forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy to recall food items declined with a second 24-h recall when repeated over two non-consecutive days. PMID:25984743
Sex and Caste-Specific Variation in Compound Eye Morphology of Five Honeybee Species
Streinzer, Martin; Brockmann, Axel; Nagaraja, Narayanappa; Spaethe, Johannes
2013-01-01
Ranging from dwarfs to giants, the species of honeybees show remarkable differences in body size that have placed evolutionary constrains on the size of sensory organs and the brain. Colonies comprise three adult phenotypes, drones and two female castes, the reproductive queen and sterile workers. The phenotypes differ with respect to tasks and thus selection pressures which additionally constrain the shape of sensory systems. In a first step to explore the variability and interaction between species size-limitations and sex and caste-specific selection pressures in sensory and neural structures in honeybees, we compared eye size, ommatidia number and distribution of facet lens diameters in drones, queens and workers of five species (Apis andreniformis, A. florea, A. dorsata, A. mellifera, A. cerana). In these species, male and female eyes show a consistent sex-specific organization with respect to eye size and regional specialization of facet diameters. Drones possess distinctly enlarged eyes with large dorsal facets. Aside from these general patterns, we found signs of unique adaptations in eyes of A. florea and A. dorsata drones. In both species, drone eyes are disproportionately enlarged. In A. dorsata the increased eye size results from enlarged facets, a likely adaptation to crepuscular mating flights. In contrast, the relative enlargement of A. florea drone eyes results from an increase in ommatidia number, suggesting strong selection for high spatial resolution. Comparison of eye morphology and published mating flight times indicates a correlation between overall light sensitivity and species-specific mating flight times. The correlation suggests an important role of ambient light intensities in the regulation of species-specific mating flight times and the evolution of the visual system. Our study further deepens insights into visual adaptations within the genus Apis and opens up future perspectives for research to better understand the timing mechanisms and sensory physiology of mating related signals. PMID:23460896
Prediction of flow dynamics using point processes
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; Stemler, Thomas; Eroglu, Deniz; Marwan, Norbert
2018-01-01
Describing a time series parsimoniously is the first step to study the underlying dynamics. For a time-discrete system, a generating partition provides a compact description such that a time series and a symbolic sequence are one-to-one. But, for a time-continuous system, such a compact description does not have a solid basis. Here, we propose to describe a time-continuous time series using a local cross section and the times when the orbit crosses the local cross section. We show that if such a series of crossing times and some past observations are given, we can predict the system's dynamics with fine accuracy. This reconstructability neither depends strongly on the size nor the placement of the local cross section if we have a sufficiently long database. We demonstrate the proposed method using the Lorenz model as well as the actual measurement of wind speed.
High frequency copolymer ultrasonic transducer array of size-effective elements
NASA Astrophysics Data System (ADS)
Decharat, Adit; Wagle, Sanat; Habib, Anowarul; Jacobsen, Svein; Melandsø, Frank
2018-02-01
A layer-by-layer deposition method for producing dual-layer ultrasonic transducers from piezoelectric copolymers has been developed. The method uses a combination of customized and standard processing to obtain 2D array transducers with electrical connection of the individual elements routed directly to the rear of the substrate. A numerical model was implemented to study basic parameters effecting the transducer characteristics. Key elements of the array were characterized and evaluated, demonstrating its viability of 2D imaging. Signal reproducibility of the prototype array was studied by characterizing the variations of the center frequency (≈42 MHz) and bandwidth (≈25 MHz) of the acoustic. Object identification was also tested and parameterized by acoustic-field beamwidth as well as proper scan step size. Simple tests to illustrate a benefit of multi-element scan on lowering the inspection time were conducted. Structural imaging of the test structure underneath multi-layered wave media (glass plate and distilled water) was also performed. The prototype presented in this work is an important step towards realizing an inexpensive, compact array of individually operated copolymer transducers that can serve in a fast/volumetric high frequency (HF) ultrasonic scanning platform.
Delivery of high intensity beams with large clad step-index fibers for engine ignition
NASA Astrophysics Data System (ADS)
Joshi, Sachin; Wilvert, Nick; Yalin, Azer P.
2012-09-01
We show, for the first time, that step-index silica fibers with a large clad (400 μm core and 720 μm clad) can be used to transmit nanosecond duration pulses in a way that allows reliable (consistent) spark formation in atmospheric pressure air by the focused output light from the fiber. The high intensity (>100 GW/cm2) of the focused output light is due to the combination of high output power (typical of fibers of this core size) with high output beam quality (better than that typical of fibers of this core size). The high output beam quality, which enables tight focusing, is due to the large clad which suppresses microbending-induced diffusion of modal power to higher order modes owing to the increased rigidity of the core-clad interface. We also show that extending the pulse duration provides a means to increase the delivered pulse energy (>20 mJ delivered for 50 ns pulses) without causing fiber damage. Based on this ability to deliver high energy sparks, we report the first reliable laser ignition of a natural gas engine including startup under typical procedures using silica fiber optics for pulse delivery.
Dynamic Scaling and Island Growth Kinetics in Pulsed Laser Deposition of SrTiO 3
Eres, Gyula; Tischler, J. Z.; Rouleau, C. M.; ...
2016-11-11
We use real-time diffuse surface x-ray diffraction to probe the evolution of island size distributions and its effects on surface smoothing in pulsed laser deposition (PLD) of SrTiO 3. In this study, we show that the island size evolution obeys dynamic scaling and two distinct regimes of island growth kinetics. Our data show that PLD film growth can persist without roughening despite thermally driven Ostwald ripening, the main mechanism for surface smoothing, being shut down. The absence of roughening is concomitant with decreasing island density, contradicting the prevailing view that increasing island density is the key to surface smoothing inmore » PLD. We also report a previously unobserved crossover from diffusion-limited to attachment-limited island growth that reveals the influence of nonequilibrium atomic level surface transport processes on the growth modes in PLD. We show by direct measurements that attachment-limited island growth is the dominant process in PLD that creates step flowlike behavior or quasistep flow as PLD “self-organizes” local step flow on a length scale consistent with the substrate temperature and PLD parameters.« less
Dynamic Scaling and Island Growth Kinetics in Pulsed Laser Deposition of SrTiO 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eres, Gyula; Tischler, J. Z.; Rouleau, C. M.
We use real-time diffuse surface x-ray diffraction to probe the evolution of island size distributions and its effects on surface smoothing in pulsed laser deposition (PLD) of SrTiO 3. In this study, we show that the island size evolution obeys dynamic scaling and two distinct regimes of island growth kinetics. Our data show that PLD film growth can persist without roughening despite thermally driven Ostwald ripening, the main mechanism for surface smoothing, being shut down. The absence of roughening is concomitant with decreasing island density, contradicting the prevailing view that increasing island density is the key to surface smoothing inmore » PLD. We also report a previously unobserved crossover from diffusion-limited to attachment-limited island growth that reveals the influence of nonequilibrium atomic level surface transport processes on the growth modes in PLD. We show by direct measurements that attachment-limited island growth is the dominant process in PLD that creates step flowlike behavior or quasistep flow as PLD “self-organizes” local step flow on a length scale consistent with the substrate temperature and PLD parameters.« less
Framework for Creating a Smart Growth Economic Development Strategy
This step-by-step guide can help small and mid-sized cities, particularly those that have limited population growth, areas of disinvestment, and/or a struggling economy, build a place-based economic development strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhanying; Key Laboratory for Anisotropy and Texture of Materials, Northeastern University, Shenyang 110819, China,; Zhao, Gang
2015-04-15
The effect of two-step homogenization treatments on the precipitation behavior of Al{sub 3}Zr dispersoids was investigated by transmission electron microscopy (TEM) in 7150 alloys. Two-step treatments with the first step in the temperature range of 300–400 °C followed by the second step at 470 °C were applied during homogenization. Compared with the conventional one-step homogenization, both a finer particle size and a higher number density of Al{sub 3}Zr dispersoids were obtained with two-step homogenization treatments. The most effective dispersoid distribution was attained using the first step held at 300 °C. In addition, the two-step homogenization minimized the precipitate free zonesmore » and greatly increased the number density of dispersoids near dendrite grain boundaries. The effect of two-step homogenization on recrystallization resistance of 7150 alloys with different Zr contents was quantitatively analyzed using the electron backscattered diffraction (EBSD) technique. It was found that the improved dispersoid distribution through the two-step treatment can effectively inhibit the recrystallization process during the post-deformation annealing for 7150 alloys containing 0.04–0.09 wt.% Zr, resulting in a remarkable reduction of the volume fraction and grain size of recrystallization grains. - Highlights: • Effect of two-step homogenization on Al{sub 3}Zr dispersoids was investigated by TEM. • Finer and higher number of dispersoids obtained with two-step homogenization • Minimized the precipitate free zones and improved the dispersoid distribution • Recrystallization resistance with varying Zr content was quantified by EBSD. • Effectively inhibit the recrystallization through two-step treatments in 7150 alloy.« less
Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei
2016-12-15
This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
Control of Alginate Core Size in Alginate-Poly (Lactic-Co-Glycolic) Acid Microparticles
NASA Astrophysics Data System (ADS)
Lio, Daniel; Yeo, David; Xu, Chenjie
2016-01-01
Core-shell alginate-poly (lactic-co-glycolic) acid (PLGA) microparticles are potential candidates to improve hydrophilic drug loading while facilitating controlled release. This report studies the influence of the alginate core size on the drug release profile of alginate-PLGA microparticles and its size. Microparticles are synthesized through double-emulsion fabrication via a concurrent ionotropic gelation and solvent extraction. The size of alginate core ranges from approximately 10, 50, to 100 μm when the emulsification method at the first step is homogenization, vortexing, or magnetic stirring, respectively. The second step emulsification for all three conditions is performed with magnetic stirring. Interestingly, although the alginate core has different sizes, alginate-PLGA microparticle diameter does not change. However, drug release profiles are dramatically different for microparticles comprising different-sized alginate cores. Specifically, taking calcein as a model drug, microparticles containing the smallest alginate core (10 μm) show the slowest release over a period of 26 days with burst release less than 1 %.
Quantification of the evolution of firm size distributions due to mergers and acquisitions
Sornette, Didier
2017-01-01
The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company’s own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes. PMID:28841683
Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan
2015-08-05
The methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost, solution-processable technology, and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve an excellent CH3NH3PbI3-xClx film with fine morphology and crystallization based on one-step deposition and a two-step annealing process. This method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at the molar ratio 1:1:4 in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is a solvent-induced process in DMF that promotes migration and interdiffusion of the solvent-assisted precursor ions and molecules and realizes large grain growth. The second annealing is a thermal-induced process that further improves the morphology and crystallization of the films. Compact perovskite films are successfully prepared with grain size up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, while they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by the one-step thermal and one-step solvent processes, respectively. On the basis of the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm(-2)).
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high-order processed method allows larger time step sizes in numerical integrations.
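For orientation, a minimal volume-preserving charged-particle step is the classic Boris-type splitting sketched below; it is a low-order, nonrelativistic stand-in for the high-order processed splitting methods constructed in this work, included only to show the kick-rotate-kick structure that such methods build on.

    import numpy as np

    def boris_step(x, v, E, B, q_over_m, dt):
        """One Boris-type step: half electric kick, magnetic rotation,
        half electric kick, then a position drift (volume-preserving)."""
        v_minus = v + 0.5 * dt * q_over_m * E
        t = 0.5 * dt * q_over_m * B
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)   # exact rotation about B
        v_new = v_plus + 0.5 * dt * q_over_m * E
        return x + dt * v_new, v_new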
Monte Carlo modeling of single-molecule cytoplasmic dynein.
Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C
2005-08-23
Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
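A kinetic Monte Carlo caricature of the load- and ATP-dependent step-size distribution is sketched below. The 8-32 nm step sizes mirror the discrete multiples reported in the single-molecule experiments, but the weights are invented for illustration and are not the probabilities produced by the paper's model.

    import random

    STEP_SIZES_NM = [8, 16, 24, 32]
    W_HIGH_ATP_NO_LOAD = [0.2, 0.2, 0.3, 0.3]    # assumed: large steps favored
    W_LOW_ATP_NO_LOAD = [0.7, 0.2, 0.08, 0.02]   # assumed: 8 nm steps dominate

    def run_length_nm(n_steps, weights, seed=1):
        """Total distance walked after n_steps Monte Carlo steps."""
        rng = random.Random(seed)
        return sum(rng.choices(STEP_SIZES_NM, weights=weights, k=n_steps))

    print(run_length_nm(1000, W_HIGH_ATP_NO_LOAD))
    print(run_length_nm(1000, W_LOW_ATP_NO_LOAD))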
NASA Technical Reports Server (NTRS)
Peterson, B. M.; Berlind, P.; Bertram, R.; Bischoff, K.; Bochkarev, N. G.; Burenkov, A. N.; Calkins, M.; Carrasco, L.; Chavushyan, V. H.
2002-01-01
We present the final installment of an intensive 13 year study of variations of the optical continuum and broad H beta emission line in the Seyfert 1 galaxy NGC 5548. The database consists of 1530 optical continuum measurements and 1248 H beta measurements. The H beta variations follow the continuum variations closely, with a typical time delay of about 20 days. However, a year-by-year analysis shows that the magnitude of the emission-line time delay is correlated with the mean continuum flux. We argue that the data are consistent with the simple model prediction relating the size of the broad-line region to the ionizing luminosity, r ∝ L_ion^(1/2). Moreover, the apparently linear nature of the correlation between the H beta response time and the nonstellar optical continuum F_opt arises as a consequence of the changing shape of the continuum as it varies, specifically F_opt ∝ F_UV^0.56.
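The r ∝ L_ion^(1/2) scaling admits a quick worked example: if the ionizing continuum doubles, the H beta delay should grow by a factor of √2. The 20-day reference delay is taken from the abstract; the luminosity ratios are arbitrary.

    # BLR size-luminosity scaling: delay tau ∝ L_ion^(1/2)
    tau_ref_days = 20.0                    # typical delay from the abstract
    for lum_ratio in (0.5, 1.0, 2.0):
        tau = tau_ref_days * lum_ratio ** 0.5
        print(f"L/L_ref = {lum_ratio:.1f} -> delay ~ {tau:.1f} days")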
Growth from Solutions: Kink dynamics, Stoichiometry, Face Kinetics and stability in turbulent flow
NASA Technical Reports Server (NTRS)
Chernov, A. A.; DeYoreo, J. J.; Rashkovich, L. N.; Vekilov, P. G.
2005-01-01
1. Kink dynamics. The first segment of a polygonized dislocation spiral step measured by AFM demonstrates up to 60% scatter in the critical length l*, the length at which the segment starts to propagate. On orthorhombic lysozyme, this length is shorter than the observed interkink distance. The step energy from the critical segment length based on the Gibbs-Thomson law (GTL), l* = 2ωα/Δμ, is several times larger than the energy from the 2D nucleation rate. Here ω is the building block specific volume, α is the step riser specific free energy, and Δμ is the crystallization driving force. These new data support our earlier assumption that the classical Frenkel and Burton-Cabrera-Frank concept of abundant kink supply by fluctuations is not applicable to strongly polygonized steps. Step rate measurements on brushite confirm that statement. It is the 1D nucleation of kinks that controls step propagation. The GTL is valid only if l*
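As a worked example of the Gibbs-Thomson estimate, the snippet below evaluates l* = 2ωα/Δμ. All three inputs are assumed order-of-magnitude placeholders, not the measured lysozyme or brushite parameters.

    # Gibbs-Thomson critical segment length, l* = 2*omega*alpha/delta_mu
    omega = 3.0e-26      # building-block specific volume [m^3], assumed
    alpha = 1.0e-3       # step-riser specific free energy [J/m^2], assumed
    delta_mu = 2.0e-21   # crystallization driving force [J], assumed

    l_star = 2.0 * omega * alpha / delta_mu
    print(f"l* = {l_star * 1e9:.1f} nm")   # ~30 nm for these inputs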
Luchini, Alessandra; Geho, David H.; Bishop, Barney; Tran, Duy; Xia, Cassandra; Dufour, Robert; Jones, Clint; Espina, Virginia; Patanarut, Alexis; Zhu, Weidong; Ross, Mark; Tessitore, Alessandra; Petricoin, Emanuel; Liotta, Lance A.
2010-01-01
Disease-associated blood biomarkers exist in exceedingly low concentrations within complex mixtures of high-abundance proteins such as albumin. We have introduced an affinity bait molecule into N-isopropylacrylamide to produce a particle that will perform three independent functions within minutes, in one step, in solution: (a) molecular size sieving, (b) affinity capture of all solution-phase target molecules, and (c) complete protection of harvested proteins from enzymatic degradation. The captured analytes can be readily electroeluted for analysis. PMID:18076201
NASA Astrophysics Data System (ADS)
Ehsan Khaled, Mohammad; Zhang, Liangchi; Liu, Weidong
2018-07-01
The nanoscale thermal conductivity of a material can be significantly different from its value at the macroscale. Although a number of studies using equilibrium molecular dynamics (EMD) with the Green–Kubo (GK) formula have been conducted for nano-conductivity predictions, there are many problems in the analysis that have made the EMD results unreliable or misleading. This paper aims to clarify such critical issues through a thorough investigation of the effect and determination of the vital physical variables in the EMD-GK analysis, using the prediction of the nanoscale thermal conductivity of Si as an example. The study concludes that, for a reliable prediction, the quantum correction, time step, simulation time, correlation time, and system size are all crucial.
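A minimal sketch of the EMD-GK evaluation is given below, assuming a one-component heat-flux time series J sampled at interval dt from an equilibrated run; in a real prediction the correlation time, simulation length, system size, and quantum correction flagged in the abstract must all be checked for convergence.

    import numpy as np

    def green_kubo_kappa(J, dt, volume, temperature, kB=1.380649e-23):
        """Thermal conductivity from one heat-flux component:
        kappa = V / (kB*T^2) * integral_0^inf <J(0) J(t)> dt."""
        J = np.asarray(J, dtype=float)
        n = len(J)
        # unbiased autocorrelation estimate for lags 0 .. n-1
        acf = np.correlate(J, J, mode="full")[n - 1:] / np.arange(n, 0, -1)
        return volume / (kB * temperature**2) * np.trapz(acf, dx=dt)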
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
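A one-degree-of-freedom illustration of the STEP idea is sketched below: displacements are prescribed, the internal force is sampled (here from an assumed Duffing-like element, not a model from the paper), and the polynomial stiffness coefficients are recovered by least squares.

    import numpy as np

    u = np.linspace(-1.0, 1.0, 21)        # prescribed displacement states
    f = 1.0 * u + 0.3 * u**3              # assumed sampled internal forces

    A = np.column_stack([u, u**2, u**3])  # polynomial displacement basis
    k1, k2, k3 = np.linalg.lstsq(A, f, rcond=None)[0]
    print(k1, k2, k3)                     # recovers ~1.0, ~0.0, ~0.3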
Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C.; Quake, Stephen R.; Burkholder, William F.
2013-01-01
Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation. PMID:23894273
Establishing intensively cultured hybrid poplar plantations for fuel and fiber.
Edward Hansen; Lincoln Moore; Daniel Netzer; Michael Ostry; Howard Phipps; Jaroslav Zavitkovski
1983-01-01
This paper describes a step-by-step procedure for establishing commercial size intensively cultured plantations of hybrid poplar and summarizes the state-of-knowledge as developed during 10 years of field research at Rhinelander, Wisconsin.
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits the network size and the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on neural control of cognitive robots and systems as well. PMID:25484854
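For reference, one step of the underlying Hodgkin-Huxley update that such hardware must evaluate is sketched below with standard textbook parameters; an FPGA implementation would replace the exponentials with CORDIC-style iterative arithmetic, as described above.

    import numpy as np

    def hh_step(V, m, h, n, I_ext, dt=0.01):
        """One forward-Euler step of the classic Hodgkin-Huxley model
        (membrane potential V in mV, time step dt in ms, C_m = 1)."""
        a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
        a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
        I_Na = 120.0 * m**3 * h * (V - 50.0)     # sodium current
        I_K = 36.0 * n**4 * (V + 77.0)           # potassium current
        I_L = 0.3 * (V + 54.387)                 # leak current
        V_new = V + dt * (I_ext - I_Na - I_K - I_L)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        return V_new, m, h, n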
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
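The sensitivity to the time step can be made concrete with the overdamped forward-Euler vertex update used in many vertex-model implementations, sketched below; force_fn and the rearrangement threshold l_min are placeholders, and this is not the solvers' actual code.

    import numpy as np

    def euler_vertex_step(vertices, force_fn, dt, l_min=0.01):
        """Overdamped vertex update x += dt * F(x) (mobility = 1).
        Edges shorter than l_min would be flagged for T1 rearrangement;
        dt and l_min are the implementation parameters studied above."""
        return vertices + dt * force_fn(vertices)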
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashidi, S.; Ataie, A., E-mail: aataie@ut.ac.ir
Highlights: • Single-phase CoFe2O4 nano-particles synthesized in one step by mechanical alloying. • PVA/CoFe2O4 magnetic nano-composites were fabricated via mechanical milling. • FTIR confirmed the interaction between PVA and magnetic CoFe2O4 particles. • Increasing milling time and PVA amount led to good dispersion of CoFe2O4. - Abstract: In this research, polyvinyl alcohol/cobalt ferrite nano-composites were successfully synthesized employing a two-step procedure: spherical single-phase cobalt ferrite of 20 ± 4 nm mean particle size was synthesized via mechanical alloying and then embedded into the polymer matrix by intensive milling. The results revealed that increasing the polyvinyl alcohol content and milling time causes the cobalt ferrite particles to disperse more homogeneously in the polymer matrix, while the mean particle size and shape of the cobalt ferrite are not significantly affected. Transmission electron microscope images indicated that polyvinyl alcohol chains have surrounded the cobalt ferrite nano-particles; the interaction between the polymer and cobalt ferrite particles in the nano-composite samples was also confirmed. Magnetic properties evaluation showed that the saturation magnetization, coercivity, and anisotropy constant values decreased in the nano-composite samples compared to pure cobalt ferrite. However, the coercivity of the nano-composite samples increased with PVA amount due to the domain wall mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost remains a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation, from the perspective of computational cost: the Newton-Raphson (NR) method, operator splitting, and linearization (LIN). For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and propagation of linear waves, and a radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions on the time step size necessary for adopting each scheme. The LIN scheme is superior to the other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, its convergence tends to be worse, especially in problems that involve optically thin regions. In all cases, SOS is not practical.
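To make the comparison concrete, a scalar Newton-Raphson kernel applied to a backward-Euler step of a toy thermal-relaxation equation is sketched below; the coefficients are arbitrary and the example is far simpler than the coupled RHD system solved in the paper.

    def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
        """Scalar NR iteration, the implicit kernel weighed against
        operator splitting and linearization in the comparison above."""
        x = x0
        for _ in range(max_iter):
            dx = f(x) / dfdx(x)
            x -= dx
            if abs(dx) < tol * (1.0 + abs(x)):
                return x
        raise RuntimeError("Newton-Raphson failed to converge")

    # Toy thermal relaxation dE/dt = c*(T^4 - E), backward Euler:
    # solve E_new - E_old - dt*c*(T^4 - E_new) = 0 (assumed coefficients).
    c, T, dt, E_old = 1.0, 1.0, 0.5, 0.2
    E_new = newton_raphson(lambda E: E - E_old - dt * c * (T**4 - E),
                           lambda E: 1.0 + dt * c, E_old)
    print(E_new)   # ~0.467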
Differential Effects of Monovalent Cations and Anions on Key Nanoparticle Attributes
Understanding the key particle attributes such as particle size, size distribution and surface charge of both the nano- and micron-sized particles is the first step in drug formulation as such attributes are known to directly influence several characteristics of drugs including d...
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Menke, H. P.; Blunt, M. J.; Krevor, S. C.
2015-12-01
We observe a new type of non-wetting phase flow using time-resolved pore-scale imaging. The traditional conceptual model of drainage involves a non-wetting phase invading a porous medium saturated with a wetting phase either as a fixed, connected flow path through the centres of pores or as discrete ganglia which move individually through the pore space, depending on the capillary number. We observe a new type of flow behaviour at low capillary number in which the flow of the non-wetting phase occurs through networks of persistent ganglia that occupy the large pores but continuously rearrange their connectivity (Figure 1). Disconnections and reconnections occur randomly to provide short-lived pseudo-steady-state flow paths between pores. This process is distinctly different from the notion of flowing ganglia which coalesce and break up. The size distribution of ganglia depends on the capillary number. Experiments were performed by co-injecting N2 and 25 wt% KI brine into a Bentheimer sandstone core (4 mm diameter, 35 mm length) at 50°C and 10 MPa. Drainage was performed at three flow rates (0.04, 0.3, and 1 ml/min) at a constant fractional flow of 0.5, and the variation in ganglia populations and connectivity was observed. We obtained images of the pore space during steady-state flow with a time resolution of 43 s over 1-2 hours. Experiments were performed at the Diamond Light Source synchrotron. Figure 1. The position of N2 in the pore space during steady-state flow, summed over 40 time steps. White indicates that N2 occupies the space over >38 time steps and red <5 time steps.
Supersonic burning in separated flow regions
NASA Technical Reports Server (NTRS)
Zumwalt, G. W.
1982-01-01
The trough vortex phenomenon is used for combustion of hydrogen in a supersonic air stream. This was done at small sizes suitable for igniters in supersonic combustion ramjets, so long as the boundary-layer displacement thickness is less than 25% of the trough step height. A simple electric spark, properly positioned, ignites the hydrogen in the trough corner. The resulting flame is self-sustaining and reignitable. Hydrogen can be injected at the base wall or immediately upstream of the trough. The hydrogen is introduced at low velocity to permit it to be drawn into the corner vortex system and thus experience a long residence time in the combustion region. The igniters can be placed on a skewed back step at angles up to at least 30 deg without significantly affecting igniter performance. Certain metals (platinum, copper) act catalytically to improve ignition.
Variability in the Composition of Floating Microplastics by Region and in Time
NASA Astrophysics Data System (ADS)
Donohue, J. L.; Pavlekovsky, K.; Collins, T.; Andrady, A. L.; Proskurowski, G. K.; Lavender Law, K. L.
2016-02-01
Floating microplastics have been documented in all the subtropical oceans and in many regional seas, yet their origin and weathering history are largely unknown. To identify potential indicators of sources of microplastic debris and changes in input over time, we analyzed nearly 3,000 plastic particles collected using a surface-towing plankton net between 1991 and 2014, collected in the North Pacific subtropical gyre, in the Mediterranean Sea, and across the western North Atlantic basin including the subtropical gyre and coastal locations near urban areas. For each particle we analyzed particle form, size (longest dimension and 2-D surface area), mass, color characteristics and polymer type. We hypothesize that regional differences in average or median particle mass and size are a relative indicator of age (time of exposure), where accumulation zones that retain particles for long periods of time have statistically smaller fragments compared to regions closer to presumed sources. Differences in particle form (i.e., fragment, pellet, foam, line/fiber, film) might also reflect proximity to sources as well as form-dependent removal mechanisms such as density increase and sinking (Ryan 2015). Finally, changes in particle composition over time in subtropical gyre reservoirs could provide clues about changes in input as well as mechanisms and time scale of removal. Understanding the inputs, reservoirs, and sinks of open ocean microplastics is a necessary first step to evaluating their risks and impacts to marine life. Ryan, P., 2015. Does size and buoyancy affect the long-distance transport of floating debris? Environ. Res. Lett. 10 084019.
Frère, L.; Paul-Pont, I.; Moreau, J.; Soudant, P.; Lambert, C.; Huvet, A.; Rinnert, E.
2016-12-15
Every step of microplastic analysis (collection, extraction, and characterization) is time-consuming, representing an obstacle to the implementation of large-scale monitoring. This study proposes a semi-automated Raman micro-spectroscopy method coupled to static image analysis that allows the screening of a large quantity of microplastics in a time-effective way with minimal operator intervention. The method was validated using 103 particles collected at the sea surface spiked with 7 standard plastics: morphological and chemical characterization of the particles was performed in less than 3 h. The method was then applied to a larger environmental sample (n = 962 particles). The identification rate was 75% and decreased significantly as a function of particle size. Microplastics represented 71% of the identified particles, and significant size differences were observed: polystyrene was found mainly in the 2-5 mm range (59%), polyethylene in the 1-2 mm range (40%), and polypropylene in the 0.335-1 mm range (42%).
NASA Astrophysics Data System (ADS)
Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter
2014-06-01
In the numerical analysis of strongly correlated quantum lattice models, one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-consuming step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper, a smart hybrid CPU-GPU implementation is presented, which exploits the power of both the CPU and the GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.
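A toy version of the row-partitioned work split is sketched below; both slices are computed with NumPy here, whereas in the actual implementation one slice would be dispatched to the GPU, and the partition fraction is an assumed tuning parameter rather than the paper's scheduling policy.

    import numpy as np

    def split_matvec(A, x, gpu_fraction=0.7):
        """Partition the rows of a dense matvec between two workers;
        the first slice stands in for the GPU share of the work."""
        cut = int(gpu_fraction * A.shape[0])
        y_top = A[:cut] @ x       # would be offloaded to the GPU
        y_bottom = A[cut:] @ x    # remains on the CPU
        return np.concatenate([y_top, y_bottom])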
Opinion formation on adaptive networks with intensive average degree
NASA Astrophysics Data System (ADS)
Schmittmann, B.; Mukhopadhyay, Abhishek
2010-12-01
We study the evolution of binary opinions on a simple adaptive network of N nodes. At each time step, a randomly selected node updates its state ("opinion") according to the majority opinion of the nodes that it is linked to; subsequently, all links are reassigned with probability p̃ (q̃) if they connect nodes with equal (opposite) opinions. In contrast to earlier work, we ensure that the average connectivity ("degree") of each node is independent of the system size ("intensive") by choosing p̃ and q̃ to be of O(1/N). Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. We find two absorbing states, characterized by perfect consensus, and one metastable state, characterized by a population split evenly between the two opinions. The relaxation time of this state grows exponentially with the number of nodes, N. A second metastable state, found in earlier studies, is no longer observed.
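A simplified single-update sketch of the model is given below; for brevity it rewires only the selected node's links rather than all links, opinions are encoded as ±1, and the rewiring-target rule is an illustrative choice.

    import random

    def update(opinions, neighbors, p_tilde, q_tilde, rng):
        """One step: majority-rule opinion update for a random node,
        then stochastic reassignment of that node's links with
        probability p_tilde (equal opinions) or q_tilde (opposite)."""
        i = rng.randrange(len(opinions))
        votes = sum(opinions[j] for j in neighbors[i])
        if votes != 0:
            opinions[i] = 1 if votes > 0 else -1
        for j in list(neighbors[i]):
            p = p_tilde if opinions[i] == opinions[j] else q_tilde
            if rng.random() < p:              # reassign this link
                neighbors[i].discard(j)
                neighbors[j].discard(i)
                k = rng.randrange(len(opinions))
                if k != i:
                    neighbors[i].add(k)
                    neighbors[k].add(i)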