NASA Astrophysics Data System (ADS)
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
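The neighbor-step constraint described in the abstract can be sketched in a few lines. The factor-of-two ratio, the 1D Courant estimate, and all names below are illustrative assumptions, not taken from the paper's code:

```python
def courant_dt(dx, max_speed, cfl=0.5):
    """Largest stable local time step for one patch (1D Courant estimate)."""
    return cfl * dx / max_speed

def constrain_steps(local_dt, neighbors, ratio=2.0):
    """Cap each patch's time step so it never exceeds `ratio` times the
    step of any neighboring patch; iterate until no step changes.
    `neighbors` maps a patch index to the list of its neighbor indices."""
    dt = list(local_dt)
    changed = True
    while changed:
        changed = False
        for i, nbrs in neighbors.items():
            cap = min(ratio * dt[j] for j in nbrs)
            if dt[i] > cap:
                dt[i] = cap
                changed = True
    return dt
```

With three patches in a line and the middle one initially allowed an 8x larger step, the iteration caps it at twice its neighbors' steps, which is one way to keep the non-local CFL condition satisfied at patch boundaries.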
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). The implementation runs on large CPU and GPU clusters; we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
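A common ingredient of multilevel LTS schemes of this kind is grouping elements into power-of-two step levels. The sketch below is illustrative, not the authors' implementation: it assigns each element the coarsest level whose step still respects the element's own stable step.

```python
import math

def lts_levels(dt_elems, dt_max):
    """Assign each element a level p so that it advances with dt_max / 2**p,
    the largest power-of-two fraction of the global step not exceeding the
    element's own stable time step."""
    levels = []
    for dt in dt_elems:
        p = max(0, math.ceil(math.log2(dt_max / dt)))
        levels.append(p)
    return levels
```

Elements at level p then take 2**p substeps per global step, so a locally refined region with a 100x smaller stable step simply lands on a deeper level instead of dragging the whole mesh down to its step.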
A local time stepping algorithm for GPU-accelerated 2D shallow water models
NASA Astrophysics Data System (ADS)
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge
2018-04-26
We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC) and (iv) graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy of localizing the SOZ. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that the ADTF is preferred over the APDC to localize the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZ, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time series before connectivity analysis is preferred.
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0, T] and all fields satisfy Dirichlet boundary conditions at times 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
Li, Zhao; Dosso, Stan E; Sun, Dajun
2016-07-01
This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
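A minimal sketch of such a localization/SSP joint inversion, assuming a 2D geometry, straight-ray travel times t = d/c, and one depth-independent sound-speed offset per time segment. This is an illustrative parameterization solved by Gauss-Newton, not the authors' Bayesian formulation:

```python
import numpy as np

def joint_invert(ship_xy, times, seg, c0, n_seg, iters=20):
    """Jointly estimate a transponder position (x, y) and one sound-speed
    offset per time segment from acoustic travel times (Gauss-Newton)."""
    theta = np.zeros(2 + n_seg)                  # [x, y, dc_0, ..., dc_{n-1}]
    for _ in range(iters):
        x, y = theta[0], theta[1]
        dc = theta[2:]
        d = np.hypot(ship_xy[:, 0] - x, ship_xy[:, 1] - y)
        c = c0 + dc[seg]                         # per-observation sound speed
        r = times - d / c                        # travel-time residuals
        J = np.zeros((len(times), 2 + n_seg))    # Jacobian of the model d/c
        J[:, 0] = (x - ship_xy[:, 0]) / (d * c)
        J[:, 1] = (y - ship_xy[:, 1]) / (d * c)
        for k in range(n_seg):
            m = seg == k
            J[m, 2 + k] = -d[m] / c[m] ** 2
        theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]
    return theta
```

The key design point mirrors the letter: the sound-speed offsets are columns of the same Jacobian as the position, so they are estimated jointly rather than in a separate second inversion step.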
The USEPA has developed a handbook to help state and local governmental officials implement near-real-time water quality monitoring and outreach programs with step-by-step instructions on how to: 1) Employ satellite and robotic water monitoring equipment, 2) collect, transfer, an...
The constant displacement scheme for tracking particles in heterogeneous aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, X.H.; Gomez-Hernandez, J.J.
1996-01-01
Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme which automatically adjusts the time step for each particle according to the local pore velocity, so that each particle always travels a constant distance, is shown to be computationally faster for the same degree of accuracy than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model, with a natural log-transmissivity variance of 4, can be 8.6 times faster than using the constant time step scheme.
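The core of the constant displacement scheme is choosing each particle's time step from a fixed travel distance rather than a fixed clock. A minimal advective sketch (names assumed for illustration; the paper's scheme also handles the random-walk dispersion term):

```python
def constant_displacement_step(pos, velocity, dl):
    """Advance one particle by a fixed travel distance dl: the local time
    step is dl / |v|, so particles in fast cells take small time steps and
    particles in slow cells take large ones."""
    vx, vy = velocity(pos)
    speed = (vx * vx + vy * vy) ** 0.5
    dt = dl / speed
    return (pos[0] + vx * dt, pos[1] + vy * dt), dt
```

Accumulating the per-particle `dt` values recovers each particle's travel time, which is why accuracy is preserved while slow particles stop wasting tiny global steps.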
Local anesthesia for inguinal hernia repair: step-by-step procedure.
Amid, P K; Shulman, A G; Lichtenstein, I L
1994-01-01
OBJECTIVE. The authors introduce a simple six-step infiltration technique that results in satisfactory local anesthesia and prolonged postoperative analgesia, requiring a maximum of 30 to 40 mL of local anesthetic solution. SUMMARY BACKGROUND DATA. For the last 20 years, more than 12,000 groin hernia repairs have been performed under local anesthesia at the Lichtenstein Hernia Institute. Initially, field block was the means of achieving local anesthesia. During the last 5 years, a simple infiltration technique has been used because the field block was more time consuming and required a larger volume of the local anesthetic solution. Furthermore, because of the blind nature of the procedure, it did not always result in satisfactory anesthesia and, at times, accidental needle puncture of the ilioinguinal nerve resulted in prolonged postoperative pain, burning, or electric shock sensation within the field of the ilioinguinal nerve innervation. METHODS. More than 12,000 patients underwent operations in a private practice setting in general hospitals. RESULTS. For 2 decades, more than 12,000 adult patients with reducible groin hernias satisfactorily underwent operations under local anesthesia without complications. CONCLUSIONS. The preferred choice of anesthesia for all reducible adult inguinal hernia repairs is local. It is safe, simple, effective, and economical, without postanesthesia side effects. Furthermore, local anesthesia administered before the incision produces longer postoperative analgesia because local infiltration, theoretically, inhibits the build-up of local nociceptive molecules, resulting in better pain control in the postoperative period. PMID:7986138
Quantum transport with long-range steps on Watts-Strogatz networks
NASA Astrophysics Data System (ADS)
Wang, Yan; Xu, Xin-Jian
2016-07-01
We study the transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN), which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using a localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and the rewiring of links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1) as long-range interactions are intensified. The structural disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring governed by the discrete linear Schrödinger (DLS) equation. It is found that the presence of long-range steps alone does not affect the efficiency of the coherent exciton transport, while random rewiring alone enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened and the transport becomes worse.
NASA Astrophysics Data System (ADS)
Smerieri, M.; Vattuone, L.; Savio, L.; Langer, T.; Tegenkamp, C.; Pfnür, H.; Silkin, V. M.; Rocca, M.
2014-10-01
Understanding acoustic surface plasmons (ASPs) in the presence of nanosized gratings is necessary for the development of future devices that couple light with ASPs. We show here by experiment and theory that two ASPs exist on Au(788), a vicinal surface with an ordered array of monoatomic steps. The ASPs propagate across the steps as long as their wavelength exceeds the terrace width, thereafter becoming localized. Our investigation identifies, for the first time, ASPs coupled with intersubband transitions involving multiple surface-state subbands.
NASA Technical Reports Server (NTRS)
Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)
2003-01-01
A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
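One sifting iteration of the Empirical Mode Decomposition described above can be sketched as follows. This simplified version uses linear rather than the usual cubic-spline envelopes and ignores boundary treatment, so it is illustrative only, not the patented method:

```python
import numpy as np

def sift_once(x, t):
    """One sifting iteration: locate local extrema, interpolate upper and
    lower envelopes through them, and subtract the envelope mean. Repeating
    this until the mean is negligible yields a candidate IMF."""
    d = np.diff(x)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    upper = np.interp(t, t[maxima], x[maxima])   # linear envelope (sketch)
    lower = np.interp(t, t[minima], x[minima])
    return x - 0.5 * (upper + lower)
```

Applied to a sine riding on an offset, one sift already strips the offset in the interior, which is the mechanism by which EMD peels off one intrinsic time scale at a time.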
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
NASA Astrophysics Data System (ADS)
Roh, Joon-Woo; Jee, Joon-Bum; Lim, A.-Young; Choi, Young-Jean
2015-04-01
Korean warm-season rainfall, accounting for about three-fourths of the annual precipitation, is primarily caused by the Changma front, a component of the East Asian summer monsoon, and by localized heavy rainfall associated with convective instability. Various physical mechanisms potentially influence heavy precipitation over South Korea. Representatively, the mid-latitude and subtropical weather fronts, associated with a quasi-stationary moisture convergence zone among varying air masses, make up one of the main rain-bearing synoptic-scale systems. Localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances along the Changma front, or from convective instabilities resulting from unstable air masses, including the direct or indirect effects of typhoons. In recent years, torrential rainfall (more than 30 mm/h of precipitation) in the warm season has increased threefold in Seoul, a metropolitan city in South Korea. To investigate the potential causes of warm-season localized heavy precipitation in South Korea, a localized heavy precipitation case that took place on 20 June 2014 at Seoul was examined. Analysis indicated that this case was mainly caused by a short-wave trough, associated with baroclinic instability to the northwest of Korea, and a thermal low carrying moist, warm air. Convective-scale torrential rain was embedded in these dynamic and thermodynamic structures. In addition, the sensitivity of rainfall amount and maximum rainfall location to the integration time-step size was investigated in simulations of the case using the Weather Research and Forecasting model. Simulations with time-step sizes of 9-27 s, corresponding to horizontal resolutions of 4.5 km and 1.5 km, showed only slight differences in the maximum rainfall amount, and the sensitivity of spatial patterns and temporal variations in rainfall to the time-step size was likewise relatively small. The effect of topography was also important in the localized heavy precipitation simulation.
NASA Astrophysics Data System (ADS)
Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2017-08-01
This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, like Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid have immediate practical applications.
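For reference, the traditional limit that these implicitization techniques relax is the classical Courant bound of fully explicit FDTD, which on a nonuniform grid is dictated by the smallest spacing in each direction. The function below implements that textbook bound, not the paper's relaxed limit:

```python
def fdtd_dt_limit(dx_list, dy_list, dz_list, c):
    """Classical stability limit of explicit FDTD on a (possibly nonuniform)
    tensor-product grid, using the worst-case (smallest) spacing per axis:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    inv2 = (1.0 / min(dx_list) ** 2
            + 1.0 / min(dy_list) ** 2
            + 1.0 / min(dz_list) ** 2)
    return 1.0 / (c * inv2 ** 0.5)
```

Because a single tiny cell anywhere in the grid drags this bound down for the whole simulation, locally implicit treatment of the refined directions is what buys back a practical time step.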
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the 2-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.
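The standard discrete-time coined walk whose squared evolution operator drives the two-step model can be built explicitly. The sketch below assumes a Hadamard coin on a cycle of n nodes and a coin-major basis ordering (both illustrative choices); it constructs U = S(C ⊗ I) and nothing more:

```python
import numpy as np

def coined_walk_operator(n):
    """Evolution operator U = S (H tensor I_n) of the standard discrete-time
    coined walk on a cycle of n nodes, basis index = coin * n + node.
    The two-step model discussed in the text is driven by U @ U."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    C = np.kron(H, np.eye(n))                  # coin acts locally on each node
    S = np.zeros((2 * n, 2 * n))
    for v in range(n):
        S[(v + 1) % n, v] = 1.0                # coin 0 shifts right
        S[n + (v - 1) % n, n + v] = 1.0        # coin 1 shifts left
    return S @ C
```

Both U and U squared are unitary (here real orthogonal), which is the minimal sanity check before comparing the one-step and two-step dynamics.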
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems; local control theory based step size control to minimize the required global number of time steps while controlling accuracy; dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs; Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution; and optimal multilevel preconditioner components that provide level-independent solver convergence.
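"Local control theory based step size control" refers to adjusting the step so the estimated local error tracks a tolerance. A textbook controller of that kind (not the authors' specific controller; factor names and limits are conventional choices):

```python
def next_step(dt, err, tol, order, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary local-error step-size controller for a method of the given
    order: scale dt by (tol/err)^(1/(order+1)), with a safety factor and
    clamps to avoid wild step oscillations."""
    if err == 0.0:
        return dt * fac_max
    scale = fac * (tol / err) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, scale))
```

When the error estimate sits exactly at tolerance the step shrinks slightly (the safety factor), and the clamps keep one bad estimate from collapsing or exploding the step, which is what keeps the global number of steps near minimal.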
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
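The explicit/implicit switching idea can be illustrated on the scalar test equation y' = λy, with Euler methods standing in for the paper's Runge–Kutta pair. The threshold value and all names are assumptions made for the sketch:

```python
def switched_euler(lam, y0, dt, n_steps, stiff_threshold=2.0):
    """Integrate y' = lam * y with a crude stiffness switch: use explicit
    (forward) Euler while |lam| * dt is below the threshold, and implicit
    (backward) Euler, which is unconditionally stable for lam < 0, otherwise."""
    y = y0
    implicit_used = abs(lam) * dt > stiff_threshold
    for _ in range(n_steps):
        if implicit_used:
            y = y / (1.0 - lam * dt)     # backward Euler update
        else:
            y = y * (1.0 + lam * dt)     # forward Euler update
    return y, implicit_used
```

For λ = -100 and dt = 0.1 the explicit update would blow up (|1 + λ dt| = 9), while the switch selects the implicit update and the solution decays as it should.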
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
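The second step above, the Hilbert transform, yields instantaneous frequency from the analytic signal. A minimal sketch using an FFT-based analytic-signal construction; it assumes a clean mono-component input, which is exactly what the preceding decomposition into IMFs is meant to provide:

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency via the Hilbert transform: zero out negative
    frequencies to form the analytic signal, then differentiate the unwrapped
    phase. Returns len(x) - 1 frequency samples in Hz."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0                  # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0                      # Nyquist bin for even n
    analytic = np.fft.ifft(X * h)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

For a pure 50 Hz tone the recovered instantaneous frequency is flat at 50 Hz, illustrating how the method localizes events on the frequency axis sample by sample rather than through a windowed spectrum.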
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
The long-term changes in total ozone, as derived from Dobson measurements at Arosa (1948-2001)
NASA Astrophysics Data System (ADS)
Krzyscin, J. W.
2003-04-01
The longest possible total ozone time series (Arosa, Switzerland) is examined for the detection of trends. A two-step procedure is proposed to estimate the long-term (decadal) variations in the ozone time series. The first step consists of a standard least-squares multiple regression applied to the total ozone monthly means to parameterize "natural" variations (related to the oscillations in atmospheric dynamics) in the analyzed time series. The standard proxies for the dynamical ozone variations are used, including the 11-year solar activity cycle and indices of the QBO, ENSO and NAO. We use the detrended time series of temperature at 100 hPa and 500 hPa over Arosa to parameterize short-term variations (with time periods < 1 year) in total ozone related to local changes in the meteorological conditions over the station. The second step consists of fitting a smooth curve to the total ozone residuals (original minus modeled "natural" time series), taking the time derivative of this curve to obtain local trends, and bootstrapping the residual time series to estimate the standard error of the local trends. Locally weighted regression and wavelet analysis are used to extract the smooth component from the residual time series. The time integral over the local trend values provides the cumulative long-term change since the beginning of the data. Examining the pattern of the cumulative change, we see periods of total ozone loss (from the late 1950s to the early 1960s, probably the effect of the nuclear bomb tests), recovery (mid 1960s to the beginning of the 1970s), apparent decrease (beginning of the 1970s lasting to the mid 1990s, probably the effect of atmospheric contamination by anthropogenic chlorine-containing substances), and a kind of stabilization or recovery (starting in the mid 1990s, probably the effect of the Montreal Protocol to eliminate ozone-depleting substances). We can also estimate that a full ozone recovery (a return to the undisturbed total ozone level of the beginning of the 1970s) is expected around 2050. We propose calculating both the time series of local trends and the cumulative long-term change instead of a single trend value derived as the slope of a straight-line fit to the data.
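The second step, local trends as the time derivative of a smoothed residual curve, integrated back to a cumulative change, can be sketched with a moving average standing in for the locally weighted regression. The window length and the smoother itself are illustrative, not the paper's choices, and the bootstrap error estimation is omitted:

```python
import numpy as np

def local_trends_and_cumulative(residuals, t, window):
    """Smooth the ozone residual series, take its time derivative as the
    local trend, and integrate the trend back to a cumulative long-term
    change since the start of the record."""
    kernel = np.ones(window) / window
    smooth = np.convolve(residuals, kernel, mode="same")
    trend = np.gradient(smooth, t)             # local trend, units per time
    cumulative = np.cumsum(trend) * (t[1] - t[0])
    return trend, cumulative
```

On a purely linear residual series the recovered local trend is constant away from the window edges, which is the sanity check that the derivative-of-smooth construction behaves as intended.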
Asynchronous variational integration using continuous assumed gradient elements.
Wolff, Sebastian; Bucher, Christian
2013-03-01
Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) Liu (2009) [1], exemplified by continuous assumed gradient elements Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.
Computer implemented empirical mode decomposition method, apparatus and article of manufacture
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
1999-01-01
A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
Study of Ion Beam Forming Process in Electric Thruster Using 3D FEM Simulation
NASA Astrophysics Data System (ADS)
Huang, Tao; Jin, Xiaolin; Hu, Quan; Li, Bin; Yang, Zhonghai
2015-11-01
There are two algorithms for simulating the ion beam forming process in an electric thruster. The first is an electrostatic steady-state algorithm. First, ions are launched from an assumed surface placed sufficiently far from the accelerator grids, with the current density computed from a theoretical formula. Second, the particles are advanced one by one according to the ion equations of motion until they leave the computational region. Third, the electrostatic potential is recalculated and updated by solving the Poisson equation. Finally, a convergence test determines whether the calculation should continue; the entire process is repeated until convergence is reached. The second is a time-dependent PIC algorithm. In each global time step, new particles are assumed to be produced in the simulation domain with a prescribed distribution of position and velocity. All particles still in the system are advanced every local time step. Typically, the local time step is set small enough that a particle must be advanced about five times to traverse the element in which it is located.
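The local-time-step rule described for the PIC algorithm (advance each particle with substeps small enough that it moves only a fraction of a cell per substep) can be sketched as follows; the `frac` parameter and the simple explicit push are illustrative assumptions:

```python
def push_particle(x, v, accel, global_dt, cell_size, frac=0.2):
    """Advance one particle over global_dt with local substeps chosen so it
    moves at most frac*cell_size per substep; frac=0.2 mirrors the 'about
    five substeps per cell crossing' rule quoted in the abstract."""
    t = 0.0
    nsub = 0
    while t < global_dt:
        dt = min(global_dt - t, frac * cell_size / max(abs(v), 1e-12))
        v += accel * dt          # simple explicit push (illustrative)
        x += v * dt
        t += dt
        nsub += 1
    return x, v, nsub
```

A faster particle automatically takes more, smaller substeps within the same global step, which is the point of the local time step.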
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources are first considered that are either monochromatic or have a narrow or wide-band frequency content. The source position estimation is well-achieved with an error inferior to the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
NASA Technical Reports Server (NTRS)
Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.
2009-01-01
We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making the time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.
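The storage saving behind the local-in-time idea can be illustrated on a scalar ODE: keep the state only at subinterval boundaries and recompute the forward solution inside each subinterval during the backward (adjoint) sweep. This sketch uses forward Euler and a quadratic tracking functional as stand-ins for the paper's Euler-equations setting; the subinterval-based gradient matches the full-storage discrete adjoint exactly:

```python
import numpy as np

def forward(a, u0, dt, n):
    """Forward Euler trajectory of du/dt = a*u (stand-in for the flow solve)."""
    u = np.empty(n + 1)
    u[0] = u0
    for k in range(n):
        u[k + 1] = u[k] + dt * a * u[k]
    return u

def J(a, u0, tgt, dt):
    """Discrete tracking functional J = 0.5*dt*sum_k (u_k - tgt_k)**2."""
    u = forward(a, u0, dt, len(tgt) - 1)
    return 0.5 * dt * np.sum((u - tgt) ** 2)

def grad_full(a, u0, tgt, dt):
    """Global adjoint: store the whole trajectory, sweep backward once."""
    n = len(tgt) - 1
    u = forward(a, u0, dt, n)
    lam = dt * (u[n] - tgt[n])               # terminal adjoint condition
    g = 0.0
    for k in range(n - 1, -1, -1):
        g += lam * dt * u[k]                 # d u[k+1] / d a = dt * u[k]
        lam = dt * (u[k] - tgt[k]) + lam * (1.0 + dt * a)
    return g

def grad_local(a, u0, tgt, dt, nsub):
    """Local-in-time variant: keep only subinterval-boundary states and
    recompute the forward solution inside each subinterval."""
    n = len(tgt) - 1
    m = n // nsub
    assert m * nsub == n
    cps = [u0]                               # checkpoints at subinterval starts
    u = u0
    for _ in range(nsub):
        for _ in range(m):
            u = u + dt * a * u
        cps.append(u)
    lam = dt * (u - tgt[n])
    g = 0.0
    for s in range(nsub - 1, -1, -1):
        useg = forward(a, cps[s], dt, m)     # re-run just this subinterval
        for j in range(m - 1, -1, -1):
            k = s * m + j
            g += lam * dt * useg[j]
            lam = dt * (useg[j] - tgt[k]) + lam * (1.0 + dt * a)
    return g
```

Only `nsub + 1` states are ever stored instead of `n + 1`, which is the memory reduction the abstract describes.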
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps from three to two, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical properties and requires the same calculations as the previous method. Generally, a small number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
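For context, a minimal 1D Yee-grid update shows the explicit scheme whose stability limit ADI- and LOD-FDTD remove by going implicit; the grid size, Gaussian hard source, and normalized units are illustrative assumptions. At Courant number S = 1 the 1D scheme propagates the pulse exactly one cell per step:

```python
import numpy as np

def fdtd_1d(nx=200, nt=150, S=1.0):
    """1D vacuum Yee update in normalized units; S = c*dt/dx (Courant number).
    A Gaussian hard source drives the left boundary cell."""
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)
    for n in range(nt):
        hy += S * (ez[1:] - ez[:-1])         # magnetic field half-step
        ez[1:-1] += S * (hy[1:] - hy[:-1])   # electric field half-step
        ez[0] = np.exp(-((n - 30.0) / 10.0) ** 2)
    return ez
```

The explicit scheme is stable only for S ≤ 1; the unconditional stability of ADI/LOD variants is what buys the freedom to enlarge the time step.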
Structural-change localization and monitoring through a perturbation-based inverse problem.
Roux, Philippe; Guéguen, Philippe; Baillet, Laurent; Hamze, Alaa
2014-11-01
Structural-change detection and characterization, or structural-health monitoring, is generally based on modal analysis, for detection, localization, and quantification of changes in structure. Classical methods combine both variations in frequencies and mode shapes, which require accurate and spatially distributed measurements. In this study, the detection and localization of a local perturbation are assessed by analysis of frequency changes (in the fundamental mode and overtones) that are combined with a perturbation-based linear inverse method and a deconvolution process. This perturbation method is applied first to a bending beam with the change considered as a local perturbation of the Young's modulus, using a one-dimensional finite-element model for modal analysis. Localization is successful, even for extended and multiple changes. In a second step, the method is numerically tested under ambient-noise vibration from the beam support with local changes that are shifted step by step along the beam. The frequency values are revealed using the random decrement technique that is applied to the time-evolving vibrations recorded by one sensor at the free extremity of the beam. Finally, the inversion method is experimentally demonstrated at the laboratory scale with data recorded at the free end of a Plexiglas beam attached to a metallic support.
Automatic localization of cochlear implant electrodes in CTs with a limited intensity range
NASA Astrophysics Data System (ADS)
Zhao, Yiyuan; Dawant, Benoit M.; Noble, Jack H.
2017-02-01
Cochlear implants (CIs) are neural prosthetics for treating severe-to-profound hearing loss. Our group has developed an image-guided cochlear implant programming (IGCIP) system that uses image analysis techniques to recommend patient-specific CI processor settings to improve hearing outcomes. One crucial step in IGCIP is the localization of CI electrodes in post-implantation CTs. Manual localization of electrodes requires time and expertise. To automate this process, our group has proposed automatic techniques that have been validated on CTs acquired with scanners that produce images with an extended range of intensity values. However, many clinical CTs are acquired with a limited intensity range, which complicates the electrode localization process. In this work, we present a pre-processing step for CTs with a limited intensity range and extend the methods we proposed for full-intensity-range CTs to localize CI electrodes in CTs with a limited intensity range. We evaluate our method on CTs of 20 subjects implanted with CI arrays produced by different manufacturers. Our method achieves a mean localization error of 0.21 mm. This indicates our method is robust for automatic localization of CI electrodes in different types of CTs, which represents a crucial step for translating IGCIP from the research laboratory to clinical use.
Real-time localization of mobile device by filtering method for sensor fusion
NASA Astrophysics Data System (ADS)
Fuse, Takashi; Nagara, Keita
2017-06-01
Most applications on mobile devices require self-localization of the device. Since GPS cannot be used in indoor environments, the positions of mobile devices are estimated autonomously using an IMU. Because the IMU has low accuracy, self-localization in indoor environments remains challenging. Image-based self-localization methods have also been developed, and their accuracy is increasing. This paper develops a self-localization method that works without GPS in indoor environments by simultaneously integrating the sensors on a mobile device, such as the IMU and cameras. The proposed method consists of observation, forecasting, and filtering steps. The position and velocity of the mobile device are defined as a state vector. Observations correspond to the data from the IMU and camera (observation vector), forecasting to the device motion model (system model), and filtering to tracking by inertial surveying with coplanarity and inverse-depth constraints (observation model). Positions of the tracked device are first estimated by the system model (forecasting step), which assumes linear motion. The estimated positions are then optimized against the new observation data based on likelihood (filtering step); this optimization corresponds to maximum a posteriori estimation. A particle filter is used for the calculations in the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment, and the experiments confirm its high performance.
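The forecast/filter cycle described above can be sketched with a bootstrap particle filter on a 1D constant-velocity model; the state, noise levels, and particle count are illustrative assumptions standing in for the paper's IMU-plus-camera models:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(zs, n_particles=2000, q=0.05, r=0.5, dt=1.0):
    """Bootstrap particle filter for a 1D constant-velocity model.
    State = (position, velocity); zs are noisy position observations."""
    parts = np.zeros((n_particles, 2))
    parts[:, 1] = rng.normal(0.0, 1.0, n_particles)      # velocity prior
    est = []
    for z in zs:
        # forecasting step: linear motion model plus process noise
        parts[:, 0] += dt * parts[:, 1] + rng.normal(0, q, n_particles)
        parts[:, 1] += rng.normal(0, q, n_particles)
        # filtering step: weight by observation likelihood, then resample
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2) + 1e-300
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)
        parts = parts[idx]
        est.append(parts[:, 0].mean())
    return np.array(est)
```

The posterior mean of the resampled cloud plays the role of the maximum a posteriori position estimate in the abstract.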
The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI
Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain
2018-01-01
Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency. PMID:29497372
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; vanLeer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
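How a local time step relieves stiffness can be seen on a toy relaxation system with widely separated decay rates: one global stable step crawls on the slow cell, while per-cell steps converge uniformly. This scalar sketch uses illustrative rates and tolerances, not the Navier-Stokes setting of the paper:

```python
import numpy as np

def relax(k, b, dts, tol=1e-8, max_iter=200000):
    """Explicit relaxation of u' = -k*(u - b); dts holds per-cell time steps.
    Returns the iteration at which the residual drops below tol."""
    u = np.zeros_like(b)
    for it in range(1, max_iter + 1):
        res = -k * (u - b)
        if np.max(np.abs(res)) < tol:
            return it
        u = u + dts * res
    return max_iter

k = np.array([1.0, 1e3])        # widely separated rates: a stiff pair
b = np.array([1.0, 1.0])
it_global = relax(k, b, np.full(2, 0.9 / k.max()))  # one globally stable dt
it_local = relax(k, b, 0.9 / k)                     # per-cell local dt
```

With local steps every cell damps its own error at the same rate, which is the equalization of propagation and decay speeds that local preconditioning aims for.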
SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.
Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile
2015-01-01
In recent years we have witnessed a growth in sequencing yield, the number of samples sequenced, and, as a result, the growth of publicly maintained sequence databases. The increase of available data has put high requirements on protein similarity search algorithms, with two ever-opposing goals: how to keep the running times acceptable while maintaining a high-enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, alignment of a query to the whole database is usually too slow. Therefore, the majority of protein similarity search methods apply heuristics to reduce the number of candidate sequences in the database before doing the exact local alignment. However, there is still a need for the alignment of a query sequence to a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times, as a standalone tool, are comparable to the running times of BLAST, it is primarily intended to be used for the exact local alignment phase in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++, and more than 20 times faster than SSW at the time of writing, using multiple queries on the Swiss-Prot and UniRef90 databases.
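The exact local alignment step itself is short to state. A quadratic-time, score-only Smith-Waterman sketch (linear gap penalty; the scoring parameters are illustrative, not SW#db's defaults):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score: the O(len(a)*len(b)) dynamic
    program that tools like SW#db parallelize on GPUs."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment: scores are clamped at zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The zero-clamp is what makes the alignment local: a new alignment can start anywhere, and the best cell anywhere in the matrix is the answer.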
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
Dynamical continuous time random Lévy flights
NASA Astrophysics Data System (ADS)
Liu, Jian; Chen, Xiaosong
2016-03-01
The diffusive behavior of Lévy flights is studied within the framework of the dynamical continuous time random walk (DCTRW) method, with a nonlinear friction introduced in each step. In the DCTRW method, the Lévy random walker in each step moves according to Newton's second law, with the nonlinear friction f(v) = -γ₀v - γ₂v³ considered instead of the Stokes friction. It is shown that after introducing the nonlinear friction, superdiffusive Lévy flights converge and exhibit localization in the long-time limit, while for the Lévy index μ = 2 the motion remains Brownian.
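The role of the cubic friction term can be sketched by integrating the deterministic part dv/dt = -γ₀v - γ₂v³ between kicks: large velocities produced by heavy-tailed Lévy increments are damped faster than Stokes friction alone would manage, which is what tames the superdiffusion. The explicit Euler step and parameter values are illustrative assumptions:

```python
def damp(v, g0=0.1, g2=0.01, dt=0.01, steps=1000):
    """Explicit Euler integration of dv/dt = -g0*v - g2*v**3, the nonlinear
    friction of the model; a Lévy flight would add a heavy-tailed kick to v
    before each relaxation stage (omitted here)."""
    for _ in range(steps):
        v += dt * (-g0 * v - g2 * v ** 3)
    return v
```

At v = 10 with these parameters the cubic term is ten times the linear one, so the decay is markedly faster than the pure-Stokes exponential.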
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1994-01-01
The steady-state solution of the system of equations consisting of the full Navier-Stokes equations and two turbulence equations has been obtained using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bound local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values. Rapid and uniform convergence rates for the flow and turbulence equations are observed.
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
Faculty Articulation with Feeder High Schools and Local Employers.
ERIC Educational Resources Information Center
Parrott, Marietta
As a first step in developing an articulation plan with feeder high schools, a College of the Sequoias (COS) task force developed and distributed a survey to all full-time faculty members to determine if individual faculty members were articulating with feeder high schools and local businesses, and if they would be willing to participate in an…
RF-Based Location Using Interpolation Functions to Reduce Fingerprint Mapping
Ezpeleta, Santiago; Claver, José M.; Pérez-Solano, Juan J.; Martí, José V.
2015-01-01
Indoor RF-based localization using fingerprint mapping requires an initial training step, which represents a time-consuming process. This localization methodology needs a database composed of RSSI (Received Signal Strength Indicator) measurements from the communication transceivers, taken at specific locations within the localization area. But the real-world localization environment is dynamic, and it is necessary to rebuild the fingerprint database when environmental changes occur. This paper explores the use of different interpolation functions to complete the fingerprint mapping needed to achieve the sought accuracy, thereby reducing the effort in the training step. Also, different distributions of test maps and reference points have been evaluated, showing the validity of this proposal and the necessary trade-offs. Results reported show that the same or similar localization accuracy can be achieved even when only 50% of the initial fingerprint reference points are taken. PMID:26516862
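Completing a fingerprint map from a sparser set of reference points can be sketched with bilinear interpolation on a regular RSSI grid, one of the simplest candidate interpolation functions; the grid layout and dBm-like values are illustrative assumptions:

```python
import numpy as np

def bilinear(grid, x, y):
    """Interpolate a fingerprint value at fractional grid coordinates (x, y)
    from an RSSI map sampled at integer reference points."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, grid.shape[0] - 1)
    y1 = min(y0 + 1, grid.shape[1] - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[x0, y0] + fx * (1 - fy) * grid[x1, y0]
            + (1 - fx) * fy * grid[x0, y1] + fx * fy * grid[x1, y1])
```

Bilinear interpolation reproduces any field that varies linearly between reference points exactly, which is why halving the measured grid can cost little accuracy when the RSSI surface is smooth.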
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
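The first, coarse step above, globally identifying active regions via sparse approximation, can be sketched with plain matching pursuit: greedily pick the dictionary column most correlated with the residual. The identity dictionary below is an illustrative assumption (real EEG lead-field columns are neither orthogonal nor trivial):

```python
import numpy as np

def matching_pursuit(D, s, n_iter=2):
    """Greedy sparse approximation: repeatedly pick the dictionary column
    (candidate source location) most correlated with the residual."""
    r = s.astype(float).copy()
    support, coeffs = [], []
    for _ in range(n_iter):
        c = D.T @ r                      # correlations with all columns
        k = int(np.argmax(np.abs(c)))
        support.append(k)
        coeffs.append(c[k])
        r = r - c[k] * D[:, k]           # columns assumed unit-norm
    return support, coeffs, r
```

The recovered support plays the role of the retained regions that the second, fine-discretization step then details.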
A multi-step system for screening and localization of hard exudates in retinal images
NASA Astrophysics Data System (ADS)
Bopardikar, Ajit S.; Bhola, Vishal; Raghavendra, B. S.; Narayanan, Rangavittal
2012-03-01
The number of people being affected by Diabetes mellitus worldwide is increasing at an alarming rate. Monitoring of the diabetic condition and its effects on the human body is therefore of great importance. Of particular interest is diabetic retinopathy (DR), which is a result of prolonged, unchecked diabetes and affects the visual system. DR is a leading cause of blindness throughout the world. At any point in time, 25-44% of people with diabetes are afflicted by DR. Automation of the screening and monitoring process for DR is therefore essential for efficient utilization of healthcare resources and optimizing treatment of the affected individuals. Such automation would use retinal images and detect the presence of specific artifacts such as hard exudates, hemorrhages and soft exudates (that may appear in the image) to gauge the severity of DR. In this paper, we focus on the detection of hard exudates. We propose a two-step system that consists of a screening step that classifies retinal images as normal or abnormal based on the presence of hard exudates, and a detection stage that localizes these artifacts in an abnormal retinal image. The proposed screening step automatically detects the presence of hard exudates with a high sensitivity and positive predictive value (PPV). The detection/localization step uses a k-means based clustering approach to localize hard exudates in the retinal image. Suitable feature vectors are chosen based on their ability to isolate hard exudates while minimizing false detections. The algorithm was tested on a benchmark dataset (DIARETDB1) and was seen to provide superior performance compared to existing methods. The two-step process described in this paper can be embedded in a tele-ophthalmology system to aid with speedy detection and diagnosis of the severity of DR.
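The k-means based localization step can be sketched in one dimension on pixel intensities, with the brighter of two clusters playing the role of candidate hard-exudate pixels; the initialization and toy intensity values are illustrative assumptions, not the paper's feature vectors:

```python
import numpy as np

def kmeans_1d(x, c0, c1, iters=20):
    """Two-cluster k-means on pixel intensities: the bright cluster plays the
    role of candidate exudate pixels, the dark one of background."""
    assign = np.zeros(len(x), dtype=bool)
    for _ in range(iters):
        assign = np.abs(x - c0) > np.abs(x - c1)   # True -> bright cluster
        if (~assign).any():
            c0 = x[~assign].mean()
        if assign.any():
            c1 = x[assign].mean()
    return c0, c1, assign
```

In practice the clustering runs on multi-dimensional feature vectors rather than raw intensity, but the assign-then-update iteration is the same.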
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.
2014-09-01
We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio X⊥/X∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = X⊥L∥²/(X∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
Convergence speeding up in the calculation of the viscous flow about an airfoil
NASA Technical Reports Server (NTRS)
Radespiel, R.; Rossow, C.
1988-01-01
A finite volume method to solve the three-dimensional Navier-Stokes equations was developed. It is based on a cell-vertex scheme with central differences and explicit Runge-Kutta time steps. Good convergence to a stationary solution was obtained by the use of local time steps, implicit smoothing of the residuals, a multigrid algorithm, and a carefully controlled artificial dissipative term. The method is illustrated by results for transonic profiles and airfoils. The method allows a routine solution of the Navier-Stokes equations.
Global phenomena from local rules: Peer-to-peer networks and crystal steps
NASA Astrophysics Data System (ADS)
Finkbiner, Amy
Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts, without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. 
We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term for the motion that does not appear in continuum models, and we determine an explicit dependence on step number.
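The "proportionate load balancing" idea from the networking part of the dissertation can be sketched on a ring of peers: each peer compares its utilization (load divided by capacity) with a neighbor and shifts a little load toward the less utilized side, conserving total load. The update rule and step size below are illustrative assumptions, not the dissertation's exact scheme:

```python
import numpy as np

def balance_ring(load, cap, eta=0.25, iters=400):
    """Proportionate load balancing on a ring of peers: each peer compares
    its utilization load/cap with its right neighbor and shifts a small
    amount of load locally; total load is conserved at every step."""
    load = load.astype(float).copy()
    n = len(load)
    for _ in range(iters):
        for i in range(n):
            j = (i + 1) % n
            u_i, u_j = load[i] / cap[i], load[j] / cap[j]
            # move load from the more utilized peer toward the less utilized
            delta = eta * (u_i - u_j) * (cap[i] * cap[j]) / (cap[i] + cap[j])
            load[i] -= delta
            load[j] += delta
    return load
```

Purely local, pairwise exchanges drive every peer toward the same utilization, so each ends up carrying load in proportion to its capacity: a global balance from a local rule.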
Prediction of flow dynamics using point processes
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; Stemler, Thomas; Eroglu, Deniz; Marwan, Norbert
2018-01-01
Describing a time series parsimoniously is the first step in studying the underlying dynamics. For a time-discrete system, a generating partition provides a compact description such that a time series and a symbolic sequence are one-to-one. But for a time-continuous system, such a compact description does not have a solid basis. Here, we propose to describe a time-continuous time series using a local cross section and the times when the orbit crosses it. We show that if such a series of crossing times and some past observations are given, we can predict the system's dynamics with fine accuracy. This reconstructability depends strongly on neither the size nor the placement of the local cross section if we have a sufficiently long database. We demonstrate the proposed method using the Lorenz model as well as actual measurements of wind speed.
Multigrid time-accurate integration of Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1991-01-01
The system of equations consisting of the full Navier-Stokes equations and two turbulence equations was solved for the steady state using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bounded local time step, while the turbulence equations are advanced with a point-implicit scheme whose time step guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values and obtaining rapid and uniform convergence rates for the flow and turbulence equations.
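The point-implicit treatment of the turbulence equations, which is what buys the positivity guarantee, can be sketched for a single transported scalar. The explicit-production/implicit-destruction split shown here is a standard device for two-equation models, not necessarily the exact linearization used in the paper.

```python
def point_implicit_update(k, production, destruction, dt):
    """One point-implicit step for a scalar turbulence quantity k.

    The production term is treated explicitly; the destruction term is
    treated implicitly, linearized as (destruction / k) * k_new.  For
    non-negative k, production, and destruction this keeps k positive
    for ANY time step, whereas a fully explicit update can drive k
    negative when dt * destruction > k.
    """
    return (k + dt * production) / (1.0 + dt * destruction / k)
```

For example, with a tiny k, zero production, and a large destruction term, the explicit update `k - dt * destruction` goes negative while the point-implicit update merely decays toward zero.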
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements to a computer algorithm developed for the time-accurate flow analysis of rotating machines are presented. The flow model is a finite-volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to partition the field in an orderly manner into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-to-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Kumarasiri, A; Chetvertkov, M
2015-06-15
Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, a structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last-fraction CBCT, where the anatomy difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only; Step 2) DIR with additional use of a structure VOI and a rigidity penalty; and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step 2, the rigidity penalty was applied to bony anatomies and the brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed on our in-house developed software and the erroneous areas were corrected via a local registration using the level-set motion algorithm. Results: After Step 1, there was a considerable amount of registration error in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The remaining local soft-tissue errors could be corrected using the interactive software tool. The estimated interactive correction time was approximately 5 minutes. Conclusion: DIR using only the image pixel intensity was vulnerable to noise and body truncation. A corrective action was inevitable to achieve good quality of registrations.
We found the proposed three-step hybrid method efficient and practical for CT/CBCT registrations in H&N. My department receives grant support from industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.
Look and Feel: Haptic Interaction for Biomedicine
1995-10-01
algorithm that is evaluated within the topology of the model. During each time step, forces are summed for each mobile atom based on external forces...volumetric properties; (b) conserving computation power by rendering media local to the interaction point; and (c) evaluating the simulation within...alteration of the model topology. Simulation of the DSM state is accomplished by a multi-step algorithm that is evaluated within the topology of the
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops·day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).
Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Cheng, Jiahao
2018-02-01
Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high-fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
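The subcycling idea, namely different time steps per sub-domain coupled back at coarse increments, can be sketched on a toy fast/slow system. The predictor here simply freezes the slow variable during the fine sub-steps; the paper's predictor-corrector interface coupling is more elaborate, and the model problem, rates, and step counts below are illustrative assumptions.

```python
def subcycle_integrate(u0=1.0, v0=0.0, t_end=1.0, dt_coarse=0.01, n_sub=10):
    """Advance a slow variable u with dt_coarse and a fast variable v with
    dt_coarse / n_sub, coupling the two only at coarse increments.

    Toy model:  du/dt = -u + v   (slow, like the untwinned domain)
                dv/dt = -50 v + u (fast, like the twin band)
    During the n_sub fine steps the slow value is held frozen (predictor).
    """
    u, v = u0, v0
    t = 0.0
    dt_fine = dt_coarse / n_sub
    while t < t_end - 1e-12:
        u_frozen = u
        # slow sub-domain: one coarse explicit-Euler step
        u = u + dt_coarse * (-u + v)
        # fast sub-domain: n_sub fine steps with the slow value frozen
        for _ in range(n_sub):
            v = v + dt_fine * (-50.0 * v + u_frozen)
        t += dt_coarse
    return u, v
```

The payoff is the same as in the paper: the expensive fine time step is paid only where the fast dynamics live, while the rest of the domain advances with the coarse step.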
Oliver, M; McConnell, D; Romani, M; McAllister, A; Pearce, A; Andronowski, A; Wang, X; Leszczynski, K
2012-01-01
Objective The primary purpose of this study was to assess the practical trade-offs between intensity-modulated radiation therapy (IMRT) and dual-arc volumetric-modulated arc therapy (DA-VMAT) for locally advanced head and neck cancer (HNC). Methods For 15 locally advanced HNC data sets, nine-field step-and-shoot IMRT plans and two full-rotation DA-VMAT treatment plans were created in the Pinnacle3 v. 9.0 (Philips Medical Systems, Fitchburg, WI) treatment planning environment and then delivered on a Clinac iX (Varian Medical Systems, Palo Alto, CA) to a cylindrical detector array. The treatment planning goals were organised into four groups based on their importance: (1) spinal cord, brainstem, optical structures; (2) planning target volumes; (3) parotids, mandible, larynx and brachial plexus; and (4) normal tissues. Results Compared with IMRT, DA-VMAT plans were of equal plan quality (p>0.05 for each group), able to be delivered in a shorter time (3.1 min vs 8.3 min, p<0.0001), delivered fewer monitor units (on average 28% fewer, p<0.0001) and produced similar delivery accuracy (p>0.05 at γ2%/2mm and γ3%/3mm). However, the VMAT plans took more planning time (28.9 min vs 7.7 min per cycle, p<0.0001) and required more data for a three-dimensional dose (20 times more, p<0.0001). Conclusions Nine-field step-and-shoot IMRT and DA-VMAT are both capable of meeting the majority of planning goals for locally advanced HNC. The main trade-offs between the techniques are shorter treatment time for DA-VMAT but longer planning time and the additional resources required for implementation of a new technology. Based on this study, our clinic has incorporated DA-VMAT for locally advanced HNC. Advances in knowledge DA-VMAT is a suitable alternative to IMRT for locally advanced HNC. PMID:22806619
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in the identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved on or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Trujillo, Francisco J; Eberhardt, Sebastian; Möller, Dirk; Dual, Jurg; Knoerzer, Kai
2013-03-01
A model was developed to determine the local changes in concentration of particles and the formation of bands induced by a standing acoustic wave field subjected to a sawtooth frequency-ramping pattern. The mass transport equation was modified to incorporate the effect of acoustic forces on the concentration of particles. This was achieved by balancing the forces acting on the particles. The frequency ramping was implemented as a parametric sweep for the time-harmonic frequency response in time steps of 0.1 s. The physical phenomena of piezoelectricity, acoustic fields, and diffusion of particles were coupled and solved in COMSOL Multiphysics™ (COMSOL AB, Stockholm, Sweden) following a three-step approach. The first step solves the governing partial differential equations describing the acoustic field by assuming that the pressure field achieves a pseudo-steady state. In the second step, the acoustic radiation force is calculated from the pressure field. The final step calculates the locally changing concentration of particles as a function of time by solving the modified equation of particle transport. The diffusivity was calculated as a function of concentration following the Garg and Ruthven equation, which describes the steep increase of diffusivity when the concentration approaches saturation. However, it was found that this steep increase creates numerical instabilities at high voltages (in the piezoelectricity equations) and high initial particle concentrations. The model was simplified to a pseudo-one-dimensional case due to computation power limitations. The particle distribution predicted by the model is in good agreement with the experimental data, as it accurately follows the movement of the bands in the centre of the chamber. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Nonenzymatic Wearable Sensor for Electrochemical Analysis of Perspiration Glucose.
Zhu, Xiaofei; Ju, Yinhui; Chen, Jian; Liu, Deye; Liu, Hong
2018-05-25
We report a nonenzymatic wearable sensor for electrochemical analysis of perspiration glucose. Multipotential steps are applied on a Au electrode, including a high negative pretreatment potential step for proton reduction which produces a localized alkaline condition, a moderate potential step for electrocatalytic oxidation of glucose under the alkaline condition, and a positive potential step to clean and reactivate the electrode surface for the next detection. Fluorocarbon-based materials were coated on the Au electrode for improving the selectivity and robustness of the sensor. A fully integrated wristband is developed for continuous real-time monitoring of perspiration glucose during physical activities, and uploading the test result to a smartphone app via Bluetooth.
Multigrid for hypersonic viscous two- and three-dimensional flows
NASA Technical Reports Server (NTRS)
Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.
1991-01-01
The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of the linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to the steady state in single-grid temporal integration schemes using both regular time stepping and local time stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition, the solution produced is smoother in the far field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, which sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method adjusts the time step size automatically and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
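The quadratic time-step selection can be sketched as follows: choose dt so that the second-order Taylor prediction of the change in membrane potential stays near a tolerance, by solving the resulting quadratic in dt. The specific formula, tolerance, and clamping bounds below are illustrative assumptions, not the paper's exact el/tsr construction.

```python
import math

def quadratic_dt(dv, d2v, tol=1e-3, dt_min=1e-4, dt_max=1.0):
    """Pick a time step from the first and second derivatives of the
    membrane potential, so that the predicted change
        |dv * dt + 0.5 * d2v * dt**2|
    is about tol.  Solving 0.5*|d2v|*dt**2 + |dv|*dt - tol = 0 for its
    positive root gives small steps where the potential changes fast
    (upstroke/peak) and large steps where it is smooth.
    """
    a, b = 0.5 * abs(d2v), abs(dv)
    if a < 1e-14:
        # nearly linear locally: fall back to tol / |dv|
        dt = dt_max if b < 1e-14 else tol / b
    else:
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    # clamp, standing in for the paper's time-step-restriction (tsr) idea
    return min(max(dt, dt_min), dt_max)
```

A steep upstroke (large `dv`) thus yields a much smaller step than a plateau, which is the behavior the abstract describes near the peak region.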
Local Government Solar Project Portal
The Local Government Solar Project Portal provides step-by-step guidance and resources to assist local governments in solar project development, including case studies, fact sheets, presentations, templates, and more.
DISPATCH: a numerical simulation framework for the exa-scale era - I. Fundamentals
NASA Astrophysics Data System (ADS)
Nordlund, Åke; Ramsey, Jon P.; Popovas, Andrius; Küffmeier, Michael
2018-06-01
We introduce a high-performance simulation framework that permits the semi-independent, task-based solution of sets of partial differential equations, typically manifesting as updates to a collection of `patches' in space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks are controlled by a rank-local `dispatcher' which selects, from a set of tasks generally much larger than the number of physical cores (or hardware threads), tasks that are ready for updating. The definition of a task can vary, for example, with some solving the equations of ideal magnetohydrodynamics (MHD), others non-ideal MHD, radiative transfer, or particle motion, and yet others applying particle-in-cell (PIC) methods. Tasks do not have to be grid based, while tasks that are, may use either Cartesian or orthogonal curvilinear meshes. Patches may be stationary or moving. Mesh refinement can be static or dynamic. A feature of decisive importance for the overall performance of the framework is that time-steps are determined and applied locally; this allows potentially large reductions in the total number of updates required in cases when the signal speed varies greatly across the computational domain, and therefore a corresponding reduction in computing time. Another feature is a load balancing algorithm that operates `locally' and aims to simultaneously minimize load and communication imbalance. The framework generally relies on already existing solvers, whose performance is augmented when run under the framework, due to more efficient cache usage, vectorization, local time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI scaling.
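The core of local time stepping under a task dispatcher, namely always updating the task that is furthest behind in global time, can be sketched with a priority queue. This captures only the general scheduling policy; DISPATCH's rank-local dispatcher, readiness rules, and MPI/OpenMP layering are much richer.

```python
import heapq

def dispatch(patch_dts, t_end=1.0):
    """Advance a set of patches, each with its own fixed local time step,
    by repeatedly updating the patch with the smallest local time.

    patch_dts : list of local time steps, one per patch.
    Returns the number of updates each patch performed before reaching
    t_end, showing how slow-wave-speed patches do far fewer updates.
    """
    # heap of (local_time, patch_index); all patches start at t = 0
    heap = [(0.0, i) for i in range(len(patch_dts))]
    heapq.heapify(heap)
    updates = [0] * len(patch_dts)
    while heap[0][0] < t_end:
        t, i = heapq.heappop(heap)
        updates[i] += 1                       # "update" patch i here
        heapq.heappush(heap, (t + patch_dts[i], i))
    return updates
```

For two patches with time steps 0.125 and 0.5, the first performs four times as many updates as the second, which is exactly the saving the framework exploits when the signal speed varies strongly across the domain.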
Souto, Leonardo A V; Castro, André; Gonçalves, Luiz Marcos Garcia; Nascimento, Tiago P
2017-08-08
Natural landmarks are the main features in the next step of research on the localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solve this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the Sl0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is a real-time algorithm that extracts the points referring to the filtered edges and determines which structure they belong to in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to doors and stairs are identified. The approach proposed here has been validated with real robot experiments. The performed tests verify the efficacy of our proposed approach.
Stability of discrete time recurrent neural networks and nonlinear optimization problems.
Singh, Jayant; Barabanov, Nikita
2016-02-01
We consider the method of Reduction of Dissipativity Domain to prove global Lyapunov stability of discrete-time recurrent neural networks. The standard and advanced criteria for absolute stability of these essentially nonlinear systems produce rather weak results. The method mentioned above is proved to be more powerful. It involves a multi-step procedure with maximization of special nonconvex functions over polytopes at every step. We derive conditions which guarantee the existence of at most one point of local maximum for such functions over every hyperplane. This nontrivial result is valid for a wide range of neuron transfer functions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Self-propelled motion of Au-Si droplets on Si(111) mediated by monoatomic step dissolution
NASA Astrophysics Data System (ADS)
Curiotto, S.; Leroy, F.; Cheynis, F.; Müller, P.
2015-02-01
Using low-energy electron microscopy, we show that the spontaneous motion of gold droplets on silicon (111) is chemically driven: the droplets tend to dissolve silicon monoatomic steps to reach the temperature-dependent Au-Si equilibrium stoichiometry. The details of the motion differ with droplet size. In the first stages of Au deposition, small droplets nucleate at steps and move continuously on single terraces. The droplets temporarily pin at each step they meet during their motion. During pinning, the growing droplets become supersaturated in Au. They depin from the steps when a notch nucleates on the upper step. The droplets then climb up and locally dissolve the Si steps, leaving behind them deep tracks formed by notched steps. Measurements of the dissolution rate and the displacement lengths enable us to describe the motion mechanism quantitatively, also in terms of the anisotropy of Si dissolution kinetics. Scaling laws for the droplet position as a function of time are proposed: x ∝ t^n with 1/3 < n < 2/3.
NASA Astrophysics Data System (ADS)
Semplice, Matteo; Loubère, Raphaël
2018-02-01
In this paper we propose a third-order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells in a so-called candidate solution computed at each stage of a third-order Runge-Kutta scheme. Such detection may involve different properties: derived from physics, such as positivity; from numerics, such as non-oscillatory behavior; or from computer requirements, such as the absence of NaNs. Troubled cell values are discarded and recomputed starting again from the previous time step using a more dissipative scheme, but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner: if some refinement is needed at the end of a time step, then the current time step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
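The a posteriori limiting loop can be sketched as: compute a high-order candidate everywhere, detect inadmissible cells, and recompute only those cells with the robust low-order scheme. The detector below checks only NaN/infinity and positivity; the paper's detection also includes a non-oscillation criterion, and the two step functions are placeholders for the third-order and first-order updates.

```python
import math

def is_troubled(candidate):
    """Flag candidate cell values that are inadmissible: NaN/inf, or loss
    of positivity (the conserved quantity is assumed positive here)."""
    return [not math.isfinite(c) or c <= 0.0 for c in candidate]

def a_posteriori_update(u, high_order_step, low_order_step):
    """Accept the high-order candidate in every admissible cell; recompute
    only the troubled cells from the previous state u with the more
    dissipative low-order scheme."""
    candidate = high_order_step(u)
    fallback = None
    out = []
    for i, (c, bad) in enumerate(zip(candidate, is_troubled(candidate))):
        if bad:
            if fallback is None:
                fallback = low_order_step(u)  # computed lazily, once
            out.append(fallback[i])
        else:
            out.append(c)
    return out
```

In a real solver the low-order recomputation is applied to a small neighborhood of each troubled cell rather than cell-by-cell, but the accept-or-recompute logic is the same.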
Local Foods, Local Places Toolkit
Toolkit to help communities that want to use local foods to spur revitalization. The toolkit gives step-by-step instructions to help communities plan and host a workshop and create an action plan to implement.
Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method
NASA Technical Reports Server (NTRS)
Whitaker, David L.
1993-01-01
A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local time stepping, switched evolution relaxation (SER), preconditioning, and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to those of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
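The switched evolution relaxation (SER) strategy mentioned above adjusts the CFL number from the residual history; a common form grows the CFL in inverse proportion to the drop in the nonlinear residual. The cap value and this particular update rule are illustrative assumptions rather than the paper's exact settings.

```python
def ser_cfl(cfl, res_prev, res_new, cfl_max=1e5):
    """Switched Evolution Relaxation: scale the CFL number by the ratio of
    the previous to the current nonlinear residual norm, capped at cfl_max.

    Early on, when the residual is large and barely dropping, the scheme
    marches cautiously (small CFL, time-accurate-like steps); as the
    residual collapses, the CFL grows and the implicit backward-Euler
    update approaches a full Newton step.
    """
    return min(cfl * res_prev / res_new, cfl_max)
```

For example, a residual drop by a factor of ten multiplies the CFL by ten on the next step, until the cap is reached.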
NASA Astrophysics Data System (ADS)
Rangaswamy, T.; Nagaraja, R.
2018-04-01
This study focused on the design and development of a solid carbide step drill (K34) to drill holes in composite materials such as carbon fiber reinforced plastic (CFRP) and glass fiber reinforced plastic (GFRP). The K34 step drill replaces stepwise drilling of 6.5 mm and 9 mm diameter holes, which reduces the setup time, cutting speed, feed rate, cost, and delamination, and increases the production rate. Several researchers have analyzed the effect of the drilling process on various fiber reinforced plastic composites using conventional tools and machinery. However, this operation can lead to different kinds of damage such as delamination, fiber pullout, and local cracks. To avoid the problems encountered during drilling, suitable tool material and geometry are essential. This paper deals with the design and development of the K34 carbide step drill used to drill holes in CFRP and GFRP laminates. An experimental study was carried out to investigate the tool geometry, feed rate, and cutting speed that avoid delamination and fiber breakage.
A Method for Response Time Measurement of Electrosensitive Protective Devices.
Dźwiarek, Marek
1996-01-01
A great step toward the improvement of safety at work was made when electrosensitive protective devices (ESPDs) were applied to the protection of press and robot-assisted manufacturing system operators. The way the device is mounted is crucial. The parameters of ESPD mounting that ensure a safe distance from the controlled dangerous zone are response time, sensitivity, and the dimensions of the detection zone. The proposed experimental procedure of response time measurement is realized in two steps, with a test piece penetrating the detection zone twice. In the first step, low-speed penetration (at a speed v_m) enables the detection zone border to be localized. In the second step of the measurement, the probe is injected at a high speed v_d. The actuator rod position is measured, and when it is equal to the value L registered by the earlier measurements, time counting begins, as well as the monitoring of the state of the equipment under test (EUT) output relays. After the state changes, the time t_p is registered. The experimental procedure is realized on a special experimental stand. Because the stand has been constructed for certification purposes, the design satisfies the requirements imposed by Polski Komitet Normalizacyjny (PKN, 1995). The experimental results prove the measurement error to be smaller than ±0.6 ms.
Influence of numerical dissipation in computing supersonic vortex-dominated flows
NASA Technical Reports Server (NTRS)
Kandil, O. A.; Chuang, A.
1986-01-01
Steady supersonic vortex-dominated flows are solved using the unsteady Euler equations for conical and three-dimensional flows around sharp- and round-edged delta wings. The computational method is a finite-volume scheme which uses four-stage Runge-Kutta time stepping with explicit second- and fourth-order dissipation terms. The grid is generated by a modified Joukowski transformation. The steady flow solution is obtained through time stepping with initial conditions corresponding to the freestream conditions, and the bow shock is captured as part of the solution. The scheme is applied to flat-plate and elliptic-section wings with a leading-edge sweep of 70 deg at an angle of attack of 10 deg and a freestream Mach number of 2.0. Three grid sizes of 29 x 39, 65 x 65, and 100 x 100 have been used. The results for sharp-edged wings are consistent across all grid sizes and variations of the artificial viscosity coefficients. The results for round-edged wings show that separated and attached flow solutions can be obtained by varying the artificial viscosity coefficients. They also show that the solutions are independent of the way time stepping is done: local time stepping and global minimum time stepping produce the same solutions.
NASA Astrophysics Data System (ADS)
Wang, Shuangyi; Housden, James; Singh, Davinder; Rhode, Kawal
2017-12-01
3D trans-oesophageal echocardiography (TOE) has in recent years become a powerful tool for monitoring intra-operative catheters used during cardiac procedures. However, control of the TOE probe remains a manual task, so the operator must hold the probe for long periods, sometimes in a radiation environment. To solve this problem, an add-on robotic system has been developed for holding and manipulating a commercial TOE probe. This paper focuses on making automatic adjustments to the probe pose in order to accurately monitor moving catheters. The positioning strategy is divided into an initialization step based on a pre-planning method and a localized-adjustment step based on robotic differential kinematics and related image-servoing techniques. Both steps are described along with simulation experiments performed to validate the concept. The results indicate errors of less than 0.5 mm for the initialization step and less than 2 mm for the localized adjustments. Compared to the much larger live 3D image volume, these errors are small, and the methods are considered promising. Future work will focus on evaluating the method in a real TOE scanning scenario.
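A minimal sketch of the differential-kinematics idea behind the localized-adjustment step, using a planar two-link arm and a damped-least-squares (pseudo-inverse) update. The arm model, gains, and damping are hypothetical stand-ins for the actual TOE-probe robot and its image-space error signal.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (an assumed stand-in
    for the probe-positioning mechanism)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def servo_to(target, q0, gain=0.5, damping=1e-3, n_iter=200):
    """Resolved-rate servoing: repeatedly map the task-space error into
    small joint corrections dq = J^T (J J^T + damping*I)^-1 e."""
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        e = target - fk(q)                     # task-space error
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), e)
        q += gain * dq
    return q
```

The damping term keeps the update well behaved near singular configurations, which matters when small image-based corrections are applied continuously.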
Nuclear fusion during yeast mating occurs by a three-step pathway.
Melloy, Patricia; Shen, Shu; White, Erin; McIntosh, J Richard; Rose, Mark D
2007-11-19
In Saccharomyces cerevisiae, mating culminates in nuclear fusion to produce a diploid zygote. Two models for nuclear fusion have been proposed: a one-step model in which the outer and inner nuclear membranes and the spindle pole bodies (SPBs) fuse simultaneously and a three-step model in which the three events occur separately. To differentiate between these models, we used electron tomography and time-lapse light microscopy of early stage wild-type zygotes. We observe two distinct SPBs in approximately 80% of zygotes that contain fused nuclei, whereas we only see fused or partially fused SPBs in zygotes in which the site of nuclear envelope (NE) fusion is already dilated. This demonstrates that SPB fusion occurs after NE fusion. Time-lapse microscopy of zygotes containing fluorescent protein tags that localize to either the NE lumen or the nucleoplasm demonstrates that outer membrane fusion precedes inner membrane fusion. We conclude that nuclear fusion occurs by a three-step pathway.
2013-03-01
annual targets between fiscal years 2008 and 2011 for the number of individuals in Yemen benefiting from food donations. However, reports to Congress...annual performance targets three times for the number of individuals in Yemen benefiting from food donations, reports to Congress about the program...security to several hundred locally employed staff. However, the embassy has deemed other steps proposed by locally employed staff, including telework and
Jansen, Sophia C; Haveman-Nies, Annemien; Duijzer, Geerke; Ter Beek, Josien; Hiddink, Gerrit J; Feskens, Edith J M
2013-05-08
Although many evidence-based diabetes prevention interventions exist, they are not easily applicable in real-life settings. Moreover, there is a lack of examples which describe the adaptation process of these interventions to practice. In this paper we present an example of such an adaptation. We adapted the SLIM (Study on Lifestyle intervention and Impaired glucose tolerance Maastricht) diabetes prevention intervention to a Dutch real-life setting, in a joint decision making process of intervention developers and local health care professionals. We used 3 adaptation steps in accordance with current adaptation frameworks. In the first step, the elements of the SLIM intervention were identified. In the second step, these elements were judged for their applicability in a real-life setting. In the third step, adaptations were proposed and discussed for those elements which were deemed not applicable. Participants invited for this process included intervention developers and local health care professionals (n=19). In the first adaptation step, a total of 22 intervention elements were identified. In the second step, 12 of these 22 intervention elements were judged as inapplicable. In the third step, a consensus was achieved for the adaptations of all 12 elements. The adapted elements were in the following categories: target population, techniques, intensity, delivery mode, materials, organisational structure, and political and financial conditions. The adaptations either lay in changing the SLIM protocol (6 elements) or the real-life working procedures (1 element), or a combination of both (4 elements). The positive result of this study is that a consensus was achieved within a relatively short time period (nine months) between the developers of the SLIM intervention and local health care professionals on the adaptations needed to make SLIM applicable in a Dutch real-life setting. 
Our example shows that it is possible to combine the perspectives of scientists and practitioners, and to strike a balance between the evidence base and applicability concerns.
Real-Time Visualization of an HPF-based CFD Simulation
NASA Technical Reports Server (NTRS)
Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. Visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
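The Grünwald-Letnikov derivative and the adaptive-memory idea can be sketched as follows. The blocking rule (block widths that double with age, each block represented by the function value at its midpoint while the weights inside the block are summed exactly) is a simplified stand-in for the authors' scheme, and `base` and `max_block` are assumed parameters.

```python
import math

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    generated by the standard recurrence."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f, t, alpha, h):
    """Full-history GL fractional derivative of f at time t, step h."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return math.fsum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha

def gl_derivative_adaptive(f, t, alpha, h, base=32, max_block=64):
    """Adaptive-memory variant: recent history at full resolution, older
    history in blocks of doubling width (capped at max_block), each block
    contributing (sum of its exact weights) * f(midpoint of block)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    total, k, block = 0.0, 0, 1
    while k <= n:
        if k >= base:
            block = min(2 * block, max_block)
        end = min(k + block, n + 1)
        wsum = math.fsum(w[k:end])
        mid = (k + end - 1) // 2
        total += wsum * f(t - mid * h)
        k = end
    return total / h ** alpha
```

For smooth f the blocked sum evaluates f far fewer times than the full sum while keeping every weight's contribution, which is the essence of the weighted-history idea.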
Encounter times of chromatin loci influenced by polymer decondensation
NASA Astrophysics Data System (ADS)
Amitai, A.; Holcman, D.
2018-03-01
The time for a DNA sequence to find its homologous counterpart depends on a long random search inside the cell nucleus. Using polymer models, we compute here the mean first encounter time (MFET) between two sites located on two different polymer chains and confined locally by potential wells. We find that reducing tethering forces acting on the polymers results in local decondensation, and numerical simulations of the polymer model show that these changes are associated with a reduction of the MFET by several orders of magnitude. We derive here a new asymptotic formula for the MFET, confirmed by Brownian simulations. We conclude from the present modeling approach that the fast search for homology is mediated by a local chromatin decondensation due to the release of multiple chromatin tethering forces. The present scenario could explain how the homologous recombination pathway for double-stranded DNA repair is controlled by its random search step.
The TTSD, in conjunction with a multi-agency Chesapeake Bay Project team, has developed this handbook to provide state and local governments and others with the "how-to" steps needed to design, employ, and maintain water quality monitoring, data management/delivery, and communications syst...
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and the performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
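A single-time-step counterfactual (difference) reward of the kind QUICR-learning builds on can be sketched in a toy congestion game. The quadratic congestion cost, the two-route domain, and all learning parameters are assumptions for illustration; this is not the paper's QUICR update for delayed rewards.

```python
import random

def global_reward(counts):
    """System-level reward: quadratic congestion penalty per route
    (an assumed stand-in for the paper's traffic domain)."""
    return -sum(c * c for c in counts)

def difference_reward(actions, i, n_routes):
    """Counterfactual reward for agent i: G with the agent present minus
    G with the agent removed from its chosen route."""
    counts = [0] * n_routes
    for a in actions:
        counts[a] += 1
    g = global_reward(counts)
    counts[actions[i]] -= 1
    return g - global_reward(counts)

def train(n_agents=10, n_routes=2, episodes=500, eps=0.1, lr=0.1, seed=0):
    """Independent epsilon-greedy Q-learners driven by difference rewards;
    agents spread across routes because D_i penalizes crowded choices."""
    rng = random.Random(seed)
    q = [[0.0] * n_routes for _ in range(n_agents)]
    for _ in range(episodes):
        acts = [rng.randrange(n_routes) if rng.random() < eps
                else max(range(n_routes), key=lambda a: q[i][a])
                for i in range(n_agents)]
        for i in range(n_agents):
            d = difference_reward(acts, i, n_routes)
            q[i][acts[i]] += lr * (d - q[i][acts[i]])
    return q
```

Because the difference reward is aligned with the global reward but depends mostly on the agent's own choice, learners receive a much cleaner signal than they would from G itself.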
Improvements in brain activation detection using time-resolved diffuse optical means
NASA Astrophysics Data System (ADS)
Montcel, Bruno; Chabrier, Renee; Poulet, Patrick
2005-08-01
An experimental method based on time-resolved absorbance difference is described. The absorbance difference is calculated over each temporal step of the optical signal with the time-resolved Beer-Lambert law. Finite element simulations show that each step corresponds to a different scanned zone and that the cerebral contribution increases with the arrival time of photons. Experiments are conducted at 690 and 830 nm with a time-resolved system consisting of picosecond laser diodes, a micro-channel plate photomultiplier tube and photon counting modules. The hemodynamic response to a short finger-tapping stimulus is measured over the motor cortex. Time-resolved absorbance difference maps show that variations in the optical signals are not localized in superficial regions of the head, which testifies to their cerebral origin. Furthermore, improvement in the detection of cerebral activation is achieved through an increase in the absorbance variations by a factor of almost 5 for time-resolved measurements as compared to non-time-resolved measurements.
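The time-resolved Beer-Lambert relation can be illustrated numerically: photons detected at arrival time t have traveled a pathlength of roughly v*t, so a small absorption change dmua produces dA(t) = ln(N_rest(t)/N_act(t)) ~ dmua*v*t per temporal step. The synthetic decay curve and the value of dmua below are assumptions, not the experimental data.

```python
import numpy as np

v = 3e10 / 1.4                              # photon speed in tissue, cm/s (n ~ 1.4)
t = np.linspace(0.2e-9, 2e-9, 50)           # photon arrival times, s
n_rest = 1e6 * np.exp(-t / 1e-9)            # synthetic rest-state TPSF (assumed)
dmua_true = 0.02                            # cm^-1, assumed activation change
n_act = n_rest * np.exp(-dmua_true * v * t)  # activated TPSF: extra absorption

delta_A = np.log(n_rest / n_act)            # absorbance difference per time gate
dmua_est = np.polyfit(v * t, delta_A, 1)[0]  # slope vs pathlength recovers dmua
```

Later time gates carry larger absorbance differences because late photons travel longer paths (and, in the head, deeper ones), which is why the time-resolved maps favor the cerebral contribution.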
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they are frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states.
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
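The contrast between an unreliable fixed-step explicit scheme and a robust one can be reproduced on a linear reservoir, a toy stand-in for a conceptual hydrological store (not one of the paper's six models). The forcing and rate constants below are arbitrary.

```python
import numpy as np

def simulate(P, k, S0, dt, scheme="explicit"):
    """Linear reservoir dS/dt = P - k*S integrated to t = 10 with a fixed
    step. Explicit Euler is unstable for k*dt > 2; implicit Euler is
    unconditionally stable."""
    S = S0
    out = []
    for _ in range(int(round(10.0 / dt))):
        if scheme == "explicit":                  # fixed-step explicit Euler
            S = S + dt * (P - k * S)
        else:                                     # implicit (backward) Euler
            S = (S + dt * P) / (1.0 + dt * k)
        out.append(S)
    return np.array(out)
```

With a fast store (large k) and a coarse step, the explicit trajectory blows up while the implicit one settles at the correct steady state P/k; calibration on top of such explicit artifacts is exactly where the deformed objective functions come from.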
NASA Astrophysics Data System (ADS)
Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham
2010-03-01
In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step, or, in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaikh, Zubair; Bhaskar, Ankush; Raghav, Anil, E-mail: raghavanil1984@gmail.com
Transient interplanetary disturbances evoke a short-time decrease of the cosmic-ray flux, known as a Forbush decrease (FD). The traditional model and understanding of the Forbush decrease suggest that the sub-structures of an interplanetary counterpart of a coronal mass ejection (ICME) contribute independently to the cosmic-ray flux decrease. These sub-structures, the shock-sheath and the magnetic cloud (MC), manifest as the classical two-step Forbush decrease. Recent work by Raghav et al. has shown multi-step decreases and recoveries within the shock-sheath. However, this cannot be explained by the ideal shock-sheath barrier model. Furthermore, they suggested that local structures within the ICME's sub-structures (MC and shock-sheath) could explain this deviation of the FD profile from the classical FD. Therefore, the present study investigates in detail the cause of the multi-step cosmic-ray flux decrease and the respective recovery within the shock-sheath. A 3D-hodogram method is utilized to obtain more details regarding the local structures within the shock-sheath. This method unambiguously suggests the formation of small-scale local structures within the ICME (in the shock-sheath and even in the MC). Moreover, the method can differentiate the turbulent and ordered interplanetary magnetic field (IMF) regions within the sub-structures of the ICME. The study explicitly suggests that the turbulent and ordered IMF regions within the shock-sheath influence cosmic-ray variations differently.
Improving safety on rural local and tribal roads site safety analysis - user guide #1.
DOT National Transportation Integrated Search
2014-08-01
This User Guide presents an example of how rural local and Tribal practitioners can study conditions at a preselected site. It demonstrates the step-by-step safety analysis process presented in Improving Safety on Rural Local and Tribal Roads Saf...
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single-neuron spike times via polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system.
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
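The integrating-factor idea in point (i) can be sketched for a conductance-based membrane equation with conductances frozen over a step. The parameter values are illustrative, and spike detection and the spike-spike corrections are omitted.

```python
import math

def exp_step(v, g_leak, e_leak, g_exc, e_exc, dt):
    """One integrating-factor (exponential) update for
    dV/dt = -g_leak*(V - e_leak) - g_exc*(V - e_exc),
    with conductances held constant over the step. This is the exact
    solution of the linear ODE, so it remains stable and accurate even in
    stiff, high-conductance states with large dt."""
    g_tot = g_leak + g_exc
    v_inf = (g_leak * e_leak + g_exc * e_exc) / g_tot   # steady-state voltage
    return v_inf + (v - v_inf) * math.exp(-g_tot * dt)
```

Because the update is exact for frozen conductances, it composes over sub-steps and can never overshoot the reversal potentials, unlike forward Euler at the same step size.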
NASA Astrophysics Data System (ADS)
Kolokolov, Yury; Monovskaya, Anna
2016-06-01
The paper continues the application of the bifurcation analysis in the research on local climate dynamics based on processing the historically observed data on the daily average land surface air temperature. Since the analyzed data are from instrumental measurements, we are doing the experimental bifurcation analysis. In particular, we focus on the discussion where is the joint between the normal dynamics of local climate systems (norms) and situations with the potential to create damages (hazards)? We illustrate that, perhaps, the criteria for hazards (or violent and unfavorable weather factors) relate mainly to empirical considerations from human opinion, but not to the natural qualitative changes of climate dynamics. To build the bifurcation diagrams, we base on the unconventional conceptual model (HDS-model) which originates from the hysteresis regulator with double synchronization. The HDS-model is characterized by a variable structure with the competition between the amplitude quantization and the time quantization. Then the intermittency between three periodical processes is considered as the typical behavior of local climate systems instead of both chaos and quasi-periodicity in order to excuse the variety of local climate dynamics. From the known specific regularities of the HDS-model dynamics, we try to find a way to decompose the local behaviors into homogeneous units within the time sections with homogeneous dynamics. Here, we present the first results of such decomposition, where the quasi-homogeneous sections (QHS) are determined on the basis of the modified bifurcation diagrams, and the units are reconstructed within the limits connected with the problem of shape defects. 
Nevertheless, the proposed analysis of the local climate dynamics (QHS-analysis) allows to exhibit how the comparatively modest temperature differences between the mentioned units in an annual scale can step-by-step expand into the great temperature differences of the daily variability at a centennial scale. Then the norms and the hazards relate to the fundamentally different viewpoints, where the time sections of months and, especially, seasons distort the causal effects of natural dynamical processes. The specific circumstances to realize the qualitative changes of the local climate dynamics are summarized by the notion of a likely periodicity. That, in particular, allows to explain why 30-year averaging remains the most common rule so far, but the decadal averaging begins to substitute that rule. We believe that the QHS-analysis can be considered as the joint between the norms and the hazards from a bifurcation analysis viewpoint, where the causal effects of the local climate dynamics are projected into the customary timescale only at the last step. We believe that the results could be interesting to develop the fields connected with climatic change and risk assessment.
Forecasting seasonal outbreaks of influenza.
Shaman, Jeffrey; Karspeck, Alicia
2012-12-11
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
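The weekly assimilation step can be sketched as a deterministic ensemble-adjustment Kalman update for a directly observed scalar (e.g., the web-based infection estimate). The Gaussian prior ensemble and the variances below are assumptions; the real system couples such an update to a dynamical influenza model.

```python
import numpy as np

def eakf_update(ensemble, y_obs, obs_var):
    """Deterministic ensemble-adjustment Kalman filter update for a
    directly observed scalar state: shift the ensemble mean toward the
    observation and shrink the spread by the Kalman factor."""
    prior_mean = ensemble.mean()
    prior_var = ensemble.var(ddof=1)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + y_obs / obs_var)
    # shift the mean, deterministically contract the deviations
    return post_mean + np.sqrt(post_var / prior_var) * (ensemble - prior_mean)
```

Repeating this each week as new surveillance estimates arrive keeps the forecast ensemble anchored to observations, and its residual spread is what conveys forecast confidence.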
Four Steps to Being a Local Advocate for Physical Education
ERIC Educational Resources Information Center
Richards, K. Andrew R.
2015-01-01
Although advocacy can also take place at the state and national levels, the foundation of advocacy is high-quality teaching and local initiatives. The purpose of this article is to review four steps that can be taken by PE teachers who are interested in engaging in local advocacy efforts.
NASA Astrophysics Data System (ADS)
Nguyen, Dam Thuy Trang; Tong, Quang Cong; Ledoux-Rak, Isabelle; Lai, Ngoc Diep
2016-01-01
In this work, the local thermal effect induced by a continuous-wave laser has been investigated and exploited to optimize the low one-photon absorption (LOPA) direct laser writing (DLW) technique for fabrication of polymer-based microstructures. It was demonstrated that the temperature of the excited SU8 photoresist at the focus increases to above 100 °C due to the high excitation intensity and stabilizes at that temperature thanks to the use of a continuous-wave laser at 532 nm wavelength. This optically induced thermal effect immediately completes the crosslinking process at the photopolymerized region, allowing the desired structures to be obtained without the conventional post-exposure bake (PEB) step usually performed after exposure. Theoretical calculation of the temperature distribution induced by local optical excitation, using the finite element method, confirmed the experimental results. The LOPA-based DLW technique combined with the optically induced thermal effect (local PEB) shows great advantages over the traditional PEB, such as simplicity, short fabrication time, and high resolution. In particular, it overcomes the accumulation effect inherent in one-photon optical lithography, resulting in small and uniform structures with very short lattice constants.
1990-06-01
synchronization. We consider the performance of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time ...from universities and from industry, who have resident appointments for limited periods of time, and by consultants. Members of NASA's research staff...convergence to steady state is also being studied together with D. Gottlieb. The idea is to generalize the concept of local-time stepping by minimizing the
Metascalable molecular dynamics simulation of nano-mechano-chemistry
NASA Astrophysics Data System (ADS)
Shimojo, F.; Kalia, R. K.; Nakano, A.; Nomura, K.; Vashishta, P.
2008-07-01
We have developed a metascalable (or 'design once, scale on new architectures') parallel application-development framework for first-principles based simulations of nano-mechano-chemical processes on emerging petaflops architectures based on spatiotemporal data locality principles. The framework consists of (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms, (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these scalable algorithms onto hardware. The EDC-STEP-HCD framework exposes and expresses maximal concurrency and data locality, thereby achieving parallel efficiency as high as 0.99 for 1.59-billion-atom reactive force field molecular dynamics (MD) and 17.7-million-atom (1.56 trillion electronic degrees of freedom) quantum mechanical (QM) MD in the framework of the density functional theory (DFT) on adaptive multigrids, in addition to 201-billion-atom nonreactive MD, on 196 608 IBM BlueGene/L processors. We have also used the framework for automated execution of adaptive hybrid DFT/MD simulation on a grid of six supercomputers in the US and Japan, in which the number of processors changed dynamically on demand and tasks were migrated according to unexpected faults. The paper presents the application of the framework to the study of nanoenergetic materials: (1) combustion of an Al/Fe2O3 thermite and (2) shock initiation and reactive nanojets at a void in an energetic crystal.
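Spatial data locality of the kind the EDC framework exploits is classically realized with linked-cell decomposition. The following sketch (not the authors' code) builds a cell list and finds all pairs within a cutoff under periodic boundaries in O(N) expected time; the box size, cutoff, and particle counts are arbitrary.

```python
import numpy as np

def neighbor_pairs_cells(pos, box, rc):
    """Linked-cell neighbor search in a cubic periodic box: bin particles
    into cells at least rc wide, then test only same-cell and adjacent-cell
    candidates. Returns the set of (i, j) pairs with distance < rc."""
    n_cells = max(1, int(box // rc))
    cell_sz = box / n_cells
    cells = {}
    for i, p in enumerate(pos):
        key = tuple((p // cell_sz).astype(int) % n_cells)
        cells.setdefault(key, []).append(i)
    pairs = set()
    for key, members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = tuple((np.array(key) + (dx, dy, dz)) % n_cells)
                    for i in members:
                        for j in cells.get(nb, []):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)   # minimum image
                                if np.dot(d, d) < rc * rc:
                                    pairs.add((i, j))
    return pairs
```

Because each particle only ever interacts with a bounded neighborhood of cells, the work per particle is constant, which is the locality property that makes such algorithms scale to billions of atoms.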
Image Description with Local Patterns: An Application to Face Recognition
NASA Astrophysics Data System (ADS)
Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro
In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of the pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information on the image, with the advantage of less computation than traditional approaches such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. We further formulate a boosted algorithm to extract the most discriminant local features. Evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in face recognition, gender estimation and facial expression applications.
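For comparison, the basic 3x3 LBP operator that the paper benchmarks against can be sketched as follows. The neighbor ordering and the >= threshold convention are common choices, not necessarily the exact variant used in the paper.

```python
import numpy as np

def lbp_3x3(image):
    """Basic Local Binary Patterns: threshold the 8 neighbors of every
    interior pixel at the center value and pack the bits (clockwise from
    the top-left neighbor) into one byte per pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    center = image[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << np.uint8(bit)
    return codes
```

Each output pixel is a code in 0..255 describing its local texture; histograms of these codes over image regions are the usual LBP descriptor that the 1D multi-scan patterns aim to compute more cheaply.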
Focal cryotherapy: step by step technique description
Redondo, Cristina; Srougi, Victor; da Costa, José Batista; Baghdad, Mohammed; Velilla, Guillermo; Nunes-Silva, Igor; Bergerat, Sebastien; Garcia-Barreras, Silvia; Rozet, François; Ingels, Alexandre; Galiano, Marc; Sanchez-Salas, Rafael; Barret, Eric; Cathelineau, Xavier
2017-01-01
Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68-year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors under ultrasound guidance. A cystoscopy confirms the correct positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated one time. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1–5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment. PMID:28727387
Ookinete-induced midgut peroxidases detonate the time bomb in anopheline mosquitoes.
Kumar, Sanjeev; Barillas-Mury, Carolina
2005-07-01
Previous analysis of the temporal-spatial relationship between ookinete migration and the cellular localization of genes mediating midgut immune defense responses suggested that, in order to survive, parasites must complete invasion before toxic chemicals ("a bomb") are generated by the invaded cell. Recent studies indicate that ookinete invasion induces tyrosine nitration as a two-step reaction, in which NOS induction is followed by a localized increase in peroxidase activity. Peroxidases utilize nitrite and hydrogen peroxide as substrates, and detonate the time bomb by generating reactive nitrogen intermediates, such as nitrogen dioxide, which mediate nitration. There is evidence that peroxidases also mediate antimicrobial responses to bacteria, fungi and parasites in a broad range of biological systems, including humans and plants. Defense reactions that generate toxic chemicals are also potentially harmful to the host mounting the response and often result in apoptosis. The two-step nitration pathway is probably an ancient response, as it has also been described in vertebrate leukocytes, and probably evolved as a mechanism to circumscribe the toxic products generated during defense responses involving protein nitration.
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time-dependent non-equilibrium radiation diffusion equations are important for modeling the transport of energy through radiation in optically thick regimes, with applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems often exhibit a wide range of scales in space and time and are extremely challenging to solve. To simulate these systems efficiently and accurately, we describe our research on combining techniques that will also find broader use in long-term time integration of nonlinear multi-physics systems: implicit time integration for efficient treatment of stiff multi-physics systems, local-control-theory-based step size control to minimize the global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
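Two ingredients of the abstract above, implicit integration and local-error-based step size control, can be illustrated in miniature. The sketch below is our own assumption-laden toy, not the authors' code: implicit Euler with Newton iteration on a stiff scalar problem, with the step adapted by step doubling (compare one step of size dt against two steps of dt/2 and rescale dt from the error estimate):

```python
import numpy as np

def implicit_euler_step(y, t, dt, f, dfdy):
    """One implicit-Euler step via Newton iteration on g(z) = z - y - dt*f(t+dt, z)."""
    z = y
    for _ in range(20):
        g = z - y - dt * f(t + dt, z)
        z -= g / (1.0 - dt * dfdy(t + dt, z))
        if abs(g) < 1e-12:
            break
    return z

def integrate(f, dfdy, y0, t0, t1, tol=1e-4, dt=1e-3):
    """Adaptive step-size control by step doubling: accept when the
    one-step vs. two-half-steps discrepancy is below tol, then rescale dt."""
    t, y, steps = t0, y0, 0
    while t < t1:
        dt = min(dt, t1 - t)
        big = implicit_euler_step(y, t, dt, f, dfdy)
        half = implicit_euler_step(y, t, dt / 2, f, dfdy)
        small = implicit_euler_step(half, t + dt / 2, dt / 2, f, dfdy)
        err = abs(small - big)
        if err <= tol:                   # accept the (more accurate) composite step
            t, y = t + dt, small
            steps += 1
        # grow/shrink dt; sqrt because implicit Euler is first order
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return y, steps

# stiff linear test problem: y' = -1000*(y - cos t), y(0) = 0
f = lambda t, y: -1000.0 * (y - np.cos(t))
dfdy = lambda t, y: -1000.0
y_end, steps = integrate(f, dfdy, 0.0, 0.0, 2.0)
```

After the fast initial transient decays, the controller grows dt well past the explicit-Euler stability bound of 2/1000 for this problem, which is the point of pairing implicit integration with local error control.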
A semi-Lagrangian approach to the shallow water equation
NASA Technical Reports Server (NTRS)
Bates, J. R.; Mccormick, Stephen F.; Ruge, John; Sholl, David S.; Yavneh, Irad
1993-01-01
We present a formulation of the shallow water equations that emphasizes the conservation of potential vorticity. A locally conservative semi-Lagrangian time-stepping scheme is developed, which leads to a system of three coupled PDEs to be solved at each time level. We describe a smoothing analysis of these equations, from which an effective multigrid solver is constructed. Some results from applying this solver to the static version of these equations are presented.
Real-Time Maps of Fluid Flow Fields in Porous Biomaterials
Mack, Julia J.; Youssef, Khalid; Noel, Onika D.V.; Lake, Michael P.; Wu, Ashley; Iruela-Arispe, M. Luisa; Bouchard, Louis-S.
2013-01-01
Mechanical forces such as fluid shear have been shown to enhance cell growth and differentiation, but knowledge of their mechanistic effect on cells is limited because the local flow patterns and associated metrics are not precisely known. Here we present real-time, noninvasive measures of local hydrodynamics in 3D biomaterials based on nuclear magnetic resonance. Microflow maps were further used to derive pressure, shear and fluid permeability fields. Finally, remodeling of collagen gels in response to precise fluid flow parameters was correlated with structural changes. It is anticipated that accurate flow maps within 3D matrices will be a critical step towards understanding cell behavior in response to controlled flow dynamics. PMID:23245922
Aparna, Deshpande; Kumar, Sunil; Kamalkumar, Shukla
2017-10-27
To determine the percentage of patients with necrotizing pancreatitis (NP) requiring intervention and the types of interventions performed. Outcomes of patients undergoing step-up necrosectomy were compared with those undergoing direct necrosectomy. Operative mortality, overall mortality, morbidity and overall length of stay were determined. After institutional ethics committee clearance and waiver of consent, records of patients with pancreatitis were reviewed. After excluding patients as per criteria, epidemiologic and clinical data of patients with NP were noted. The treatment protocol was reviewed. Data of patients in whom the step-up approach was used were compared with those in whom it was not. A total of 41 interventions were required in 39% of patients. About 60% of interventions targeted the pancreatic necrosis while the rest were required to deal with complications of the necrosis. Image-guided percutaneous catheter drainage was done in 9 patients for infected necrosis, all of whom required further necrosectomy, and in 3 patients with sterile necrosis. Direct retroperitoneal or anterior necrosectomy was performed in 15 patients. The average time to first intervention was 19.6 d in the non-step-up group (range 11-36) vs 18.22 d in the step-up group (range 13-25). The average hospital stay was 33.3 d in the non-step-up group vs 38 d in the step-up group. Mortality in the step-up group was 0% (0/9) vs 13% (2/15) in the non-step-up group. Overall mortality was 10.3%, while post-operative mortality was 8.3%. Average hospital stay was 22.25 d. Early conservative management plays an important role in the management of NP. In patients who require intervention, the approach used and the timing of intervention should be based upon the clinical condition and the local expertise available. Delaying intervention and using minimally invasive means when intervention is necessary is desirable. The step-up approach should be used whenever possible. Even when classical retroperitoneal catheter drainage is not feasible, an attempt should be made to follow the principles of the step-up technique to buy time. The outcomes of patients in the step-up group and the non-step-up group are comparable in our series. Interventions for bowel diversion, bypass and hemorrhage control should be done at the appropriate times.
NASA Astrophysics Data System (ADS)
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.
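As a toy illustration of how local maxima of the continuous wavelet transform modulus highlight modes, the following sketch extracts two modal frequencies from a synthetic two-mode response. This is our own minimal example using a standard analytic Morlet wavelet applied directly to the signal; it is not the paper's regressive formulation or its PSD-based transform:

```python
import numpy as np

def morlet_cwt(x, scales, dt, w0=6.0):
    """CWT of a real signal with an analytic Morlet wavelet (FFT-based)."""
    n = len(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)        # angular frequencies
    X = np.fft.fft(x)
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier transform of the Morlet wavelet dilated to scale s
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * omega - w0)**2) * (omega > 0)
        W[i] = np.fft.ifft(X * psi_hat) * np.sqrt(s)
    return W

dt = 0.005
t = np.arange(0, 8, dt)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)     # two "modes": 5 and 12 Hz

freqs = np.linspace(2, 20, 200)                        # candidate frequencies (Hz)
scales = 6.0 / (2*np.pi*freqs)                         # Morlet scale <-> frequency
ridge = np.abs(morlet_cwt(x, scales, dt)).mean(axis=1) # time-averaged modulus

# local maxima of the modulus along the scale axis highlight the modes
interior = (ridge[1:-1] > ridge[:-2]) & (ridge[1:-1] > ridge[2:])
peaks = freqs[1:-1][interior]
```

The two local maxima of the time-averaged modulus land near 5 Hz and 12 Hz, mirroring the paper's idea that modes appear as local maxima of the transform modulus in the frequency-scale domain.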
Zhang, Yun; Liu, Fang; Nie, Jinfang; Jiang, Fuyang; Zhou, Caibin; Yang, Jiani; Fan, Jinlong; Li, Jianping
2014-05-07
In this paper, we report for the first time an electrochemical biosensor for single-step, reagentless, and picomolar detection of a sequence-specific DNA-binding protein using a double-stranded, electrode-bound DNA probe terminally modified with a redox active label close to the electrode surface. This new methodology is based upon local repression of electrolyte diffusion associated with protein-DNA binding that leads to reduction of the electrochemical response of the label. In the proof-of-concept study, the resulting electrochemical biosensor was quantitatively sensitive to the concentrations of the TATA binding protein (TBP, a model analyte) ranging from 40 pM to 25.4 nM with an estimated detection limit of ∼10.6 pM (∼80 to 400-fold improvement on the detection limit over previous electrochemical analytical systems).
NASA Astrophysics Data System (ADS)
Ali, Riyaz Ahmad Mohamed; Villariza Espulgar, Wilfred; Aoki, Wataru; Jiang, Shu; Saito, Masato; Ueda, Mitsuyoshi; Tamiya, Eiichi
2018-03-01
Nanoplasmonic biosensors show high potential as label-free devices for continuous monitoring in biomolecular analyses. However, most current sensors comprise multiple dedicated layers with complicated fabrication procedures, which increases production time and manufacturing costs. In this work, we report the synergistic integration of cell-trapping microwell structures with plasmonic sensing nanopillar structures in a single-layered substrate by one-step thermal nanoimprinting. Here, microwell arrays are used for isolating cells, wherein gold-capped nanostructures sense changes in local refractive index via localized surface plasmon resonance (LSPR). Hence, proteins secreted from trapped cells can be detected label-free as peak shifts in absorbance spectra. The fabricated device showed a detection limit of 10 ng/µL anti-IgA. In a trial analysis with Pichia pastoris cells, a red shift of 6.9 nm was observed over 12 h, which is likely due to protein secretion from the cells. This approach provides an inexpensive, rapid, and reproducible alternative for mass production of biosensors for continuous biomolecular analyses.
On the computational aspects of comminution in discrete element method
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Wriggers, Peter
2018-04-01
In this paper, computational aspects of crushing/comminution of granular materials are addressed. For crushing, a maximum-tensile-stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by using an iterative scheme which, depending on geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed which tends to alleviate the local unstable motion of particles and increases the computational efficiency.
In situ nanoscale observations of gypsum dissolution by digital holographic microscopy.
Feng, Pan; Brand, Alexander S; Chen, Lei; Bullard, Jeffrey W
2017-06-01
Recent topography measurements of gypsum dissolution have not reported the absolute dissolution rates, but instead focus on the rates of formation and growth of etch pits. In this study, the in situ absolute retreat rates of gypsum (010) cleavage surfaces at etch pits, at cleavage steps, and at apparently defect-free portions of the surface are measured in flowing water by reflection digital holographic microscopy. Observations made on randomly sampled fields of view on seven different cleavage surfaces reveal a range of local dissolution rates, the local rate being determined by the topographical features at which material is removed. Four characteristic types of topographical activity are observed: 1) smooth regions, free of etch pits or other noticeable defects, where dissolution rates are relatively low; 2) shallow, wide etch pits bounded by faceted walls which grow gradually at rates somewhat greater than in smooth regions; 3) narrow, deep etch pits which form and grow throughout the observation period at rates that exceed those at the shallow etch pits; and 4) relatively few, submicrometer cleavage steps which move in a wave-like manner and yield local dissolution fluxes that are about five times greater than at etch pits. Molar dissolution rates at all topographical features except submicrometer steps can be aggregated into a continuous, mildly bimodal distribution with a mean of 3.0 µmol m⁻² s⁻¹ and a standard deviation of 0.7 µmol m⁻² s⁻¹.
Local repair of stoma prolapse: Case report of an in vivo application of linear stapler devices.
Monette, Margaret M; Harney, Rodney T; Morris, Melanie S; Chu, Daniel I
2016-11-01
One of the most common late complications following stoma construction is prolapse. Although the majority of prolapses can be managed conservatively, surgical revision is required with incarceration/strangulation, and in certain cases laparotomy and/or stoma reversal are not appropriate. This report will inform surgeons on safe and effective approaches to revising prolapsed stomas using local techniques. A 58-year-old female with an obstructing rectal cancer previously received a diverting transverse loop colostomy. On completion of neoadjuvant treatment, re-staging found new lung metastases. She was scheduled for further chemotherapy but developed incarceration of a prolapsed segment of her loop colostomy. As there was no plan to resect her primary rectal tumor at the time, a local revision was preferred. Linear staplers were applied to the prolapsed stoma in step-wise fashion to locally revise the incarcerated prolapse. Post-operative recovery was satisfactory with no complications or recurrence of prolapse. We detail in step-wise fashion a technique using linear stapler devices that can be used to locally revise prolapsed stoma segments and therefore avoid a laparotomy. The procedure is technically easy to perform with satisfactory post-operative outcomes. We additionally review all previous reports of local repairs and show the evolution of local prolapse repair to the currently reported technique. This report offers surgeons an alternative, efficient and effective option for addressing the complications of stoma prolapse. While future studies are needed to assess long-term outcomes, in the short term our report confirms the safety and effectiveness of this local technique.
Localization of magnetic pills
Laulicht, Bryan; Gidmark, Nicholas J.; Tripathi, Anubhav; Mathiowitz, Edith
2011-01-01
Numerous therapeutics demonstrate optimal absorption or activity at specific sites in the gastrointestinal (GI) tract. Yet, safe, effective pill retention within a desired region of the GI remains an elusive goal. We report a safe, effective method for localizing magnetic pills. To ensure safety and efficacy, we monitor and regulate attractive forces between a magnetic pill and an external magnet, while visualizing internal dose motion in real time using biplanar videofluoroscopy. Real-time monitoring yields direct visual confirmation of localization completely noninvasively, providing a platform for investigating the therapeutic benefits imparted by localized oral delivery of new and existing drugs. Additionally, we report the in vitro measurements and calculations that enabled prediction of successful magnetic localization in the rat small intestines for 12 h. The designed system for predicting and achieving successful magnetic localization can readily be applied to any area of the GI tract within any species, including humans. The described system represents a significant step forward in the ability to localize magnetic pills safely and effectively anywhere within the GI tract. What our magnetic pill localization strategy adds to the state of the art, if used as an oral drug delivery system, is the ability to monitor the force exerted by the pill on the tissue and to locate the magnetic pill within the test subject all in real time. This advance ensures both safety and efficacy of magnetic localization during the potential oral administration of any magnetic pill-based delivery system. PMID:21257903
Effects of Imperfect Dynamic Clamp: Computational and Experimental Results
Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.
2008-01-01
In the dynamic clamp technique, a typically nonlinear feedback system delivers electrical current to an excitable cell that represents the actions of "virtual" ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but their accuracy can be degraded by imperfect real-time performance. Here we systematically evaluate the performance requirements for dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999
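The sensitivity to controller update rate can be reproduced with a passive-membrane toy model. This is our own sketch: the leak and virtual conductance values are illustrative assumptions, not the paper's sodium/potassium models. The "clamp" current is recomputed only once per controller step and held constant in between, mimicking a slow feedback loop:

```python
import numpy as np

# Passive membrane with a "virtual" conductance injected by dynamic clamp:
# C dV/dt = -g_leak*(V - E_leak) - I_dc + I_bias, where the clamp current
# I_dc = g_v*(V - E_v) is recomputed only once per controller step.
C, g_leak, E_leak = 1.0, 0.1, -65.0       # nF, uS, mV (illustrative values)
g_v, E_v, I_bias = 0.5, -80.0, 1.0        # virtual conductance and bias (nA)

def simulate(step_ms, t_end=50.0, dt=0.01):
    n = int(t_end / dt)
    V, I_dc = -65.0, 0.0
    hold = max(1, int(step_ms / dt))
    trace = np.empty(n)
    for k in range(n):
        if k % hold == 0:                  # controller update: recompute I_dc
            I_dc = g_v * (V - E_v)
        V += dt * (-g_leak*(V - E_leak) - I_dc + I_bias) / C
        trace[k] = V
    return trace

t = np.arange(1, int(50.0 / 0.01) + 1) * 0.01
V_ss = (g_leak*E_leak + g_v*E_v + I_bias) / (g_leak + g_v)
V_ref = V_ss + (-65.0 - V_ss) * np.exp(-t * (g_leak + g_v) / C)  # ideal clamp
err_fast = np.sqrt(np.mean((simulate(0.05) - V_ref)**2))  # 50 us controller step
err_slow = np.sqrt(np.mean((simulate(2.0)  - V_ref)**2))  # 2 ms controller step
```

With a 2 ms controller step, the stale conductance values distort the trajectory relative to the ideal continuous clamp by an order of magnitude more than the 50 µs controller does, echoing the abstract's finding that errors grow with step duration relative to the membrane time constants.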
a Weighted Local-World Evolving Network Model Based on the Edge Weights Preferential Selection
NASA Astrophysics Data System (ADS)
Li, Ping; Zhao, Qingzhen; Wang, Haitang
2013-05-01
In this paper, we use an edge-weight preferential attachment mechanism to build a new local-world evolutionary model for weighted networks. Unlike previous models, the local world in our model consists of edges instead of nodes. At each time step, we connect a new node to two existing nodes in the local world through edge-weight preferential selection. Theoretical analysis and numerical simulations show that the scale of the local world affects the weight distribution, the strength distribution and the degree distribution. We present simulations of the clustering coefficient and the dynamics of infectious disease spreading. The weight dynamics of our network model can portray the structure of realistic networks such as the neural network of the nematode C. elegans and an online social network.
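A growth step of this kind can be sketched in a few lines. The specific rules below are our assumptions for illustration: the local world is a random sample of existing edges, the new node attaches to both endpoints of an edge drawn with probability proportional to its weight, and the selected edge's weight is reinforced. The paper's exact selection and weight-update rules may differ:

```python
import random

def grow(n_nodes, m_local=10, w0=1.0, delta=1.0, seed=1):
    """Grow a weighted network: each time step a new node picks one edge
    from a random 'local world' of existing edges, with probability
    proportional to edge weight, and connects to both of its endpoints."""
    random.seed(seed)
    w = {(0, 1): w0, (1, 2): w0, (0, 2): w0}   # seed network: a triangle
    for new in range(3, n_nodes):
        local = random.sample(list(w), min(m_local, len(w)))
        total = sum(w[e] for e in local)
        r, acc = random.uniform(0, total), 0.0
        for e in local:                        # roulette-wheel selection
            acc += w[e]
            if acc >= r:
                break
        i, j = e
        w[e] += delta                          # reinforce the selected edge
        w[(i, new)] = w0                       # attach the new node to both
        w[(j, new)] = w0                       # endpoints of that edge
    return w

net = grow(200)
strength = {}                                  # node strength = summed weights
for (i, j), wij in net.items():
    strength[i] = strength.get(i, 0.0) + wij
    strength[j] = strength.get(j, 0.0) + wij
```

Each step adds exactly two edges, so a run to 200 nodes yields 3 + 2·197 edges; the `strength` dictionary is the quantity whose distribution the abstract studies.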
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupling model is improved either by refining the grid or time-step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
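The temporal part of the downward coupling can be sketched as a plain interpolation step. The function name and the linear-in-time rule below are illustrative assumptions (the paper's interpolation approaches may be higher order): coarse-step parent heads at the child boundary nodes are resampled onto the finer child time levels:

```python
import numpy as np

def child_boundary_heads(t_child, t_parent, h_parent):
    """Interpolate parent-scale heads (n_parent_times x n_boundary_nodes)
    onto the finer child time levels; linear in time, per boundary node."""
    h_parent = np.asarray(h_parent, dtype=float)
    return np.vstack([
        np.interp(t_child, t_parent, h_parent[:, k])
        for k in range(h_parent.shape[1])
    ]).T

# parent model solves with a 10-day step; child model needs heads every day
t_parent = np.array([0.0, 10.0, 20.0])
h_parent = np.array([[50.0, 48.0],      # heads at two boundary nodes, t = 0
                     [49.0, 47.5],      # t = 10
                     [48.5, 47.0]])     # t = 20
t_child = np.arange(0.0, 21.0, 1.0)
h_child = child_boundary_heads(t_child, t_parent, h_parent)
```

The gap between this piecewise-linear boundary forcing and the true parent-scale transient is exactly the kind of temporal truncation error the abstract attributes to the coupling, and which its adaptive local time stepping reduces.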
NASA Astrophysics Data System (ADS)
Erdt, Marius; Sakas, Georgios
2010-03-01
This work presents a novel approach for model-based segmentation of the kidney in images acquired by computed tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach using local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. These local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: 1) user-guided positioning and 2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation on 30 clinical CT data sets shows an average Dice similarity coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
Renormalized Hamiltonian for a peptide chain: Digitalizing the protein folding problem
NASA Astrophysics Data System (ADS)
Fernández, Ariel; Colubri, Andrés
2000-05-01
A renormalized Hamiltonian for a flexible peptide chain is derived to generate the long-time limit dynamics compatible with a coarsening of torsional conformation space. The renormalization procedure is tailored taking into account the coarse graining imposed by the backbone torsional constraints due to the local steric hindrance and the local backbone-side-group interactions. Thus, the torsional degrees of freedom for each residue are resolved modulo basins of attraction in its so-called Ramachandran map. This Ramachandran renormalization (RR) procedure is implemented so that the chain is energetically driven to form contact patterns as their respective collective topological constraints are fulfilled within the coarse description. In this way, the torsional dynamics are digitalized and become codified as an evolving pattern in a binary matrix. Each accepted Monte Carlo step in a canonical ensemble simulation is correlated with the real mean first passage time it takes to reach the destination coarse topological state. This real-time correlation enables us to test the RR dynamics by comparison with experimentally probed kinetic bottlenecks along the dominant folding pathway. Such intermediates are scarcely populated at any given time, but they determine the kinetic funnel leading to the active structure. This landscape region is reached through kinetically controlled steps needed to overcome the conformational entropy of the random coil. The results are specialized for the bovine pancreatic trypsin inhibitor, corroborating the validity of our method.
Nnodim, Joseph O; Strasburg, Debra; Nabozny, Martina; Nyquist, Linda; Galecki, Andrzej; Chen, Shu; Alexander, Neil B
2006-12-01
To compare the effect of two 10-week balance training programs, Combined Balance and Step Training (CBST) versus tai chi (TC), on balance and stepping measures. Prospective intervention trial. Local senior centers and congregate housing facilities. Adults aged 65 and older with at least mild impairment in the ability to perform unipedal stance and tandem walk. Participants were allocated to TC (n = 107, mean age 78) or CBST, an intervention focused on improving dynamic balance and stepping (n = 106, mean age 78). At baseline and 10 weeks, participants were tested on static balance (Unipedal Stance and Tandem Stance (TS)), stepping (Maximum Step Length, Rapid Step Test), and Timed Up and Go (TUG). Performance improved more with CBST than with TC, ranging from 5% to 10% for the stepping tests (Maximum Step Length and Rapid Step Test) and 9% for TUG. The improvement in TUG represented more than 1 second. Greater improvements were also seen in static balance ability (in TS) with CBST than with TC. Of the two training programs, variants of each of which have been proven to reduce falls, CBST results in modest improvements in balance, stepping, and functional mobility versus TC over a 10-week period. Future research should include a prospective comparison of fall rates in response to these two balance training programs.
Bastian, Mikaël; Sackur, Jérôme
2013-01-01
Research from the last decade has successfully used two kinds of thought reports in order to assess whether the mind is wandering: random thought-probes and spontaneous reports. However, none of these two methods allows any assessment of the subjective state of the participant between two reports. In this paper, we present a step by step elaboration and testing of a continuous index, based on response time variability within Sustained Attention to Response Tasks (N = 106, for a total of 10 conditions). We first show that increased response time variability predicts mind wandering. We then compute a continuous index of response time variability throughout full experiments and show that the temporal position of a probe relative to the nearest local peak of the continuous index is predictive of mind wandering. This suggests that our index carries information about the subjective state of the subject even when he or she is not probed, and opens the way for on-line tracking of mind wandering. Finally we proceed a step further and infer the internal attentional states on the basis of the variability of response times. To this end we use the Hidden Markov Model framework, which allows us to estimate the durations of on-task and off-task episodes. PMID:24046753
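One simple way to build such a continuous index is a rolling measure of response-time variability. The sketch below is our own minimal construction (a centered rolling coefficient of variation over a hypothetical window length, applied to synthetic data); the authors' exact index and the Hidden Markov Model stage are not reproduced here:

```python
import numpy as np

def variability_index(rts, win=9):
    """Continuous index of response-time variability: standard deviation of
    RTs in a centered rolling window, normalized by the window mean."""
    rts = np.asarray(rts, dtype=float)
    half = win // 2
    idx = np.full(len(rts), np.nan)          # undefined at the edges
    for i in range(half, len(rts) - half):
        w = rts[i - half:i + half + 1]
        idx[i] = w.std() / w.mean()
    return idx

rng = np.random.default_rng(0)
on_task = rng.normal(0.45, 0.03, 60)     # stable RTs: a focused episode
off_task = rng.normal(0.45, 0.12, 60)    # variable RTs: a mind-wandering episode
rts = np.concatenate([on_task, off_task])
idx = variability_index(rts)
```

Because the index is defined at (almost) every trial rather than only at probe times, it can be read off at any temporal position, which is the property the abstract exploits when relating probe reports to the nearest local peak of the index.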
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
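The boundedness mechanism can be seen in a scalar discrete-time sketch (our own toy parameters, not the paper's Hayes-equation setting): an open-loop-unstable map is steered by feedback that can only take integer multiples of a quantization step q. Far from the origin the feedback dominates and pulls the state inward; within half a quantum the feedback rounds to zero and the repelling fixed point takes over, so the trajectory settles into small bounded oscillations whose amplitude scales with q:

```python
import numpy as np

a, b, q = 1.1, 0.6, 0.01            # unstable open-loop gain; feedback gain; quantum

def run(q, x0=0.004, n=2000):
    x, traj = x0, []
    for _ in range(n):
        u = -b * q * np.round(x / q)   # control quantized to multiples of q
        x = a * x + u                  # open loop alone (u = 0) diverges: a > 1
        traj.append(x)
    return np.array(traj)

amp = np.abs(run(q)[500:]).max()            # bounded micro-oscillations, O(q)
amp_small = np.abs(run(q / 10)[500:]).max() # finer quantum -> smaller oscillations
```

The state never converges to the (locally repelling) origin, yet never escapes a band of a few quanta; shrinking q shrinks the oscillation amplitude accordingly, consistent with the abstract's observation that the amplitude is a function of the quantization step.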
Local symmetries and order-disorder transitions in small macroscopic Wigner islands.
Coupier, Gwennou; Guthmann, Claudine; Noat, Yves; Jean, Michel Saint
2005-04-01
The influence of local order on the disordering scenario of small Wigner islands is discussed. A first disordering step is revealed by the time correlation functions and is linked to individual excitations resulting in configuration transitions, which are very sensitive to the local symmetries. This is followed by two other transitions, corresponding to orthoradial and radial diffusion, for which both individual and collective excitations play a significant role. Finally, we show that, contrary to large systems, the focus that is commonly placed on collective excitations in such small systems through the Lindemann criterion must be applied carefully in order to clearly identify the relative contributions to the whole disordering process.
Real-time video communication improves provider performance in a simulated neonatal resuscitation.
Fang, Jennifer L; Carey, William A; Lang, Tara R; Lohse, Christine M; Colby, Christopher E
2014-11-01
To determine if a real-time audiovisual link with a neonatologist, termed video-assisted resuscitation or VAR, improves provider performance during a simulated neonatal resuscitation scenario. Using high-fidelity simulation, 46 study participants were presented with a neonatal resuscitation scenario. The control group performed independently, while the intervention group utilized VAR. Time to effective ventilation was compared using Wilcoxon rank sum tests. Providers' use of the corrective steps for ineffective ventilation per the NRP algorithm was compared using Cochran-Armitage trend tests. The time needed to establish effective ventilation was significantly reduced in the intervention group when compared to the control group (mean time 2 min 42 s versus 4 min 11 s, p<0.001). In the setting of ineffective ventilation, only 35% of control subjects used three or more of the first five corrective steps and none of them used all five steps. Providers in the control group most frequently neglected to open the mouth and increase positive pressure. In contrast, all of those in the intervention group used all of the first five corrective steps, p<0.001. All participants in the control group decided to intubate the infant to establish effective ventilation, compared to none in the intervention group, p<0.001. Using VAR during a simulated neonatal resuscitation scenario significantly reduces the time to establish effective ventilation and improves provider adherence to NRP guidelines. This technology may be a means for regional centers to support local providers during a neonatal emergency to improve patient safety and improve neonatal outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu
2015-09-21
Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g. data from studies based on next-generation sequencing technology. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach that integrates the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data, finding interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
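The local trend score whose null distribution is being approximated can be illustrated with a simplified no-delay version: each series is reduced to a sequence of up/down/flat symbols, and the score is the maximum subarray sum of the elementwise products of the two trend sequences, normalized by length. This sketch (our reading, with Kadane's algorithm for the maximum subarray) omits the time-delay shifts and the significance approximation itself.

```python
def trend(series):
    """Reduce a series to its step-by-step trend: +1 up, -1 down, 0 flat."""
    return [(1 if b > a else -1 if b < a else 0)
            for a, b in zip(series, series[1:])]

def local_trend_score(x, y):
    """Best contiguous stretch of co-trending behavior, normalized to [0, 1]."""
    prod = [u * v for u, v in zip(trend(x), trend(y))]
    best = cur = 0
    for p in prod:                 # Kadane: maximum subarray sum, floored at 0
        cur = max(0, cur + p)
        best = max(best, cur)
    return best / len(prod)

# Two series that rise and fall together score 1.0.
score = local_trend_score([1, 2, 3, 2, 1, 2], [2, 3, 4, 3, 2, 3])
```

A permutation test would shuffle one series many times and recompute the score; the paper's contribution is a closed-form approximation of that null distribution.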
Automated Installation Verification of COMSOL via LiveLink for MATLAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowell, Michael W
Verifying that a local software installation performs as the developer intends is a potentially time-consuming but necessary step for nuclear safety-related codes. Automating this process not only saves time, but can increase reliability and scope of verification compared to ‘hand’ comparisons. While COMSOL does not include automatic installation verification as many commercial codes do, it does provide tools such as LiveLink™ for MATLAB® and the COMSOL API for use with Java® through which the user can automate the process. Here we present a successful automated verification example of a local COMSOL 5.0 installation for nuclear safety-related calculations at the Oak Ridge National Laboratory’s High Flux Isotope Reactor (HFIR).
Monocular Visual Odometry Based on Trifocal Tensor Constraint
NASA Astrophysics Data System (ADS)
Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.
2018-02-01
For the problem of real-time precise localization in the urban street, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and the trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu’s 2-step EKF method, the algorithm is more accurate, meeting the needs of real-time accurate localization in cities.
An INS/WiFi Indoor Localization System Based on the Weighted Least Squares.
Chen, Jian; Ou, Gang; Peng, Ao; Zheng, Lingxiang; Shi, Jianghong
2018-05-07
For smartphone indoor localization, an INS/WiFi hybrid localization system is proposed in this paper. Acceleration and angular velocity are used to estimate step lengths and headings. The problem with INS is that positioning errors grow with time. Using radio signal strength as a fingerprint is a widely used technology. The main problem with fingerprint matching is mismatching due to noise. Taking into account these different shortcomings and advantages, inertial sensors and WiFi from smartphones are integrated for indoor positioning. For the hybrid localization system, pre-processing techniques are used to enhance the WiFi signal quality, and the inertial navigation system limits the range of WiFi matching. A Multi-dimensional Dynamic Time Warping (MDTW) algorithm is proposed to calculate the distance between the measured signals and the fingerprints in the database. An MDTW-based weighted least squares (WLS) method is proposed for fusing multiple fingerprint localization results to improve positioning accuracy and robustness. Using four modes (calling, dangling, handheld and pocket), we carried out walking experiments in a corridor, a study room and a library stack room. Experimental results show that the average localization accuracy of the hybrid system is about 2.03 m.
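The two fusion ingredients can be sketched minimally, under our assumptions (not spelled out in the abstract) that MDTW is ordinary dynamic time warping with a Euclidean local cost over multi-dimensional signal-strength vectors, and that the WLS weights are inverse squared MDTW distances.

```python
import math

def mdtw(a, b):
    """DTW distance between two sequences of equal-length feature vectors."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # multi-dimensional local cost
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def wls_fuse(candidates):
    """Fuse (position, distance) fingerprint matches with weights 1/d^2."""
    ws = [(1.0 / d ** 2, (x, y)) for (x, y), d in candidates]
    tot = sum(w for w, _ in ws)
    return (sum(w * p[0] for w, p in ws) / tot,
            sum(w * p[1] for w, p in ws) / tot)

d0 = mdtw([(1.0, 2.0), (2.0, 3.0)], [(1.0, 2.0), (2.0, 3.0)])  # identical sequences
pos = wls_fuse([((0.0, 0.0), 1.0), ((2.0, 0.0), 2.0)])         # pulled toward the closer match
```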
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dam Thuy Trang; Tong, Quang Cong; Ledoux-Rak, Isabelle
In this work, the local thermal effect induced by a continuous-wave laser has been investigated and exploited to optimize the low one-photon absorption (LOPA) direct laser writing (DLW) technique for fabrication of polymer-based microstructures. It was demonstrated that the temperature of the excited SU8 photoresist at the focusing area increases to above 100 °C due to the high excitation intensity and becomes stable at that temperature thanks to the use of a continuous-wave laser at 532 nm wavelength. This optically induced thermal effect immediately completes the crosslinking process at the photopolymerized region, allowing desired structures to be obtained without the conventional post-exposure bake (PEB) step, which is usually performed after the exposure. Theoretical calculation of the temperature distribution induced by local optical excitation using the finite element method confirmed the experimental results. The LOPA-based DLW technique combined with the optically induced thermal effect (local PEB) shows great advantages over the traditional PEB, such as simplicity, short fabrication time, and high resolution. In particular, it overcomes the accumulation effect inherent in optical lithography by one-photon absorption, resulting in small and uniform structures with very short lattice constants.
Ji, Chen; Fan, Fan; Lou, Xuelin
2017-08-08
Phosphatidylinositol 4,5-bisphosphate (PI(4,5)P2) signaling is transient and spatially confined in live cells. How this pattern of signaling regulates transmitter release and hormone secretion has not been addressed. We devised an optogenetic approach to control PI(4,5)P2 levels in time and space in insulin-secreting cells. Combining this approach with total internal reflection fluorescence microscopy, we examined individual vesicle-trafficking steps. Unlike long-term PI(4,5)P2 perturbations, rapid and cell-wide PI(4,5)P2 reduction in the plasma membrane (PM) strongly inhibits secretion and intracellular Ca2+ concentration ([Ca2+]i) responses, but not syntaxin1a clustering. Interestingly, local PI(4,5)P2 reduction selectively at vesicle docking sites causes remarkable vesicle undocking from the PM without affecting [Ca2+]i. These results highlight a key role of local PI(4,5)P2 in vesicle tethering and docking, coordinated with its role in priming and fusion. Thus, different spatiotemporal PI(4,5)P2 signaling regulates distinct steps of vesicle trafficking, and vesicle docking may be a key target of local PI(4,5)P2 signaling in vivo. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Xia, Li C; Steele, Joshua A; Cram, Jacob A; Cardon, Zoe G; Simmons, Sheri L; Vallino, Joseph J; Fuhrman, Jed A; Sun, Fengzhu
2011-01-01
The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique in an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond those of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa.
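The core LS score can be sketched as a dynamic program over z-normalized series: two running sums track the best positively and best negatively associated local alignments within a maximum delay, and the score is the best cell normalized by series length. This is a simplified single-replicate sketch of the published LSA recursion, not the eLSA package code.

```python
import statistics

def normalize(s):
    """Z-normalize with the population standard deviation (so sum(z^2) = n)."""
    mu, sd = statistics.mean(s), statistics.pstdev(s)
    return [(v - mu) / sd for v in s]

def lsa_score(x, y, max_delay=3):
    """Local similarity score in [0, 1] for two equal-length series."""
    x, y = normalize(x), normalize(y)
    n = len(x)
    P = [[0.0] * (n + 1) for _ in range(n + 1)]   # positive association
    N = [[0.0] * (n + 1) for _ in range(n + 1)]   # negative association
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if abs(i - j) > max_delay:            # restrict the time delay
                continue
            P[i][j] = max(0.0, P[i - 1][j - 1] + x[i - 1] * y[j - 1])
            N[i][j] = max(0.0, N[i - 1][j - 1] - x[i - 1] * y[j - 1])
            best = max(best, P[i][j], N[i][j])
    return best / n

x = [1, 2, 3, 4, 2, 1, 3, 5]
pos = lsa_score(x, x)            # a series against itself: perfect local similarity
```

The replicate extension in eLSA additionally summarizes the LS score's variability across replicates; the recursion itself is unchanged.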
Real-time automatic registration in optical surgical navigation
NASA Astrophysics Data System (ADS)
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming
2016-05-01
An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
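The point-cloud-to-fiducial alignment at the heart of such a registration is, in essence, a rigid registration problem. A standard least-squares solution is the Kabsch/SVD method, sketched below with synthetic marker coordinates; the paper's exact algorithm may differ, and all numbers here are illustrative.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ≈ q."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: fiducial positions in image space vs. tracked (patient) space.
markers = np.array([[0, 0, 0], [100, 0, 0], [0, 80, 0], [0, 0, 60], [50, 50, 10.0]])
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
tracked = markers @ R_true.T + t_true
R, t = rigid_register(markers, tracked)
```

With four or more non-coplanar fiducials the transform is recovered exactly (up to numerical precision), which is why such a solve can run in a fraction of a second per registration.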
NASA Astrophysics Data System (ADS)
Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.
2013-05-01
Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require only a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
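The poll-step machinery can be illustrated in a generic form. This is a plain coordinate-direction pattern search on a toy convex objective; the paper's method additionally uses a search step driven by beam's-eye-view dose scores and operates on beam angles, none of which is modeled here.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Derivative-free minimization: poll +/- step along each coordinate,
    accept improvements, and halve the mesh size after an unsuccessful poll."""
    x, fx = list(x0), f(x0)
    n = len(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(n):                 # poll step over directions +/- e_i
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                    # refine the mesh
            if step < tol:
                break
    return x, fx

# Toy objective with minimum at (1, -2).
best_x, best_f = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                                [0.0, 0.0])
```

The mesh-refinement rule is what gives pattern search its convergence guarantee to a stationary point without any gradient information.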
Warren, Barbour S; Maley, Mary; Sugarwala, Laura J; Wells, Martin T; Devine, Carol M
2010-01-01
Small Steps Are Easier Together (SmStep) was a locally-instituted, ecologically based intervention to increase walking by women. Participants were recruited from 10 worksites in rural New York State in collaboration with worksite leaders and Cooperative Extension educators. Worksite leaders were oriented and chose site specific strategies. Participants used pedometers and personalized daily and weekly step goals. Participants reported steps on web logs and received weekly e-mail reports over 10 weeks in the spring of 2008. Of 188 enrollees, 114 (61%) reported steps. Weekly goals were met by 53% of reporters. Intention to treat analysis revealed a mean increase of 1503 daily steps. Movement to a higher step zone over their baseline zone was found for: 52% of the sedentary (n=80); 29% of the low active (n=65); 13% of the somewhat active (n=28); and 18% of the active participants (n=10). This placed 36% of enrollees at the somewhat active or higher zones (23% at baseline, p<0.005). Workers increased walking steps through a goal-based intervention in rural worksites. The SmStep intervention provides a model for a group-based, locally determined, ecological strategy to increase worksite walking supported by local community educators and remote messaging using email and a web site. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Study protocol: Community Links to Establish Alcohol Recovery (CLEAR) for women leaving jail.
Johnson, Jennifer E; Schonbrun, Yael Chatav; Anderson, Bradley; Kurth, Megan; Timko, Christine; Stein, Michael
2017-04-01
This article describes the protocol for a randomized effectiveness trial of a method to link alcohol use disordered women who are in pretrial jail detention with post-release 12-step mutual help groups. Jails serve 15 times more people per year than do prisons and have very short stays, posing few opportunities for treatment or treatment planning. Alcohol use is associated with poor post-jail psychosocial and health outcomes including sexually transmitted diseases and HIV, especially for women. At least weekly 12-step self-help group attendance in the months after release from jail has been associated with improvements in alcohol use and alcohol-related consequences. Linkage strategies improve 12-step attendance and alcohol outcomes among outpatients, but have not previously been tested in criminal justice populations. In the intervention condition, a 12-step volunteer meets once individually with an incarcerated woman while she is in jail and arranges to be in contact after release to accompany her to 12-step meetings. The control condition provides schedules for local 12-step meetings. Outcomes include percent days abstinent from alcohol (primary), 12-step meeting involvement, and fewer unprotected sexual occasions (secondary) after release from jail. We hypothesize that (1) 12-step involvement will mediate the intervention's effect on alcohol use, and (2) percent days abstinent will mediate the intervention's effect on STI/HIV risk-taking outcomes. Research methods accommodate logistical and philosophical hurdles including rapid turnover of commitments and unpredictable release times at the jail, possible post-randomization ineligibility due to sentencing, 12-step principles such as Nonaffiliation, and use of volunteers as interventionists. Copyright © 2017 Elsevier Inc. All rights reserved.
Castro, André; Nascimento, Tiago P.
2017-01-01
Natural landmarks are the main features in the next step of the research in localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solve this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks that are present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the Sl0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is done as a real-time algorithm that extracts the points referring to the filtered edges and determines to which structure they belong in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to the doors and stairs are identified. The approach proposed here has been validated with real robot experiments. The performed tests verify the efficacy of our proposed approach. PMID:28786925
KGCS and ECS Local HMI Display Control System Engineer
NASA Technical Reports Server (NTRS)
Curtis, Bryan
2017-01-01
My time here at KSC has involved creating and updating HMI displays to support Pad 39B and the Mobile Launcher. I also had the opportunity to be involved with testing PLC hardware for electromagnetic interference. This report explains in more detail the steps involved in successfully completing these responsibilities, which I have been fortunate to take part in.
OZONE MONITORING, MAPPING, AND PUBLIC OUTREACH ...
The U.S. EPA has developed a handbook to help state and local government officials implement ozone monitoring, mapping, and outreach programs. The handbook, called Ozone Monitoring, Mapping, and Public Outreach: Delivering Real-Time Ozone Information to Your Community, provides step-by-step instructions on how to: design, site, operate, and maintain an ozone monitoring network; install, configure, and operate the Automatic Data Transfer System; use MapGen software to create still-frame and animated ozone maps; and develop an outreach plan to communicate information about real-time ozone levels and their health effects to the public. This handbook was developed by EPA's EMPACT program. The program takes advantage of new technologies that make it possible to provide environmental information to the public in near real time. EMPACT is working with the 86 largest metropolitan areas of the country to help communities in these areas collect, manage, and distribute time-relevant environmental information, and provide their residents with easy-to-understand information they can use in making informed, day-to-day decisions.
Epidemic outbreaks in growing scale-free networks with local structure
NASA Astrophysics Data System (ADS)
Ni, Shunjiang; Weng, Wenguo; Shen, Shifei; Fan, Weicheng
2008-09-01
The class of generative models has already attracted considerable interest from researchers in recent years and has much expanded the original ideas described in the BA model. Most of these models assume that only one node per time step joins the network. In this paper, we grow the network by adding n interconnected nodes as a local structure into the network at each time step, with each new node emanating m new edges linking it to the preexisting network by preferential attachment. This successfully generates key features observed in social networks. These include a power-law degree distribution pk ∼ k^(-γ), whose exponent is controlled by μ = (n-1)/m, a tuning parameter defined as the modularity strength of the network, nontrivial clustering, assortative mixing, and modular structure. Moreover, all these features depend in a similar way on the parameter μ. We then study susceptible-infected epidemics on this network with identical infectivity, and find that the initial epidemic behavior is governed by both the infection scheme and the network structure, especially the modularity strength. The modularity of the network makes the spreading velocity much lower than that of the BA model. On the other hand, increasing the modularity strength will accelerate the propagation velocity.
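The growth rule can be sketched directly. This is a minimal reading of the model: each time step adds an n-node clique, and each new node draws m distinct preferential-attachment edges to the pre-existing network; the seed graph and the rejection-sampling details are our assumptions.

```python
import random

def grow(steps, n=3, m=2, seed=1):
    """Grow a network by adding an n-clique per time step, each new node
    also attaching m edges to older nodes with probability proportional
    to degree (sampled from a bag with one entry per incident edge)."""
    rng = random.Random(seed)
    edges = [(0, 1), (0, 2), (1, 2)]        # seed: a small clique
    bag = [0, 0, 1, 1, 2, 2]                # node id repeated once per incident edge
    num_nodes = 3
    for _ in range(steps):
        new = list(range(num_nodes, num_nodes + n))
        num_nodes += n
        for i, u in enumerate(new):         # local structure: interconnect the n nodes
            for v in new[i + 1:]:
                edges.append((u, v))
                bag += [u, v]
        for u in new:                       # m preferential-attachment edges each
            targets = set()
            while len(targets) < m:
                v = rng.choice(bag)
                if v < new[0]:              # only link to the pre-existing network
                    targets.add(v)
            for v in targets:
                edges.append((u, v))
                bag += [u, v]
    return num_nodes, edges

nodes, edges = grow(steps=5)
```

Each step adds exactly C(n, 2) + n·m edges, so the edge count is deterministic even though the attachment targets are random.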
Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B
2011-09-01
Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, as well as conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner.
We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
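Step (1) can be sketched with NumPy: build the spatio-temporal structure tensor from image gradients and keep pixels whose smallest eigenvalue is large. For brevity this sketch sums the tensor over the whole clip rather than a local window, and the threshold is arbitrary; both are simplifications of the paper's detector, and the crowd-region and regression steps are omitted.

```python
import numpy as np

def st_interest_points(video, thresh=1e-3):
    """video: (T, H, W) array. Returns (row, col) pixels whose summed
    spatio-temporal gradient matrix has a large smallest eigenvalue,
    i.e. pixels where intensity varies in x, y, and t."""
    gt, gy, gx = np.gradient(video.astype(float))        # derivatives along t, y, x
    g = np.stack([gx, gy, gt], axis=-1)                  # (T, H, W, 3)
    M = (g[..., :, None] * g[..., None, :]).sum(axis=0)  # (H, W, 3, 3) tensor
    lam_min = np.linalg.eigvalsh(M)[..., 0]              # ascending eigenvalues
    return np.argwhere(lam_min > thresh)

# A small bright square drifting to the right produces interest points;
# a static, uniform clip produces none.
clip = np.zeros((5, 10, 10))
for t in range(5):
    clip[t, 2:5, t:t + 3] = 1.0
pts = st_interest_points(clip)
none = st_interest_points(np.zeros((5, 10, 10)))
```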
A discrete-time localization method for capsule endoscopy based on on-board magnetic sensing
NASA Astrophysics Data System (ADS)
Salerno, Marco; Ciuti, Gastone; Lucarini, Gioia; Rizzo, Rocco; Valdastri, Pietro; Menciassi, Arianna; Landi, Alberto; Dario, Paolo
2012-01-01
Recent achievements in active capsule endoscopy have allowed controlled inspection of the bowel by magnetic guidance. Capsule localization represents an important enabling technology for such platforms. In this paper, the authors present a localization method, applied as a first step in time-discrete capsule position detection, that is useful for establishing a magnetic link at the beginning of an endoscopic procedure or for re-linking the capsule in the case of loss due to locomotion. The novelty of this approach consists in using magnetic sensors on board the capsule whose output is combined with pre-calculated magnetic field analytical model solutions. A magnetic field triangulation algorithm is used for obtaining the position of the capsule inside the gastrointestinal tract. Experimental validation has demonstrated that the proposed procedure is stable, accurate and has a wide localization range in a volume of about 18 × 10³ cm³. Position errors of 14 mm along the X direction, 11 mm along the Y direction and 19 mm along the Z direction were obtained in less than 27 s of elaboration time. The proposed approach, being compatible with magnetic fields used for locomotion, can be easily extended to other platforms for active capsule endoscopy.
Absolute phase estimation: adaptive local denoising and global unwrapping.
Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen
2008-10-10
The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2 pi noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2 pi phase obtained in the first step. The adaptive local modulo-2 pi phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process.16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
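For intuition, the unwrapping half of the problem in one dimension reduces to Itoh's classical method: accumulate the phase increments after re-wrapping each into (-π, π]. This is a noise-free 1D illustration only; PUMA itself is a discontinuity-preserving energy-minimization algorithm for 2D phase, which this sketch does not reproduce.

```python
import math

def unwrap_1d(wrapped):
    """Itoh's method: integrate increments re-wrapped into (-pi, pi]."""
    out = [wrapped[0]]
    for prev, cur in zip(wrapped, wrapped[1:]):
        d = (cur - prev + math.pi) % (2 * math.pi) - math.pi
        out.append(out[-1] + d)
    return out

# A ramp climbing well past 2*pi, observed only modulo 2*pi, is recovered
# exactly as long as successive true increments stay below pi in magnitude.
true = [0.1 * i for i in range(100)]
wrapped = [(p + math.pi) % (2 * math.pi) - math.pi for p in true]
recovered = unwrap_1d(wrapped)
```

The sub-π increment condition is exactly what noise violates, which is why the paper denoises the wrapped phase before unwrapping.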
Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard
2017-08-01
This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.
Dynamic phase transition in the prisoner's dilemma on a lattice with stochastic modifications
NASA Astrophysics Data System (ADS)
Saif, M. Ali; Gade, Prashant M.
2010-03-01
We present a detailed study of the prisoner's dilemma game with stochastic modifications on a two-dimensional lattice, in the presence of evolutionary dynamics. By the very nature of the rules, the cooperators have incentives to cheat and fear being cheated. They may cheat even when this is not dictated by the evolutionary dynamics. We consider two variants here. In each case, the agents mimic the action (cooperation or defection) in the previous time step of the most successful agent in the neighborhood. But over and above this, in the first variant a fraction p of the cooperators spontaneously change their strategy to pure defection at every time step. In the second variant, there are no pure cooperators. All cooperators keep defecting with probability p at every time step. In both cases, the system switches from a coexistence state to an all-defector state for higher values of p. We show that the transition between these states unambiguously belongs to the directed percolation universality class in 2 + 1 dimensions. We also study the local persistence. The persistence exponents obtained are higher than those obtained in previous studies, underlining their dependence on the details of the dynamics.
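The lattice dynamics described above (imitate the best neighbour, then defect stochastically) can be sketched in a few lines; the payoff values and the parameter choices below are illustrative, not those of the paper:

```python
import numpy as np

# Prisoner's dilemma payoffs (illustrative values with the usual T > R > P > S):
# R = mutual cooperation, S = sucker's payoff, T = temptation, P = punishment.
R, S, T, P = 1.0, 0.0, 1.5, 0.1

def payoffs(grid):
    """Total payoff of each site against its 4 nearest neighbours (1 = cooperate)."""
    total = np.zeros(grid.shape)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr = np.roll(grid, shift, axis=(0, 1))
        total += np.where(grid == 1,
                          np.where(nbr == 1, R, S),
                          np.where(nbr == 1, T, P))
    return total

def step(grid, p, rng):
    """Imitate the best-scoring agent in the neighbourhood (including self),
    then let cooperators spontaneously defect with probability p (first variant)."""
    score = payoffs(grid)
    best_strat, best_score = grid.copy(), score.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr_score = np.roll(score, shift, axis=(0, 1))
        nbr_strat = np.roll(grid, shift, axis=(0, 1))
        better = nbr_score > best_score
        best_score = np.where(better, nbr_score, best_score)
        best_strat = np.where(better, nbr_strat, best_strat)
    flip = (best_strat == 1) & (rng.random(grid.shape) < p)  # stochastic defection
    return np.where(flip, 0, best_strat)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))
for _ in range(100):
    grid = step(grid, p=0.05, rng=rng)
coop_fraction = grid.mean()   # order parameter of the coexistence/absorbing transition
```

Sweeping p and measuring `coop_fraction` over long runs would locate the transition to the all-defector absorbing state.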
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.
Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization
Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098
Ryu, Young Hee; Kenny, Andrew; Gim, Youme; Snee, Mark; Macdonald, Paul M
2017-09-15
Localization of mRNAs can involve multiple steps, each with its own cis -acting localization signals and transport factors. How is the transition between different steps orchestrated? We show that the initial step in localization of Drosophila oskar mRNA - transport from nurse cells to the oocyte - relies on multiple cis -acting signals. Some of these are binding sites for the translational control factor Bruno, suggesting that Bruno plays an additional role in mRNA transport. Although transport of oskar mRNA is essential and robust, the localization activity of individual transport signals is weak. Notably, increasing the strength of individual transport signals, or adding a strong transport signal, disrupts the later stages of oskar mRNA localization. We propose that the oskar transport signals are weak by necessity; their weakness facilitates transfer of the oskar mRNA from the oocyte transport machinery to the machinery for posterior localization. © 2017. Published by The Company of Biologists Ltd.
Controlling the Local Electronic Properties of Si(553)-Au through Hydrogen Doping
NASA Astrophysics Data System (ADS)
Hogan, C.; Speiser, E.; Chandola, S.; Suchkova, S.; Aulbach, J.; Schäfer, J.; Meyer, S.; Claessen, R.; Esser, N.
2018-04-01
We propose a quantitative and reversible method for tuning the charge localization of Au-stabilized stepped Si surfaces by site-specific hydrogenation. This is demonstrated for Si(553)-Au as a model system by combining density functional theory simulations and reflectance anisotropy spectroscopy experiments. We find that controlled H passivation is a two-step process: step-edge adsorption drives excess charge into the conducting metal chain "reservoir" and renders it insulating, while surplus H recovers metallic behavior. Our approach illustrates a route towards microscopic manipulation of the local surface charge distribution and establishes a reversible switch of site-specific chemical reactivity and magnetic properties on vicinal surfaces.
NASA Technical Reports Server (NTRS)
Atkins, Harold
1991-01-01
A multiple block multigrid method for the solution of the three dimensional Euler and Navier-Stokes equations is presented. The basic flow solver is a cell vertex method which employs central difference spatial approximations and Runge-Kutta time stepping. The use of local time stepping, implicit residual smoothing, multigrid techniques and variable coefficient numerical dissipation results in an efficient and robust scheme. The multiblock strategy places the block loop within the Runge-Kutta loop such that accuracy and convergence are not affected by block boundaries. This has been verified by comparing the results of one and two block calculations in which the two block grid is generated by splitting the one block grid. Results are presented for both Euler and Navier-Stokes computations of wing/fuselage combinations.
A cell-vertex multigrid method for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Radespiel, R.
1989-01-01
A cell-vertex scheme for the Navier-Stokes equations, which is based on central difference approximations and Runge-Kutta time stepping, is described. Using local time stepping, implicit residual smoothing, a multigrid method, and carefully controlled artificial dissipative terms, very good convergence rates are obtained for a wide range of two- and three-dimensional flows over airfoils and wings. The accuracy of the code is examined by grid refinement studies and comparison with experimental data. For an accurate prediction of turbulent flows with strong separations, a modified version of the nonequilibrium turbulence model of Johnson and King is introduced, which is well suited for an implementation into three-dimensional Navier-Stokes codes. It is shown that the solutions for three-dimensional flows with strong separations can be dramatically improved, when a nonequilibrium model of turbulence is used.
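The local time stepping used by these solvers amounts to letting each cell advance with its own CFL-limited step to accelerate convergence to steady state. A minimal sketch (illustrative 1-D velocity field and CFL number, not the solvers' actual implementation):

```python
import numpy as np

def local_time_steps(u, dx, cfl=0.8):
    """Per-cell stable time step dt_i = cfl * dx / (|u_i| + eps).
    For steady-state convergence acceleration each cell may advance with its
    own dt; a time-accurate calculation would instead take min(dt) everywhere."""
    return cfl * dx / (np.abs(u) + 1e-12)

# Illustrative 1-D velocity field on a uniform grid.
dx = 0.01
u = np.linspace(0.1, 2.0, 50)
dt_local = local_time_steps(u, dx)   # large steps where the flow is slow
dt_global = dt_local.min()           # the CFL-limited global step
```

The speed-up of local time stepping comes precisely from the gap between `dt_local` in slow-flow cells and the globally limiting `dt_global`.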
Multibeam 3D Underwater SLAM with Probabilistic Registration.
Palomer, Albert; Ridao, Pere; Ribas, David
2016-04-20
This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) using a multibeam echosounder to produce high consistency underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential of falling into local minima during the registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real world datasets: First, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan and tilt unit.
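The coarse point-to-point registration step can be sketched with a closed-form SVD (Kabsch) rigid update and brute-force nearest-neighbour association; this is a simplified stand-in that omits the paper's probabilistic uncertainty weighting and its O(n) association heuristic:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B
    (Kabsch/SVD). A and B are (N, 3) arrays of corresponding points."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp_point_to_point(src, dst, iters=20):
    """Coarse point-to-point ICP: nearest-neighbour association followed by a
    closed-form rigid update. Association here is brute-force O(n^2); the
    paper's heuristic reduces this step to O(n)."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]   # each source point's nearest destination
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

In the full pipeline this coarse alignment would be refined by the point-to-plane step and embedded in the pose-graph SLAM back end.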
Bao, Jianqiang; Wu, Qiuxia; Song, Rui; Jie, Zhang; Zheng, Huili; Xu, Chen; Yan, Wei
2010-01-01
We identified Ran-binding protein 17 (RANBP17) as one of the interacting partners of sperm maturation 1 (SPEM1) using yeast 2-hybrid screening and immunoprecipitation assays. Expression profiling analyses suggested that RANBP17 was preferentially expressed in the testis. Immunofluorescent confocal microscopy revealed a dynamic localization pattern of RANBP17 during spermatogenesis. In primary spermatocytes RANBP17 was mainly localized to the XY body. In the subsequent spermiogenesis, RANBP17 was first observed in the nuclei of round spermatids (steps 1–7) and then confined to the manchette of elongating spermatids (steps 8–14) together with its interacting partner SPEM1. In the Spem1-null testes, levels of RANBP17 were significantly elevated. As a member of a large protein family involved in the nucleocytoplasmic transport, RANBP17 may have a role in sex chromosome inactivation during the meiotic phase of spermatogenesis, and also in the intramanchette transport during spermiogenesis. Interactions between RANBP17 and SPEM1, for the first time, point to a potential function of SPEM1 in the RANBP17-mediated nucleocytoplasmic transport. PMID:21184802
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
NASA Astrophysics Data System (ADS)
Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar
2018-04-01
Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach based upon the Hilbert envelope and a zero frequency resonator for the detection of the weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal. This also helps in better localization of these impulses on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for the exact localization of the periodic impulses. The spectrum of the resonator output gives a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm. The proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish the effectiveness of the proposed method.
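The first step, envelope demodulation, can be sketched as follows. An amplitude-modulated tone stands in for the defect-excited vibration, the fault frequency of 37 Hz is an illustrative assumption, and the zero frequency resonator stage is omitted:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Normalised Hilbert envelope of x and the amplitude spectrum of that
    envelope. A localised bearing defect amplitude-modulates the machine's
    vibration, so the envelope spectrum peaks at the fault frequency."""
    env = np.abs(hilbert(x))                  # analytic-signal magnitude
    env = (env - env.mean()) / env.max()      # normalise and drop the DC term
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

# Synthetic stand-in: an 800 Hz "resonance" amplitude-modulated at a
# hypothetical 37 Hz fault frequency, buried in additive noise.
fs, T, fault = 4096, 2.0, 37.0
t = np.arange(int(fs * T)) / fs
x = (1.0 + 0.8 * np.cos(2 * np.pi * fault * t)) * np.sin(2 * np.pi * 800.0 * t)
x += 0.05 * np.random.default_rng(1).normal(size=t.size)
freqs, spec = envelope_spectrum(x, fs)
peak_freq = freqs[spec.argmax()]   # should land near the fault frequency
```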
Szymański, Jedrzej; Mayer, Christine; Hoffmann-Rohrer, Urs; Kalla, Claudia; Grummt, Ingrid; Weiss, Matthias
2009-07-01
TIF-IA is a basal transcription factor of RNA polymerase I (Pol I) that is a major target of the JNK2 signaling pathway in response to ribotoxic stress. Using advanced fluorescence microscopy and kinetic modeling we elucidated the subcellular localization of TIF-IA and its exchange dynamics between the nucleolus, nucleoplasm and cytoplasm upon ribotoxic stress. In steady state, the majority of (GFP-tagged) TIF-IA was in the cytoplasm and the nucleus, with a minor portion (7%) localizing to the nucleoli. We observed a rapid shuttling of GFP-TIF-IA between the different cellular compartments with a mean residence time of approximately 130 s in the nucleus and only approximately 30 s in the nucleoli. The import rate from the cytoplasm to the nucleus was approximately 3-fold larger than the export rate, suggesting an importin/exportin-mediated transport rather than a passive diffusion. Upon ribotoxic stress, GFP-TIF-IA was released from the nucleoli with a half-time of approximately 24 min. Oxidative stress and inhibition of protein synthesis led to a relocation of GFP-TIF-IA with slower kinetics while osmotic stress had no effect. The observed relocation was much slower than the nucleo-cytoplasmic and nucleus-nucleolus exchange rates of GFP-TIF-IA, indicating a time-limiting step upstream of the JNK2 pathway. In support of this, time-course experiments on the activity of JNK2 revealed the activation of the JNK kinase as the rate-limiting step.
Automated segmentation of linear time-frequency representations of marine-mammal sounds.
Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I
2013-09-01
Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, which include recognition, localization, and density estimation. This study introduces a low parameterized automated spectrogram segmentation method that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a Chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data of large sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. This method is shown to increase the probability of detection for the same probability of false alarms.
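The first (background-fitting) step can be sketched as follows, assuming the noise-only spectrogram power in each bin follows an exponential law (a scaled chi-squared with 2 degrees of freedom); the function and parameter names are illustrative:

```python
import numpy as np
from scipy.signal import spectrogram

def np_threshold_mask(x, fs, pfa=1e-3, nperseg=256):
    """Sketch of the first segmentation step: model the noise-only
    spectrogram power per bin as exponential (chi-squared, 2 dof) and keep
    bins exceeding the Neyman-Pearson threshold for false-alarm rate pfa."""
    f, t, S = spectrogram(x, fs=fs, nperseg=nperseg)
    # Robust estimate of the noise mean: the median of an exponential
    # distribution with mean mu is mu * ln 2.
    noise_mean = np.median(S) / np.log(2.0)
    thresh = -noise_mean * np.log(pfa)   # P(S > thresh | noise only) = pfa
    return f, t, S > thresh
```

The surviving time-frequency bins would then be gathered into regions of interest via the binomial false-detection model (step two of the paper).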
Brandon M. Collins; Scott L. Stephens; Gary B. Roller; John Battles
2011-01-01
We evaluate an actual landscape fuel treatment project that was designed by local U. S. Forest Service managers in the northern Sierra Nevada. We model the effects of this project at reducing landscape-level fire behavior at multiple time steps, up to nearly 30 yr beyond treatment implementation. Additionally, we modeled planned treatments under multiple diameter-...
2010-03-01
LtSTA. Unlike local AE where the reaction site can be identified with a specific article, it was not possible to identify the article responsible...taken prior to the last centrifugation step to determine yield (cell concentration), bioburden and morphology. The final centrifugation run is...protein and phenol may be made one time with 1X phosphate diluent. Bioburden is tested after any adjustment is made and prior to sterile filtration
Levine, Judah
2016-01-01
A method is presented for synchronizing the time of a clock to a remote time standard when the channel connecting the two has significant delay variation that can be described only statistically. The method compares the Allan deviation of the channel fluctuations to the free-running stability of the local clock, and computes the optimum interval between requests based on one of three selectable requirements: (1) choosing the highest possible accuracy, (2) choosing the best tradeoff of cost vs. accuracy, or (3) minimizing the number of requests to realize a specific accuracy. Once the interval between requests is chosen, the final step is to steer the local clock based on the received data. A typical adjustment algorithm, which supports both the statistical considerations based on the Allan deviation comparison and the timely detection of errors is included as an example. PMID:26529759
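The core comparison, the Allan deviation of the channel fluctuations versus the free-running stability of the local clock, can be sketched as follows. The crossover rule shown is a simplified stand-in for the paper's three selectable requirements, and the data are illustrative:

```python
import numpy as np

def allan_dev(y, m):
    """Allan deviation of fractional-frequency samples y at averaging
    factor m (i.e. averaging time m * tau0)."""
    avg = y[: (len(y) // m) * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(avg) ** 2))

def crossover_interval(y_channel, y_clock, tau0, ms):
    """Simplified request-interval rule: poll at the smallest averaging time
    where the free-running clock's instability exceeds the channel's, i.e.
    where waiting longer would cost more than the channel noise does."""
    for m in ms:
        if allan_dev(y_clock, m) > allan_dev(y_channel, m):
            return m * tau0
    return ms[-1] * tau0

# Illustrative data: a white-noise channel (improves with averaging) versus
# a random-walk clock (degrades with averaging time).
rng = np.random.default_rng(7)
n = 1 << 17
y_channel = rng.normal(size=n)
y_clock = 0.01 * np.cumsum(rng.normal(size=n))
interval = crossover_interval(y_channel, y_clock, tau0=1.0,
                              ms=[2 ** k for k in range(11)])
```

Once the polling interval is fixed, a steering algorithm of the kind the paper describes would adjust the local clock from the received timestamps.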
Capturing Pressure Oscillations in Numerical Simulations of Internal Combustion Engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gubba, Sreenivasa Rao; Jupudi, Ravichandra S.; Pasunurthi, Shyam Sundar
In an earlier publication, the authors compared numerical predictions of the mean cylinder pressure of diesel and dual-fuel combustion, to that of measured pressure data from a medium-speed, large-bore engine. In these earlier comparisons, measured data from a flush-mounted in-cylinder pressure transducer showed notable and repeatable pressure oscillations which were not evident in the mean cylinder pressure predictions from computational fluid dynamics (CFD). In this paper, the authors present a methodology for predicting and reporting the local cylinder pressure consistent with that of a measurement location. Such predictions for large-bore, medium-speed engine operation demonstrate pressure oscillations in accordance with those measured. The temporal occurrences of notable pressure oscillations were during the start of combustion and around the time of maximum cylinder pressure. With appropriate resolutions in time steps and mesh sizes, the local cell static pressure predicted for the transducer location showed oscillations in both diesel and dual-fuel combustion modes which agreed with those observed in the experimental data. Fast Fourier transform (FFT) analysis on both experimental and calculated pressure traces revealed that the CFD predictions successfully captured both the amplitude and frequency range of the oscillations. Furthermore, resolving propagating pressure waves with the smaller time steps and grid sizes necessary to achieve these results required a significant increase in computer resources.
NASA Astrophysics Data System (ADS)
Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias
2018-04-01
A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This makes it possible to formulate the projections between the Lagrangian particle space and the Eulerian finite element space efficiently in terms of local (i.e. cellwise) ℓ2-projections. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which excellently approach the incompressibility constraint in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward facing step and for the flow around a cylinder.
Capturing Pressure Oscillations in Numerical Simulations of Internal Combustion Engines
Gubba, Sreenivasa Rao; Jupudi, Ravichandra S.; Pasunurthi, Shyam Sundar; ...
2018-04-09
In an earlier publication, the authors compared numerical predictions of the mean cylinder pressure of diesel and dual-fuel combustion, to that of measured pressure data from a medium-speed, large-bore engine. In these earlier comparisons, measured data from a flush-mounted in-cylinder pressure transducer showed notable and repeatable pressure oscillations which were not evident in the mean cylinder pressure predictions from computational fluid dynamics (CFD). In this paper, the authors present a methodology for predicting and reporting the local cylinder pressure consistent with that of a measurement location. Such predictions for large-bore, medium-speed engine operation demonstrate pressure oscillations in accordance with those measured. The temporal occurrences of notable pressure oscillations were during the start of combustion and around the time of maximum cylinder pressure. With appropriate resolutions in time steps and mesh sizes, the local cell static pressure predicted for the transducer location showed oscillations in both diesel and dual-fuel combustion modes which agreed with those observed in the experimental data. Fast Fourier transform (FFT) analysis on both experimental and calculated pressure traces revealed that the CFD predictions successfully captured both the amplitude and frequency range of the oscillations. Furthermore, resolving propagating pressure waves with the smaller time steps and grid sizes necessary to achieve these results required a significant increase in computer resources.
Using Curved Crystals to Study Terrace-Width Distributions.
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.
Recent experiments on curved crystals of noble and late transition metals (Ortega and Juurlink groups) have renewed interest in terrace width distributions (TWD) for vicinal surfaces. Thus, it is timely to discuss refinements of TWD analysis that are absent from the standard reviews. Rather than by Gaussians, TWDs are better described by the generalized Wigner surmise, with a power-law rise and a Gaussian decay, thereby including effects evident for weak step repulsion: skewness and peak shifts down from the mean spacing. Curved crystals allow analysis of several mean spacings with the same substrate, so that one can check the scaling with the mean width. This is important since such scaling confirms well-established theory. Failure to scale also can provide significant insights. Complicating factors can include step touching (local double-height steps), oscillatory step interactions mediated by metallic (but not topological) surface states, short-range corrections to the inverse-square step repulsion, and accounting for the offset between adjacent layers of almost all surfaces. We discuss how to deal with these issues. For in-plane misoriented steps there are formulas to describe the stiffness but not yet the strength of the elastic interstep repulsion. Supported in part by NSF-CHE 13-05892.
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.
Robust Finger Vein ROI Localization Based on Flexible Segmentation
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-01-01
Finger veins have proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.
Zhong, Shan; Liu, Quan; Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving, from a defect image database, a selection of images whose image content is similar to the content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. The process step that is the most probable source of the defect, according to the derived conditional probability distribution, is then identified. A method for process-step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
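The probability step can be sketched as follows, under the assumption that each retrieved image carries a label naming the process step where its defect originated: the conditional distribution is estimated from similarity-weighted label counts, and the arg-max gives the most probable errant step. Function names are hypothetical.

```python
from collections import Counter

def errant_step_posterior(retrieved_steps, similarities=None):
    """Estimate P(process step | defect) from the process-step labels of
    images retrieved as similar to the query defect, optionally weighting
    each retrieved image by its similarity score."""
    if similarities is None:
        similarities = [1.0] * len(retrieved_steps)
    weights = Counter()
    for step, w in zip(retrieved_steps, similarities):
        weights[step] += w
    total = sum(weights.values())
    return {step: w / total for step, w in weights.items()}

def most_probable_step(retrieved_steps, similarities=None):
    """Identify the process step with the highest conditional probability."""
    post = errant_step_posterior(retrieved_steps, similarities)
    return max(post, key=post.get)
```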
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program, with the finite element method used for the structural modeling. The results clearly indicate significant computational savings with minimal loss in accuracy.
Solar Project Development Pathway & Resources
The Local Government Solar Project Portal's Solar Project Development Pathway and Resources page details the major steps along the project development pathway; each step includes resources and tools to assist you with it.
The CERCA School Report Card: Communities Creating Education Quality. Implementation Manual
ERIC Educational Resources Information Center
Guio, Ana Florez; Chesterfield, Ray; Siri, Carmen
2006-01-01
This manual provides a step-by-step methodology for promoting community participation in improving learning in local schools. The Civic Engagement for Education Reform in Central America (CERCA) School Report Card (SRC) approach empowers local school communities to gather information on the quality and conditions of teaching and learning in their…
A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams
NASA Astrophysics Data System (ADS)
Molnar, P.
2012-04-01
Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing, and self-sustaining structure. The temporary storage of sediment in steps, and the release of that sediment in avalanche-like pulses when steps collapse, leads to complex nonlinear threshold-driven dynamics in sediment transport, which has been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics of sediment input. The hypothesis is that this state, in which sediment-transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model; however, it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems.
The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focusing on the magnification of intermittency and variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates, and the number of grains stored in the system at any given time, are quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus of the discussion will be to demonstrate how, even in such a simple model, the processes of grain blocking and step collapse may impact the sediment transport rates to the point that certain changes in input are no longer visible, along the lines of the "shredding of signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.
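A minimal sketch of such a 1-d grain-blocking sandpile is shown below. The threshold rule, the geometric travel-distance distribution, and all parameter values are illustrative choices, not the model used in the study.

```python
import random

def simulate_channel(n_sites=50, threshold=4, n_steps=2000, p_travel=0.5, seed=1):
    """Toy 1-d sandpile of a step-pool channel: one grain is injected at the
    top each time step; any site holding more than `threshold` grains (a
    'step' collapsing) sends one grain downstream a random distance drawn
    from a geometric distribution.  Returns the per-step output flux."""
    rng = random.Random(seed)
    pile = [0] * n_sites
    output = []
    for _ in range(n_steps):
        pile[0] += 1                        # constant sediment input
        out = 0
        unstable = True
        while unstable:                     # relax until no site exceeds threshold
            unstable = False
            for i in range(n_sites):
                if pile[i] > threshold:     # local threshold rule: step collapse
                    pile[i] -= 1
                    jump = 1
                    while rng.random() < p_travel:   # geometric travel distance
                        jump += 1
                    j = i + jump
                    if j >= n_sites:
                        out += 1            # grain leaves the channel
                    else:
                        pile[j] += 1
                    unstable = True
        output.append(out)
    return output
```

Even with perfectly constant input, the output series produced by this toy model is intermittent and bursty, which is the qualitative point of the abstract.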
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem on a CONVEX computer was reduced from 181.6 seconds to 14.1 seconds with the optimized vector algorithm. The best computational time for the transport problem, with 17,000 degrees of freedom, was 23 seconds on the Cray Y-MP using an average of 3.63 processors.
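The core of the algorithm, repeated matrix-vector products that build a small tridiagonal matrix whose eigenvalues (Ritz values) approximate extremal eigenvalues of the original matrix, can be sketched as follows. This dense NumPy version is a stand-in for the optimized sparse kernels described in the abstract.

```python
import numpy as np

def lanczos(A, m, seed=0):
    """Plain Lanczos iteration for a symmetric matrix A: after m steps,
    the eigenvalues of the m-by-m tridiagonal matrix T approximate the
    extremal eigenvalues of A.  The dominant kernels are exactly those
    named in the abstract: matrix-vector multiplies and vector updates."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    q_prev = np.zeros(n)
    for j in range(m):
        w = A @ q                         # matrix-vector multiply (hot spot)
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)          # Ritz values, ascending
```

Note that this textbook version omits reorthogonalization, which production eigensolvers need for large step counts.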
Informed peg-in-hole insertion using optical sensors
NASA Astrophysics Data System (ADS)
Paulos, Eric; Canny, John F.
1993-08-01
Peg-in-hole insertion is not only a longstanding problem in robotics but also the most common automated mechanical assembly task. In this paper we present a high-precision, self-calibrating peg-in-hole insertion strategy using several very simple, inexpensive, and accurate optical sensors. The self-calibrating feature allows us to achieve successful dead-reckoning insertions with tolerances of 25 microns without any accurate initial position information for the robot, pegs, or holes. The program we implemented works for any cylindrical peg, and the sensing steps do not depend on the peg diameter, which the program does not know. The key to the strategy is the use of a fixed sensor to localize both a mobile sensor and the peg, while the mobile sensor localizes the hole. Our strategy is extremely fast, localizing pegs while they are en route to their insertion location without pausing. The result is that insertion times are dominated by the transport time between pick and place operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, Thi Nu; Ono, Shota; Ohno, Kaoru, E-mail: ohno@ynu.ac.jp
Using ab initio molecular dynamics simulations, we demonstrate the possibility of hydrogenating carbon monoxide step by step to produce methanol. First, a hydrogen atom reacts with the carbon monoxide molecule in the excited state, forming the formyl radical. Formaldehyde is formed after adding one more hydrogen atom to the system. Finally, the addition of two hydrogen atoms to formaldehyde produces a methanol molecule. This study is performed using the all-electron mixed basis approach based on time-dependent density functional theory, within the adiabatic local density approximation for the electronic ground-state configuration and the one-shot GW approximation for the electronic excited-state configuration.
Seismic Study of the Dynamics of the Solar Subsurface from SoHO Observations
NASA Technical Reports Server (NTRS)
Korzennik, Sylvain G.; Wagner, William J. (Technical Monitor)
2001-01-01
In collaboration with Dr. Baudin, we have developed and refined the new observational methodology for local helioseismology known as time-distance analysis. Global helioseismology studies the solar oscillations as a superposition of resonant modes, whose properties (mode frequencies) reflect the global structure of the Sun (sound speed stratification, rotation rate, etc.). In contrast, local helioseismology looks at the solar oscillations as wave packets whose propagation is affected by perturbations of the medium they sample. These local perturbations (sound speed or velocity flows) modify the propagation time, which in turn can be used as a diagnostic tool for a given region. From a data reduction perspective, the processing of solar dopplergrams that results in time-distance maps, i.e. propagation times as a function of distance between bounces at the surface, is radically different from the methodology used for global mode analysis. We have, in a first step, further developed the programs needed to carry out such analysis. We have then applied them to the NMI data set and explored the trade-off between various averaging and filtering approaches - steps required to improve the signal-to-noise ratio of correlation maps - and the resulting stability and precision of the fitted propagation times. While excessive averaging (whether over space, propagation distance, or time) reduces the diagnostic potential of the method, insufficient averaging leads to unstable fits, or uncertainties so large as to hide the information we seek. In a second phase, we have developed the analysis methodology required to infer local properties from perturbations in propagation time. Namely, we have developed time-distance inversion techniques, with an emphasis on inferences of velocity flows from time anomalies. Note also that during the period covered by this grant, all the investigators on this proposal (i.e., Drs. Baudin, Eff-Darwich, Korzennik, and Noyes) took part in the organization of the SOHO 6/GONG 99 Workshop: Structure and Dynamics of the Interior of the Sun and Sunlike Stars, held on June 1-4, 1999, at the Boston Park Plaza Hotel in Boston, Massachusetts, USA. It was very well attended, with more than 160 participants from 26 countries from all over the world. The proceedings were published in two volumes as ESA SP-418, with Sessions I-III in Volume 1 and Sessions IV-VI in Volume 2 (1,000 pages in total). The complete contents are also included in digital form on a CD-ROM included with Volume 1. This CD-ROM also contains additional multimedia material that complements some of the contributions.
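The basic time-distance measurement, a travel time extracted from the peak of the cross-correlation between oscillation signals observed at two surface points, can be sketched as below. This is an illustration of the principle, not the grant's reduction pipeline.

```python
import numpy as np

def travel_time(sig_a, sig_b, dt):
    """Estimate the propagation time between two oscillation signals of
    equal length by locating the peak of their cross-correlation; a
    positive result means sig_b lags sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # convert index to signed lag
    return lag * dt
```

Real time-distance analysis fits the correlation function rather than taking a raw arg-max, precisely because of the signal-to-noise issues discussed in the abstract.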
Kielar, Ania Z; El-Maraghi, Robert H; Schweitzer, Mark E
2010-08-01
In Canada, equal access to health care is the goal, but this is associated with wait times. Wait times should be fair rather than uniform, taking into account the urgency of the problem as well as the time an individual has already waited. In November 2004, the Ontario government began addressing this issue. One of the first steps was to institute benchmarks reflecting "acceptable" wait times for CT and MRI. A public Web site was developed indicating wait times at each Local Health Integration Network. Since the start of the Wait Time Information Program, there has been a sustained reduction in wait times for Ontarians requiring CT and MRI. The average wait time for a CT scan went from 81 days in September 2005 to 47 days in September 2009. For MRI, the wait time was reduced from 120 to 105 days. Increased scan volumes have been achieved by purchasing new CT and MRI scanners, expanding hours of operation, and improving patient throughput using strategies learned from the Lean initiative, based on Toyota's manufacturing philosophy for car production. Institution-specific changes in booking procedures have been implemented. Concurrently, government guidelines have been developed to ensure accountability for monies received. The Ontario Wait Time Information Program is an innovative first step in improving fair and equitable access to publicly funded imaging services. There have been reductions in wait times for both CT and MRI. As various new processes are implemented, further review of each step will be necessary to determine its efficacy. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Animal Construction as a Free Boundary Problem: Evidence of Fractal Scaling Laws
NASA Astrophysics Data System (ADS)
Nicolis, S. C.
2014-12-01
We suggest that the main features of animal construction can be understood as the sum of locally independent actions of non-interacting individuals subjected to the global constraints imposed by the nascent structure. We first formulate an analytically tractable macroscopic description of construction which predicts a 1/3 power law for how the length of the structure grows with time. We further show how the power law is modified when biases in the random walk performed by the constructors, as well as halting times between consecutive construction steps, are included.
Green Schools Energy Project: A Step-by-Step Manual.
ERIC Educational Resources Information Center
Quigley, Gwen
This publication contains a step-by-step guide for implementing an energy-saving project in local school districts: the installation of newer, more energy-efficient "T-8" fluorescent tube lights in place of "T-12" lights. Eleven steps are explained in detail: (1) find out what kind of lights the school district currently uses;…
NASA Astrophysics Data System (ADS)
Xue, Nan; Khodaparast, Sepideh; Zhu, Lailai; Nunes, Janine; Kim, Hyoungsoo; Stone, Howard
2017-11-01
Layered composite fluids are sometimes observed in confined systems of rather chaotic initial states, for example, layered lattes formed by pouring espresso into a glass of warm milk. In such configurations, pouring forces a lower density liquid (espresso) into a higher density ambient, which is similar to the fountain effects that characterize a wide range of flows driven by injecting a fluid into a second miscible phase. Although the initial state of the mixture is complex and chaotic, there are conditions where the mixture cools at room temperature and exhibits an organized layered pattern. Here we report controlled experiments injecting a fluid into a miscible phase and show that, above a critical injection velocity, layering naturally emerges over the time scale of minutes. We perform experimental and numerical analyses of the time-dependent flows to observe and understand the convective circulation in the layers. We identify critical conditions to produce the layering and relate the results quantitatively to the critical Rayleigh number in double-diffusive convection, which indicates the competition between the horizontal thermal gradient and the vertical density gradient generated by the fluid injection. Based on this understanding, we show how to employ this single-step process to produce layered structures in soft materials, where the local elastic properties as well as the local material concentration vary step-wise along the length of the material.
Idle waves in high-performance computing
NASA Astrophysics Data System (ADS)
Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre
2015-01-01
The vast majority of parallel scientific applications distribute computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications with local information exchange between pairs of processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
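The propagation mechanism can be illustrated with a toy model, assuming (as the abstract describes) that each process may start iteration t only after it and its neighbors have finished iteration t-1. A single long busy period then travels outward one rank per iteration, an idle wave. All names and parameters below are illustrative.

```python
def finish_times(busy):
    """busy[p][t] is the compute time of process p at iteration t.  A
    process can start iteration t only after it and both neighbours have
    finished iteration t-1 (local synchronization via remote data
    dependency).  Returns the matrix of finish times."""
    P, T = len(busy), len(busy[0])
    finish = [[0.0] * T for _ in range(P)]
    for t in range(T):
        for p in range(P):
            # Earliest start: all dependencies from iteration t-1 done.
            ready = 0.0 if t == 0 else max(
                finish[q][t - 1] for q in (p - 1, p, p + 1) if 0 <= q < P)
            finish[p][t] = ready + busy[p][t]
    return finish
```

With uniform busy time and one perturbed rank, the extra delay reaches rank p exactly at iteration p, i.e. the idle wave moves one rank per iteration, so its wall-clock phase velocity is inversely proportional to the busy time.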
Outlier detection for particle image velocimetry data using a locally estimated noise variance
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, ZhouPing
2017-03-01
This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry (PIV) data using a locally estimated noise variance. The method is an iterative procedure in which each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces outlier labels for the field. The technical contribution is that the spatially variable threshold motivation is embedded, for the first time, in the modified outlier detector with a locally estimated noise variance in an iterative framework. It turns out that a spatially variable threshold is preferable to a single spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection count and over-detection count. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.
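The flavor of a spatially variable threshold can be conveyed with a much simpler detector than the paper's: compare each vector with the median of its 3x3 neighborhood, and scale the rejection threshold by a locally estimated noise level (a median absolute deviation). This is an illustrative stand-in, not the authors' algorithm, and all parameters are assumptions.

```python
import numpy as np

def detect_outliers(u, k=3.0, eps=0.1):
    """Flag spurious vectors in a scalar PIV component field u by comparing
    each vector with its 3x3 neighbourhood median; the threshold varies in
    space, scaled by a local median-absolute-deviation noise estimate."""
    ny, nx = u.shape
    flags = np.zeros_like(u, dtype=bool)
    for i in range(ny):
        for j in range(nx):
            nb = np.array([u[a, b]
                           for a in range(max(0, i - 1), min(ny, i + 2))
                           for b in range(max(0, j - 1), min(nx, j + 2))
                           if (a, b) != (i, j)])
            med = np.median(nb)
            local_noise = np.median(np.abs(nb - med))   # local MAD
            # Spatially variable threshold: k * (local noise + floor eps)
            flags[i, j] = abs(u[i, j] - med) > k * (local_noise + eps)
    return flags
```

The real method additionally iterates detection against a smoothed reference field, so that clustered outliers do not contaminate the reference.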
Steps toward Gaining Knowledge of World Music Pedagogy
ERIC Educational Resources Information Center
Carlisle, Katie
2013-01-01
This article presents steps toward gaining knowledge of world music pedagogy for K-12 general music educators. The majority of the article details steps that invite engagement within everyday contexts with accessible resources within local and online communities. The steps demonstrate ways general music teachers can diversify and self-direct their…
40 CFR 35.2203 - Step 7 projects.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Step 7 projects. 35.2203 Section 35... STATE AND LOCAL ASSISTANCE Grants for Construction of Treatment Works § 35.2203 Step 7 projects. (a) Prior to initiating action to acquire real property, a Step 7 grantee shall submit for Regional...
Adaptive temporal refinement in injection molding
NASA Astrophysics Data System (ADS)
Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek
2018-05-01
Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs, or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to higher simulation precision while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used to verify our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to assess the efficiency of the filling simulation of complex 3D molds when applying adaptive temporal refinement.
Socioeconomic Indicators for Small Towns. Small Town Strategy.
ERIC Educational Resources Information Center
Oregon State Univ., Corvallis. Cooperative Extension Service.
Prepared to help small towns assess community population and economic trends, this publication provides a step-by-step guide for establishing an on-going local data collection system, which is based on four local indicators and will provide accurate, up-to-date estimates of population, family income, and gross sales within a town's trade area. The…
For the Common Good. A Guide for Developing Local Interagency Linkage Teams.
ERIC Educational Resources Information Center
Imel, Susan
Developed from the Ohio At-Risk Linkage Team experiences, this guide assists local communities in organizing and strengthening effective collaborative interagency linkage teams for at-risk youth and adults. The guide proposes a series of steps, poses a number of questions relating to each step, and provides information about additional resources.…
ERIC Educational Resources Information Center
Skelding, Mark; Kemple, Martin; Kiefer, Joseph
This guide is designed to take teachers through a step-by-step process for developing an integrated, standards-based curriculum that focuses on the stories, history, folkways, and agrarian traditions of the local community. Such a place-based curriculum helps students to become culturally literate, makes learning relevant and engaging, draws on…
Jacquez, Geoffrey M; Shi, Chen; Meliker, Jaymie R
2015-01-01
In case-control studies, disease risk not explained by the significant risk factors is the unexplained risk. Considering unexplained risk for specific populations, places, and times can reveal the signature of unidentified risk factors and of risk factors not fully accounted for in the case-control study. This can potentially lead to new hypotheses regarding disease causation. Global, local, and focused Q-statistics are applied to data from a population-based case-control study of 11 southeast Michigan counties. Analyses were conducted using both year- and age-based measures of time. The analyses were adjusted for arsenic exposure, education, smoking, family history of bladder cancer, occupational exposure to bladder cancer carcinogens, age, gender, and race. Significant global clustering of cases was not found. Such a finding would indicate large-scale clustering of cases relative to controls through time. However, highly significant local clusters were found in Ingham County near Lansing, in Oakland County, and in the City of Jackson, Michigan. The Jackson City cluster was observed in working ages and is thus consistent with occupational causes. The Ingham County cluster persists over time, suggesting a broad-based, geographically defined exposure. Focused clusters were found for 20 industrial sites engaged in manufacturing activities associated with known or suspected bladder cancer carcinogens. Set-based tests that adjusted for multiple testing were not significant, although local clusters persisted through time and temporal trends in the probability of local tests were observed. Q analyses provide a powerful tool for unpacking unexplained disease risk from case-control studies. This is particularly useful when the effect of risk factors varies spatially, through time, or through both space and time.
For bladder cancer in Michigan, the next step is to investigate causal hypotheses that may explain the excess bladder cancer risk localized to areas of Oakland and Ingham counties, and to the City of Jackson.
Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1986-01-01
A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient on the basis of numerical comparison with other established methods, which lack the present method's (1) insensitivity to grid cell size and aspect ratio, and (2) ease of convergence-rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure for PDEs with variable coefficients.
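As a toy analogue (not the paper's Fourier-analysis-based scheme), the way a pointwise relaxation adapts to local PDE coefficients can be illustrated with a Jacobi-type iteration for a 1-D variable-coefficient elliptic problem, where the update at each node divides by the local coefficient sum:

```python
import numpy as np

def solve_variable_poisson(a_half, f, h, sweeps=2000):
    """Jacobi-type iteration for the 1-D elliptic problem -(a(x) u')' = f
    with u = 0 at both ends.  a_half holds the coefficient at the n+1
    half-grid points; f holds the n interior right-hand-side values.  The
    update divides by the local coefficient sum, so the effective
    relaxation adapts pointwise to the PDE coefficients, a crude analogue
    of a local relaxation factor."""
    n = len(f)                 # number of interior points
    u = np.zeros(n + 2)        # includes the two boundary values
    for _ in range(sweeps):
        new = u.copy()
        for i in range(1, n + 1):
            aw, ae = a_half[i - 1], a_half[i]   # a at i-1/2 and i+1/2
            new[i] = (aw * u[i - 1] + ae * u[i + 1] + h * h * f[i - 1]) / (aw + ae)
        u = new
    return u
```

For constant coefficients this reduces to plain Jacobi for the Poisson equation, so it can be checked against the exact solution u = sin(pi x) for f = pi^2 sin(pi x).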
A Data Parallel Multizone Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1995-01-01
We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.
Lokerse, Wouter J M; Bolkestein, Michiel; Ten Hagen, Timo L M; de Jong, Marion; Eggermont, Alexander M M; Grüll, Holger; Koning, Gerben A
2016-01-01
Doxorubicin (Dox) loaded thermosensitive liposomes (TSLs) have shown promising results for hyperthermia-induced local drug delivery to solid tumors. Typically, the tumor is heated to hyperthermic temperatures (41-42 °C), which induces intravascular drug release from TSLs within the tumor tissue, leading to high local drug concentrations (1-step delivery protocol). Next to providing a trigger for drug release, hyperthermia (HT) has been shown to be cytotoxic to tumor tissue, to enhance chemosensitivity, and to increase particle extravasation from the vasculature into the tumor interstitial space. The latter can be exploited for a 2-step delivery protocol, where HT is applied prior to i.v. TSL injection to enhance tumor uptake, and again after a 4-hour waiting time to induce drug release. In this study, we compare the 1- and 2-step delivery protocols and investigate which factors are important for a therapeutic response. In murine B16 melanoma and BFS-1 sarcoma cell lines, HT induced enhanced Dox uptake in 2D and 3D models, resulting in enhanced chemosensitivity. In vivo, therapeutic efficacy studies were performed for both tumor models, showing a therapeutic response only for the 1-step delivery protocol. SPECT/CT imaging allowed quantification of liposomal accumulation in both tumor models at physiological temperatures and after an HT treatment. A simple two-compartment model was used to derive respective rates of liposomal uptake, washout, and retention, showing that the B16 model has a twofold higher liposomal uptake compared to the BFS-1 tumor. HT increases uptake and retention of liposomes in both tumor models by the same factor of 1.66, maintaining the absolute differences between the two models. Histology showed that HT-induced apoptosis, blood vessel integrity, and interstitial structures are important factors for TSL accumulation in the investigated tumor types. However, modeling data indicated that in the 2-step delivery protocol the intraliposomal Dox fraction did not reach therapeutically relevant concentrations in the tumor tissue, because the drug leaked from its liposomal carrier, providing an explanation for the observed lack of efficacy.
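The two-compartment picture can be sketched with a simple Euler integration: liposomes enter the tumor from plasma at an uptake rate and leave at a washout rate, while the plasma concentration decays. All rate constants below are illustrative, not the fitted values from the study.

```python
def tumor_uptake(k_in, k_out, k_plasma=0.5, dt=0.01, t_end=24.0):
    """Two-compartment model: liposomes enter the tumor from plasma at
    rate k_in and wash out at rate k_out, while the plasma concentration
    decays exponentially at rate k_plasma.  Returns the tumor
    concentration time series (forward Euler)."""
    c_plasma, c_tumor, series = 1.0, 0.0, []
    for _ in range(int(t_end / dt)):
        dc = k_in * c_plasma - k_out * c_tumor   # net flux into the tumor
        c_plasma += -k_plasma * c_plasma * dt
        c_tumor += dc * dt
        series.append(c_tumor)
    return series
```

Hyperthermia, per the abstract, scales uptake and retention by the same factor of 1.66, which in this sketch means multiplying k_in by 1.66 and dividing k_out by 1.66; the peak tumor concentration then rises accordingly.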
Algorithms for Determining Physical Responses of Structures Under Load
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Ko, William L.
2012-01-01
Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to accurately measure local strains at multiple locations on the surface of a structure. Through a single point-load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves the multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground-test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
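For a cantilever of length l under a tip load P, the bending moment is M(x) = P(l - x), and strain relates to it through eps = M c / (EI), with c the half-depth. The calibration and operational steps can be sketched as follows; the function names and synthetic numbers are illustrative, not from the flight implementation.

```python
def calibrate_ei(strains, x, length, p_cal, c):
    """Per-station flexural rigidity from a tip-load calibration test:
    EI(x) = P_cal * (l - x) * c / eps(x), with c the half-depth."""
    return [p_cal * (length - xi) * c / e for xi, e in zip(x, strains)]

def bending_moment(ei, strains, c):
    """Operational bending moment from measured strain: M = EI * eps / c."""
    return [EI * e / c for EI, e in zip(ei, strains)]
```

Once EI is known at each station from the single calibration test, any in-service strain reading converts directly to a local bending moment.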
NASA Technical Reports Server (NTRS)
Tan, Benjamin
1995-01-01
Using thermochromic liquid crystals to measure surface temperature, an automated transient method with time-varying free-stream temperature is developed to determine local heat transfer coefficients. By allowing the free-stream temperature to vary with time, the need for complicated mechanical components to achieve a step temperature change is eliminated, and by using the thermochromic liquid crystals as temperature indicators, the labor-intensive task of installing many thermocouples is avoided. Bias associated with human perception of the transition of the thermochromic liquid crystal is eliminated by using a high-speed digital camera and a computer. The method is validated by comparison with results obtained by the steady-state method for a circular jet impinging on a flat plate. Several factors affecting the accuracy of the method are evaluated.
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of orders 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller that guarantees that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute solutions at off-step points.
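The h-adaptive part can be illustrated with a textbook-style error controller; this is a generic sketch, not the paper's exact controller, and the safety factor and clamping limits below are assumed values.

```python
def bdf_step_control(dt, err, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary h-adaptive controller for a method of order p: accept the
    step if the local error estimate is within tol, and scale the next
    stepsize by (tol/err)^(1/(p+1)), with a safety factor and clamping."""
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    factor = min(fac_max, max(fac_min, factor))
    accepted = err <= tol
    return accepted, dt * factor
```

A step whose local error estimate exceeds the tolerance is rejected and retried with the smaller stepsize; a very accurate step earns a larger one.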
Qdot Labeled Actin Super Resolution Motility Assay Measures Low Duty Cycle Muscle Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Burghardt, Thomas P.
2013-01-01
Myosin powers contraction in heart and skeletal muscle and is a leading target for mutations implicated in inheritable muscle diseases. During contraction, myosin transduces ATP free energy into the work of muscle shortening against resisting force. Muscle shortening involves relative sliding of myosin and actin filaments. Skeletal actin filaments were fluorescence labeled with a streptavidin-conjugated quantum dot (Qdot) binding biotin-phalloidin on actin. Single Qdots were imaged over time with total internal reflection fluorescence microscopy, then spatially localized to 1-3 nanometers using a super-resolution algorithm as they translated with actin over a surface coated with skeletal heavy meromyosin (sHMM) or full-length β-cardiac myosin (MYH7). Average Qdot-actin velocity matches measurements with rhodamine-phalloidin labeled actin. The sHMM Qdot-actin velocity histogram contains low-velocity events corresponding to actin translation in quantized steps of ~5 nm. The MYH7 velocity histogram has quantized steps at 3 and 8 nm in addition to 5 nm, and larger compliance than sHMM, depending on MYH7 surface concentration. Low duty cycle skeletal and cardiac myosin present challenges for a single-molecule assay because actomyosin dissociates quickly and the freely moving element diffuses away. The in vitro motility assay has modestly more actomyosin interactions, and methylcellulose-inhibited diffusion sustains the complex while preserving a subset of encounters that do not overlap in time on a single actin filament. A single myosin step is isolated in time and space, then characterized using super-resolution. The approach provides quick, quantitative, and inexpensive step-size measurement for low duty cycle muscle myosin. PMID:23383646
Parallel algorithms for boundary value problems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A general approach to solving boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step, where all P available processors work in parallel, and the global step, where one processor solves a tridiagonal linear system of order P. The main advantages of this approach are twofold. First, the approach is very flexible, especially in the local step, and thus the algorithm can be used with any number of processors and with any SIMD or MIMD machine. Second, the communication complexity is very small, so the approach can be used just as easily on shared-memory machines. Several examples of using this strategy are discussed.
The YMCA/Steps Community Collaboratives, 2004-2008.
Adamson, Katie; Shepard, Dennis; Easton, Alyssa; Jones, Ellen S
2009-07-01
Since the YMCA/Steps National Partnership began in 2004, the collaborative approach has built local synergy, linked content experts, and engaged national partners to concentrate on some of the most pressing health issues in the United States. Together, national and local partners used evidence-based public health programs to address risk factors such as poor nutrition, physical inactivity, and tobacco use. This article describes the YMCA/Steps National Partnership and focuses on the experiences and achievements of the YMCA/Steps Community Collaboratives, conducted with technical assistance from the National Association of Chronic Disease Directors between 2004 and 2008. We introduce some of the fundamental concepts underlying the partnership's success and share evaluation results.
Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.
2014-01-01
Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal imaging device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model.
This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient. PMID:25086523
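A minimal numpy sketch of the PCA motion-model machinery described in the Methods, with random numbers standing in for real DVFs; the retained rank and all array sizes below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for phase-wise displacement vector fields (DVFs):
# rows = breathing phases, columns = flattened displacement components.
dvfs = rng.normal(size=(10, 300))

# PCA via SVD of the mean-centered DVFs: eigenvectors carry the spatial
# information, eigen-coefficients the temporal information.
mean = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 3                                    # retain a compact basis
eigvecs = vt[:k]                         # spatial modes (k x 300)
coeffs = (dvfs - mean) @ eigvecs.T       # per-phase eigen-coefficients

# A new DVF (and hence a volumetric image) is synthesized by updating the
# eigen-coefficients (here: reusing phase 0's) and re-expanding the basis.
dvf_new = mean + coeffs[0] @ eigvecs
```

In the actual method the coefficients are not copied from a phase but optimized so that the projected volume matches the measured EPID image.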
Adsorption of xenon on vicinal copper and platinum surfaces
NASA Astrophysics Data System (ADS)
Baker, Layton
The adsorption of xenon was studied on Cu(111), Cu(221), Cu(643) and on Pt(111), Pt(221), and Pt(531) using low energy electron diffraction (LEED), temperature programmed desorption (TPD) of xenon, and ultraviolet photoemission of adsorbed xenon (PAX). These experiments were performed to study the atomic and electronic structure of stepped and step-kinked, chiral metal surfaces. Xenon TPD and PAX were performed on each surface in an attempt to titrate terrace, step-edge, and kink adsorption sites by adsorption energetics (TPD) and local work function differences (PAX). Due to the complex behavior of xenon on the vicinal copper and platinum metal surfaces, adsorption sites on these surfaces could not be adequately titrated by xenon TPD. On Cu(221) and Cu(643), xenon desorption from step adsorption sites was not apparent, leading to the conclusion that the energy difference between terrace and step adsorption is minuscule. On Pt(221) and Pt(531), xenon TPD indicated that xenon prefers to bond at step edges and that the xenon-xenon interaction at step edges is repulsive, but no further indication of step-kink adsorption was observed. The Pt(221) and Pt(531) TPD spectra indicated that the xenon overlayer undergoes strong compression near monolayer coverage on these surfaces due to repulsion between step-edge adsorbed xenon and other encroaching xenon atoms. The PAX experiments on the copper and platinum surfaces demonstrated that the step adsorption sites have lower local work functions than terrace adsorption sites and that higher step density leads to a larger separation in the local work function of terrace and step adsorption sites. The PAX spectra also indicated that, for all surfaces studied at 50-70 K, step adsorption is favored at low coverage but the step sites are not saturated until monolayer coverage is reached; this observation is due to the large entropy difference between terrace and step adsorption states and to repulsive interactions between xenon atoms adsorbed at step edges (on the platinum surfaces). The results herein provide several novel observations regarding the adsorptive behavior of xenon on vicinal copper and platinum surfaces.
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper presents a study of the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. This possibility must be taken into account when computing the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these aspects, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is that of Grime, which relaxes the usual fully connected hypothesis. By fully connected, one means that the information in a fully connected system is totally distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that decentralized assignment control remains manageable even though the system is not fully connected.
Modeling multivariate time series on manifolds with skew radial basis functions.
Jamshidi, Arta A; Kirby, Michael J
2011-01-01
We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
Syrett, Camille M.; Sindhava, Vishal; Hodawadekar, Suchita; Myles, Arpita; Liang, Guanxiang; Zhang, Yue; Nandi, Satabdi; Cancro, Michael; Atchison, Michael
2017-01-01
X-chromosome inactivation (XCI) in female lymphocytes is uniquely regulated, as the inactive X (Xi) chromosome lacks localized Xist RNA and heterochromatin modifications. Epigenetic profiling reveals that Xist RNA is lost from the Xi at the pro-B cell stage and that additional heterochromatic modifications are gradually lost during B cell development. Activation of mature B cells restores Xist RNA and heterochromatin to the Xi in a dynamic two-step process that differs in timing and pattern, depending on the method of B cell stimulation. Finally, we find that the DNA-binding domain of YY1 is necessary for XCI in activated B cells, as ex vivo YY1 deletion results in loss of Xi heterochromatin marks and up-regulation of X-linked genes. Ectopic expression of the YY1 zinc finger domain is sufficient to restore Xist RNA localization during B cell activation. Together, our results indicate that Xist RNA localization is critical for maintaining XCI in female lymphocytes, and that chromatin changes on the Xi during B cell development and the dynamic nature of YY1-dependent XCI maintenance in mature B cells predispose X-linked immunity genes to reactivation. PMID:28991910
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
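In the far-field two-microphone case, the hearing robots' time-delay-estimation step reduces to a simple bearing formula; this sketch assumes a far-field source and a nominal speed of sound, and is not the paper's full microphone-array geometry:

```python
import math

def bearing_from_tde(delta_t, mic_spacing, c=343.0):
    """Far-field bearing of a sound source from the time-delay estimate
    (TDE) between two microphones: theta = asin(c * dt / d), returned in
    degrees relative to the array broadside."""
    arg = c * delta_t / mic_spacing
    arg = max(-1.0, min(1.0, arg))  # clamp against noisy TDEs
    return math.degrees(math.asin(arg))
```

A zero delay puts the source on the broadside; a delay of d/c puts it on the array axis.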
Concordance cosmology without dark energy
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Dobos, László; Beck, Róbert; Szapudi, István; Csabai, István
2017-07-01
According to the separate universe conjecture, spherically symmetric sub-regions in an isotropic universe behave like mini-universes with their own cosmological parameters. This is an excellent approximation in both Newtonian and general relativistic theories. We estimate local expansion rates for a large number of such regions, and use a scale parameter calculated from the volume-averaged increments of local scale parameters at each time step in an otherwise standard cosmological N-body simulation. The particle mass, corresponding to a coarse graining scale, is an adjustable parameter. This mean field approximation neglects tidal forces and boundary effects, but it is the first step towards a non-perturbative statistical estimation of the effect of non-linear evolution of structure on the expansion rate. Using our algorithm, a simulation with an initial Ωm = 1 Einstein-de Sitter setting closely tracks the expansion and structure growth history of the Λ cold dark matter (ΛCDM) cosmology. Due to small but characteristic differences, our model can be distinguished from the ΛCDM model by future precision observations. Moreover, our model can resolve the emerging tension between local Hubble constant measurements and the Planck best-fitting cosmology. Further improvements to the simulation are necessary to investigate light propagation and confirm full consistency with cosmic microwave background observations.
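The volume-averaged stepping idea can be caricatured in a few lines. This is a toy sketch with matter-only regions of equal volume and made-up density parameters, not the paper's N-body algorithm:

```python
import numpy as np

H0 = 0.07                                # ~68 km/s/Mpc in 1/Gyr, illustrative
omegas = np.array([0.2, 0.5, 1.0, 2.0])  # assumed local Omega_m per region

def step(a_local, dt):
    # Matter-only Friedmann equation per mini-universe: a' = H0 sqrt(Om/a)
    return a_local + dt * H0 * np.sqrt(omegas / a_local)

a_local = np.full(4, 0.05)   # all regions start at the same early epoch
a_global = [0.05]
dt = 0.01
for _ in range(1000):
    a_new = step(a_local, dt)
    # Global scale factor advances by the (here equal-volume) average of
    # the local scale-factor increments.
    a_global.append(a_global[-1] + np.mean(a_new - a_local))
    a_local = a_new
```

Underdense regions expand faster than the background, so the averaged expansion history departs from the single-fluid Einstein-de Sitter one.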
Modified unified kinetic scheme for all flow regimes.
Liu, Sha; Zhong, Chengwen
2012-06-01
A modified unified kinetic scheme for the prediction of fluid flow behaviors in all flow regimes is described. The time evolution of macrovariables at the cell interface is calculated with the idea that both free-transport and collision mechanisms should be considered. The time evolution of macrovariables is obtained through the conservation constraints. The time evolution of the local Maxwellian distribution is obtained directly through the one-to-one mapping from the evolution of macrovariables. These improvements provide more physical realism in the flow behavior and more accurate numerical results in all flow regimes, especially in the complex transition flow regime. In addition, the improvements introduce no extra computational complexity.
Time-dependent perpendicular fluctuations in the driven lattice Lorentz gas
NASA Astrophysics Data System (ADS)
Leitmann, Sebastian; Schwab, Thomas; Franosch, Thomas
2018-02-01
We present results for the fluctuations of the displacement of a tracer particle on a planar lattice pulled by a step force in the presence of impenetrable, immobile obstacles. The fluctuations perpendicular to the applied force are evaluated exactly in first order of the obstacle density for arbitrarily strong pulling and all times. The complex time-dependent behavior is analyzed in terms of the diffusion coefficient, local exponent, and the non-Skellam parameter, which quantifies deviations from the dynamics on the lattice in the absence of obstacles. The non-Skellam parameter along the force is analyzed in terms of an asymptotic model and reveals a power-law growth for intermediate times.
40 CFR 35.2202 - Step 2+3 projects.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40, Protection of Environment; STATE AND LOCAL ASSISTANCE; Grants for Construction of Treatment Works; § 35.2202 Step 2+3 projects. (a) Prior to initiating action to acquire eligible real property, a Step 2+3 grantee shall submit for...
Einterz, E M; Younge, O; Hadi, C
2018-06-01
Objective: To determine, subsequent to the expansion of a county health department's refugee screening process from a one-step to a two-step process, the change in early loss to follow-up and time to initiation of treatment of new refugees with latent tuberculosis infection (LTBI). Study design: Quasi-experimental, quantitative. Methods: Review of patient medical records. Results: Among 384 refugees who met the case definition of LTBI without prior tuberculosis (TB) classification, the proportion of cases lost to early follow-up fell from 12.5% to 0% after expansion to a two-step screening process. The average interval between in-country arrival and initiation of LTBI treatment was shortened by 41.4%. Conclusions: The addition of a second step to the refugee screening process was correlated with significant improvements in the county's success in tracking and treating cases of LTBI in refugees. Given the disproportionate importance of foreign-born cases of LTBI to the incidence of TB disease in low-incidence countries, these improvements could have a substantial impact on overall TB control, and the process described could serve as a model for other local health department refugee screening programs. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top-layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. Strategies used in the proposed procedure could benefit both the modeling and the experimental data processing not only of DRS but also of related approaches such as near-infrared spectroscopy.
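The two-step procedure (coarse lookup-table initialization, then iterative fitting) can be sketched with a toy forward model. The model, grids, and the naive local search below are stand-ins for the paper's Monte Carlo-based lookup table and fitting algorithm:

```python
import numpy as np

# Toy forward model standing in for the reflectance spectrum of a
# two-layered tissue model; parameters p0 (absorption-like) and
# p1 (thickness-like) are purely illustrative.
wavelengths = np.linspace(0.0, 1.0, 50)

def forward(p0, p1):
    return np.exp(-p0 * wavelengths) * (1.0 - 0.5 * np.exp(-p1))

# Step 1: initial estimation from a precomputed lookup table (coarse grid).
grid0 = np.linspace(0.1, 2.0, 20)
grid1 = np.linspace(0.1, 2.0, 20)
table = np.array([[forward(a, b) for b in grid1] for a in grid0])

def initial_guess(measured):
    err = ((table - measured) ** 2).sum(axis=2)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return grid0[i], grid1[j]

# Step 2: iterative fitting refines the guess (here: a naive local search
# standing in for the paper's curve-fitting optimizer).
def refine(measured, p, steps=200, h=0.01):
    cost = lambda q: ((forward(*q) - measured) ** 2).sum()
    p = np.array(p, dtype=float)
    for _ in range(steps):
        for k in range(2):
            for d in (h, -h):
                trial = p.copy()
                trial[k] += d
                if cost(trial) < cost(p):
                    p = trial
    return p

truth = (0.73, 1.31)
measured = forward(*truth)
p = refine(measured, initial_guess(measured))
```

Starting the fit from the lookup-table guess keeps the local search near the global minimum, which is the point of the hybrid design.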
ERIC Educational Resources Information Center
Rice, Eric; And Others
This guidebook focuses on the first of five steps included in a planning system for improving local secondary and postsecondary program and facilities accessibility: identifying barriers. The first five sections of the booklet are comprised of self-instructional descriptions of five needs-assessment procedures that can be used to identify…
ERIC Educational Resources Information Center
Rice, Eric; And Others
This guidebook focuses on the third of five steps included in a planning system for improving local secondary and postsecondary program and facilities accessibility: generating strategies. The guidebook is comprised of four sections, each describing a specific technique for generating strategies. Techniques presented are (1) nominal group…
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
Using Digital Media Advertising in Early Psychosis Intervention.
Birnbaum, Michael L; Garrett, Chantel; Baumel, Amit; Scovel, Maria; Rizvi, Asra F; Muscat, Whitney; Kane, John M
2017-11-01
Identifying and engaging youth with early-stage psychotic disorders in order to facilitate timely treatment initiation remains a major public health challenge. Although advertisers routinely use the Internet to directly target consumers, limited efforts have focused on applying available technology to proactively encourage help-seeking in the mental health community. This study explores how one might take advantage of Google AdWords in order to reach prospective patients with early psychosis. A landing page was developed with the primary goal of encouraging help-seeking individuals in New York City to contact their local early psychosis intervention clinic. In order to provide the best opportunity to reach the intended audience, Google AdWords was utilized to link more than 2,000 selected search terms to strategically placed landing page advertisements. The campaign ran for 14 weeks between April 11 and July 18, 2016 and had a total budget of $1,427. The ads appeared 191,313 times and were clicked on 4,350 times, at a per-click cost of $0.33. Many users took additional help-seeking steps, including obtaining psychosis-specific information/education (44%), completing a psychosis self-screener (15%), and contacting the local early treatment program (1%). Digital ads appear to be a reasonable and cost-effective method to reach individuals who are searching for behavioral health information online. More research is needed to better understand the many complex steps between online search inquiries and making first clinical contact.
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol-assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating-partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure-detection application in a polycrystalline alloy material.
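The iterative nearest-neighbor symbol assignment with discrepancy descent can be illustrated with a minimal, k-means-like sketch. This is an assumed simplification: a plain squared-discrepancy objective over individual samples, far simpler than the Hirata et al. objective over reconstructions, but it shows the alternating assign/update structure.

```python
import numpy as np

def symbolize(series, n_symbols=2, n_iters=50, seed=0):
    """Toy alternating scheme: assign each sample to the nearest
    reconstruction value, then update each value as the mean of its
    assigned samples. Each full pass cannot increase the squared
    discrepancy, giving guaranteed descent to a local optimum."""
    rng = np.random.default_rng(seed)
    values = rng.choice(series, size=n_symbols, replace=False).astype(float)
    labels = np.zeros(len(series), dtype=int)
    for _ in range(n_iters):
        # nearest-neighbor symbol assignment
        labels = np.argmin(np.abs(series[:, None] - values[None, :]), axis=1)
        # update reconstruction values to minimize discrepancy
        for k in range(n_symbols):
            if np.any(labels == k):
                values[k] = series[labels == k].mean()
    return labels, values

# logistic map time series, a standard chaotic-map test case
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
labels, values = symbolize(x, n_symbols=2)
```

For the full logistic map at r = 4, the known generating partition is the split at x = 0.5; a sketch like this recovers a binary symbolization in that spirit, without the joint time-series clustering the paper performs.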
Step Density Profiles in Localized Chains
NASA Astrophysics Data System (ADS)
De Roeck, Wojciech; Dhar, Abhishek; Huveneers, François; Schütz, Marius
2017-06-01
We consider two types of strongly disordered one-dimensional Hamiltonian systems coupled to baths (energy or particle reservoirs) at the boundaries: strongly disordered quantum spin chains and disordered classical harmonic oscillators. These systems are believed to exhibit localization, implying in particular that the conductivity decays exponentially in the chain length L. We ask however for the profile of the (very slowly) transported quantity in the steady state. We find that this profile is a step-function, jumping in the middle of the chain from the value set by the left bath to the value set by the right bath. This is confirmed by numerics on a disordered quantum spin chain of 9 spins and on much longer chains of harmonic oscillators. From theoretical arguments, we find that the width of the step grows not faster than √L, and we confirm this numerically for harmonic oscillators. In this case, we also observe a drastic breakdown of local equilibrium at the step, resulting in a heavily oscillating temperature profile.
Efficient QoS-aware Service Composition
NASA Astrophysics Data System (ADS)
Alrifai, Mohammad; Risse, Thomas
Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
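The two-step idea can be sketched as follows. Here a trivial even split of a global response-time budget stands in for the MILP decomposition step, and the task names, service response times, and prices are hypothetical illustrations, not data from the paper.

```python
# Hypothetical candidate services per task: (response_time_ms, price).
candidates = {
    "payment":  [(120, 5.0), (80, 9.0), (200, 2.0)],
    "shipping": [(150, 3.0), (90, 6.0)],
}

def compose(candidates, global_budget_ms):
    """Two-step composition sketch: (1) decompose the global
    response-time budget into per-task local budgets (here a naive
    even split stands in for the MILP step), then (2) local selection:
    each task independently picks its cheapest feasible service."""
    local_budget = global_budget_ms / len(candidates)
    plan = {}
    for task, services in candidates.items():
        feasible = [s for s in services if s[0] <= local_budget]
        if not feasible:
            return None  # local constraint unsatisfiable for this task
        plan[task] = min(feasible, key=lambda s: s[1])
    return plan

plan = compose(candidates, global_budget_ms=300)
```

Note why the decomposition matters: once local budgets are fixed, each task's selection is independent of every other task, so step two scales linearly in the number of services, which is the efficiency property the abstract claims.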
3-D Localization Method for a Magnetically Actuated Soft Capsule Endoscope and Its Applications
Yim, Sehyuk; Sitti, Metin
2014-01-01
In this paper, we present a 3-D localization method for a magnetically actuated soft capsule endoscope (MASCE). The proposed localization scheme consists of three steps. First, MASCE is oriented to be coaxially aligned with an external permanent magnet (EPM). Second, MASCE is axially contracted by the enhanced magnetic attraction of the approaching EPM. Third, MASCE recovers its initial shape by the retracting EPM as the magnetic attraction weakens. The combination of the estimated direction in the coaxial alignment step and the estimated distance in the shape deformation (recovery) step provides the position of MASCE in 3-D. Experiments show that the proposed localization method provides a 3-D distance error of 2.0–3.7 mm. This study also introduces two new applications of the proposed localization method. First, based on the trace of contact points between the MASCE and the surface of the stomach, the 3-D geometrical model of a synthetic stomach was reconstructed. Next, the relative tissue compliance at each local contact point in the stomach was characterized by measuring the local tissue deformation at each point due to the preloading force. Finally, the characterized relative tissue compliance parameter was mapped onto the geometrical model of the stomach toward future use in disease diagnosis. PMID:25383064
ERIC Educational Resources Information Center
Altstadt, David
2011-01-01
The U.S. economy will emerge from the Great Recession radically transformed from what it was a generation ago. Changes are afoot affecting which occupations and industry sectors will produce employment growth, as well as what education credentials, competencies, and skills will be required to do those jobs. Community colleges already take steps to…
A special purpose knowledge-based face localization method
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad; Jassim, Sabah
2008-04-01
This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on mobile constrained devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
Brody, Samuel D; Zahran, Sammy; Highfield, Wesley E; Bernhardt, Sarah P; Vedlitz, Arnold
2009-06-01
Floods continue to inflict the most damage upon human communities among all natural hazards in the United States. Because localized flooding tends to be spatially repetitive over time, local decision-makers often have an opportunity to learn from previous events and make proactive policy adjustments to reduce the adverse effects of a subsequent storm. Despite the importance of understanding the degree to which local jurisdictions learn from flood risks and under what circumstances, little if any empirical, longitudinal research has been conducted along these lines. This article addresses the research gap by examining the change in local flood mitigation policies in Florida from 1999 to 2005. We track 18 different mitigation activities organized into four series of activities under the Federal Emergency Management Agency's (FEMA) Community Rating System (CRS) for every local jurisdiction in Florida participating in the FEMA program on a yearly time step. We then identify the major factors contributing to policy changes based on CRS scores over the seven-year study period. Using multivariate statistical models to analyze both natural and social science data, we isolate the effects of several variables categorized into the following groups: hydrologic conditions, flood disaster history, and socioeconomic and human capital controls. Results indicate that local jurisdictions do in fact learn from histories of flood risk, and this process is expedited under specific conditions.
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
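For reference, the serial Thomas algorithm whose forward and backward sweeps the pipelined reformulation overlaps can be sketched as follows (a textbook serial version, not the paper's parallel implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system Ax = d:
    a = sub-diagonal (a[0] unused), b = diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side.
    The pipelined variant parallelizes exactly these two sweeps
    across many independent lines of a 3-D grid."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson-like system: -x_{i-1} + 2 x_i - x_{i+1} = h^2
n = 50
a = -np.ones(n)
b = 2.0 * np.ones(n)
c = -np.ones(n)
d = np.full(n, 1.0 / (n + 1) ** 2)
x = thomas(a, b, c, d)
```

The data dependency visible in the two loops (each `i` needs `i-1` or `i+1`) is what forces pipelining when many processors share one line, and why the backward sweep can begin as soon as the first lines finish their forward sweep.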
Sequence-dependent response of DNA to torsional stress: a potential biological regulation mechanism.
Reymer, Anna; Zakrzewska, Krystyna; Lavery, Richard
2018-02-28
Torsional restraints on DNA change in time and space during the life of the cell and are an integral part of processes such as gene expression, DNA repair and packaging. The mechanical behavior of DNA under torsional stress has been studied on a mesoscopic scale, but little is known concerning its response at the level of individual base pairs and the effects of base pair composition. To answer this question, we have developed a geometrical restraint that can accurately control the total twist of a DNA segment during all-atom molecular dynamics simulations. By applying this restraint to four different DNA oligomers, we are able to show that DNA responds to both under- and overtwisting in a very heterogeneous manner. Certain base pair steps, in specific sequence environments, are able to absorb most of the torsional stress, leaving other steps close to their relaxed conformation. This heterogeneity also affects the local torsional modulus of DNA. These findings suggest that modifying torsional stress on DNA could act as a modulator for protein binding via the heterogeneous changes in local DNA structure.
2MASS Catalog Server Kit Version 2.1
NASA Astrophysics Data System (ADS)
Yamauchi, C.
2013-10-01
The 2MASS Catalog Server Kit is open source software for easily constructing a high-performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches similar to SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various user demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks or heavy use cases, such as retrieving large numbers of records within a small period of time. This software is well suited for such purposes, and the expanded catalog support and other improvements in version 2.1 cover a wider range of applications, including advanced calibration systems and scientific studies using complicated SQL queries. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
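The kind of positional search the kit's stored functions optimize can be illustrated with a naive in-memory cone search (a sketch only; the actual kit performs this test inside PostgreSQL against indexed tables, and the small catalog below is hypothetical):

```python
import math

def cone_search(catalog, ra0, dec0, radius_deg):
    """Naive cone search: keep sources whose angular distance from
    (ra0, dec0) is within radius_deg, by comparing unit-vector dot
    products against cos(radius). A database stored function applies
    the same geometric test, but via an index rather than a scan."""
    def unit(ra, dec):
        ra, dec = math.radians(ra), math.radians(dec)
        return (math.cos(dec) * math.cos(ra),
                math.cos(dec) * math.sin(ra),
                math.sin(dec))
    cx, cy, cz = unit(ra0, dec0)
    cos_r = math.cos(math.radians(radius_deg))
    hits = []
    for src in catalog:
        x, y, z = unit(src["ra"], src["dec"])
        if x * cx + y * cy + z * cz >= cos_r:
            hits.append(src)
    return hits

# tiny hypothetical catalog (degrees)
catalog = [{"ra": 10.0, "dec": 20.0},
           {"ra": 10.2, "dec": 20.1},
           {"ra": 50.0, "dec": -5.0}]
hits = cone_search(catalog, ra0=10.0, dec0=20.0, radius_deg=1.0)
```

Working with unit vectors avoids the pole and RA-wraparound pitfalls of comparing raw coordinate differences, which is one reason catalog servers store precomputed unit vectors alongside (RA, Dec).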
NASA Technical Reports Server (NTRS)
Bon, Bruce; Seraji, Homayoun
2007-01-01
Rover Graphical Simulator (RGS) is a package of software that generates images of the motion of a wheeled robotic exploratory vehicle (rover) across terrain that includes obstacles and regions of varying traversability. The simulated rover moves autonomously, utilizing reasoning and decision-making capabilities of a fuzzy-logic navigation strategy to choose its path from an initial to a final state. RGS provides a graphical user interface for control and monitoring of simulations. The numerically simulated motion is represented as discrete steps with a constant time interval between updates. At each simulation step, a dot is placed at the old rover position and a graphical symbol representing the rover is redrawn at the new, updated position. The effect is to leave a trail of dots depicting the path traversed by the rover, the distances between dots being proportional to the local speed. Obstacles and regions of low traversability are depicted as filled circles, with buffer zones around them indicated by enclosing circles. The simulated robot is equipped with onboard sensors that can detect regional terrain traversability and local obstacles out to specified ranges. RGS won the NASA Group Achievement Award in 2002.
A continuous arc delivery optimization algorithm for CyberKnife m6.
Kearney, Vasant; Descovich, Martina; Sudhyadhom, Atchar; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D
2018-06-01
This study aims to reduce the delivery time of CyberKnife m6 treatments by allowing for noncoplanar continuous arc delivery. To achieve this, a novel noncoplanar continuous arc delivery optimization algorithm was developed for the CyberKnife m6 treatment system (CyberArc-m6). CyberArc-m6 uses a five-step overarching strategy, in which an initial set of beam geometries is determined, the robotic delivery path is calculated, direct aperture optimization is conducted, intermediate MLC configurations are extracted, and the final beam weights are computed for the continuous arc radiation source model. This algorithm was implemented on five prostate and three brain patients, previously planned using a conventional step-and-shoot CyberKnife m6 delivery technique. The dosimetric quality of the CyberArc-m6 plans was assessed using locally confined mutual information (LCMI), conformity index (CI), heterogeneity index (HI), and a variety of common clinical dosimetric objectives. Using conservative optimization tuning parameters, CyberArc-m6 plans were able to achieve an average CI difference of 0.036 ± 0.025, an average HI difference of 0.046 ± 0.038, and an average LCMI of 0.920 ± 0.030 compared with the original CyberKnife m6 plans. Including a 5 s per minute image alignment time and a 5-min setup time, conservative CyberArc-m6 plans achieved an average treatment delivery speed up of 1.545x ± 0.305x compared with step-and-shoot plans. The CyberArc-m6 algorithm was able to achieve dosimetrically similar plans compared to their step-and-shoot CyberKnife m6 counterparts, while simultaneously reducing treatment delivery times. © 2018 American Association of Physicists in Medicine.
Zhang, Fan; Allen, Andrew J.; Levine, Lyle E.; Vaudin, Mark D.; Skrtic, Drago; Antonucci, Joseph M.; Hoffman, Kathleen M.; Giuseppetti, Anthony A.; Ilavsky, Jan
2014-01-01
Objective To investigate the complex structural and dynamical conversion process of the amorphous-calcium-phosphate (ACP) -to-apatite transition in ACP based dental composite materials. Methods Composite disks were prepared using zirconia hybridized ACP fillers (0.4 mass fraction) and photo-activated Bis-GMA/TEGDMA resin (0.6 mass fraction). We performed an investigation of the solution-mediated ACP-to-apatite conversion mechanism in controlled acidic aqueous environment with in situ ultra-small angle X-ray scattering based coherent X-ray photon correlation spectroscopy and ex situ X-ray diffraction, as well as other complementary techniques. Results We established that the ACP-to-apatite conversion in ACP composites is a two-step process, owing to the sensitivity to local structural changes provided by coherent X-rays. Initially, ACP undergoes a local microstructural rearrangement without losing its amorphous character. We established the catalytic role of the acid and found the time scale of this rearrangement strongly depends on the pH of the solution, which agrees with previous findings about ACP without the polymer matrix being present. In the second step, ACP is converted to an apatitic form with the crystallinity of the formed crystallites being poor. Separately, we also confirmed that in the regular Zr-modified ACP the rate of ACP conversion to hydroxyapatite is slowed significantly compared to unmodified ACP, which is beneficial for targeted slow release of functional calcium and phosphate ions from dental composite materials. Significance For the first time, we were able to follow the complete solution-mediated transition process from ACP to apatite in this class of dental composites in a controlled aqueous environment. A two-step process, suggested previously, was conclusively identified. PMID:25082155
NASA Astrophysics Data System (ADS)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
Data assimilation of citizen collected information for real-time flood hazard mapping
NASA Astrophysics Data System (ADS)
Sayama, T.; Takara, K. T.
2017-12-01
Many studies of data assimilation in hydrology have focused on the integration of satellite remote sensing and in-situ monitoring data into hydrologic or land surface models. For flood predictions also, recent studies have demonstrated the assimilation of remotely sensed inundation information with flood inundation models. In actual flood disaster situations, citizen-collected information, including local reports by residents and rescue teams and, more recently, tweets via social media, also contains valuable information. The main interest of this study is how to effectively use such citizen-collected information for real-time flood hazard mapping. Here we propose a new data assimilation technique based on pre-conducted ensemble inundation simulations that updates inundation depth distributions sequentially as local data become available. The proposed method consists of the following two steps. The first step is a weighted average of the preliminary ensemble simulations, whose weights are updated by a Bayesian approach. The second step is an optimal interpolation, where the covariance matrix is calculated from the ensemble simulations. The proposed method was applied to case studies including an actual flood event. Two situations are considered: an idealized one, which assumes continuous flood inundation depth information is available at multiple locations, and a more realistic one for a severe flood disaster, which assumes the available information is uncertain and non-continuous. The results show that, in the idealized situation, the large-scale inundation during the flood was estimated reasonably, with RMSE < 0.4 m on average. In the more realistic situation, the error becomes larger (RMSE 0.5 m) and the impact of the optimal interpolation becomes comparatively less effective.
Nevertheless, the applications of the proposed data assimilation method demonstrated a high potential of this method for assimilating citizen collected information for real-time flood hazard mapping in the future.
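The first (Bayesian weighting) step can be sketched as follows, assuming a Gaussian observation error for a citizen-reported depth; the three member depths and the 0.3 m error scale below are illustrative assumptions, not values from the study:

```python
import numpy as np

def update_weights(weights, ensemble_depths, obs_depth, obs_sigma=0.3):
    """Bayesian reweighting of pre-computed ensemble inundation members:
    multiply each member's weight by the Gaussian likelihood of the
    reported depth at the report location, then renormalize. Repeated
    application as reports arrive gives the sequential update."""
    lik = np.exp(-0.5 * ((ensemble_depths - obs_depth) / obs_sigma) ** 2)
    w = weights * lik
    return w / w.sum()

# three hypothetical ensemble members predicting depth (m) at one location
w = np.array([1 / 3, 1 / 3, 1 / 3])
member_depths = np.array([0.2, 1.0, 2.5])
w = update_weights(w, member_depths, obs_depth=1.1)
best = int(np.argmax(w))  # the member closest to the report dominates
```

The assimilated depth map would then be the weights applied to the full member maps, which is how a single point report can correct the inundation estimate everywhere the ensemble members differ.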
Multiple-Beam Detection of Fast Transient Radio Sources
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.
2011-01-01
A method has been designed for using multiple independent stations to discriminate fast transient radio sources from local anomalies, such as antenna noise or radio frequency interference (RFI). This can improve the sensitivity of incoherent detection for geographically separated stations such as the very long baseline array (VLBA), the future square kilometer array (SKA), or any other coincident observations by multiple separated receivers. The transients are short, broadband pulses of radio energy, often just a few milliseconds long, emitted by a variety of exotic astronomical phenomena. They generally represent rare, high-energy events, making them of great scientific value. For RFI-robust adaptive detection of transients using multiple stations, a family of algorithms has been developed. The technique exploits the fact that the separated stations constitute statistically independent samples of the target. This can be used to adaptively ignore RFI events for superior sensitivity. If the antenna signals are independent and identically distributed (IID), then RFI events are simply outlier data points that can be removed through robust estimation such as a trimmed or Winsorized estimator. The "trimmed" alternative, which excises the strongest n signals from the list of short-beamed intensities, is considered here. Because local RFI is independent at each antenna, this interference is unlikely to occur at many antennas at the same time step. Trimming the strongest signals provides robustness to RFI that can theoretically outperform even the detection performance of the same number of antennas at a single site. This algorithm requires sorting the signals at each time step and dispersion measure, an operation that is computationally tractable for existing array sizes. An alternative uses the various stations to form an ensemble estimate of the cumulative distribution function (CDF) evaluated at each time step.
Both methods outperform standard detection strategies on a test sequence of VLBA data, and both are efficient enough for deployment in real-time, online transient detection applications.
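The trimmed multi-station statistic can be sketched as follows; the station count, threshold, and injected signals are illustrative assumptions, not the configuration used with the VLBA data:

```python
import numpy as np

def trimmed_detect(intensities, n_trim=2, threshold=10.0):
    """Trimmed multi-station detection: at each time step, sort the
    per-station intensities, excise the n_trim strongest (where
    station-local RFI would appear), sum the rest, and threshold.
    intensities has shape (n_stations, n_steps)."""
    s = np.sort(intensities, axis=0)   # per-step sort over stations
    stat = s[:-n_trim].sum(axis=0)     # drop the n_trim strongest
    return stat > threshold

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(10, 100))   # 10 stations, 100 time steps
x[3, 40] = 50.0                             # strong RFI at one station only
x[:, 70] += 2.0                             # genuine pulse seen at all stations
det = trimmed_detect(x, n_trim=2, threshold=10.0)
```

Because the spike at step 40 lives at a single station, trimming the two strongest values removes it entirely, while a broadband pulse present at every station survives the trim largely intact; that asymmetry is the whole point of the estimator.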
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supardiyono; Santosa, Bagus Jaya; Physics Department, Faculty of Mathematics and Natural Sciences, Sepuluh Nopember Institute of Technology, Surabaya
A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, rms 0.56 s, and a maximum gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.
Development of the automated bunker door by using a microcontroller-system
NASA Astrophysics Data System (ADS)
Ahmad, M. A.; Leo, K. W.; Mohamad, G. H. P.; Ahmad, A.; Hashim, S. A.; Chulan, R. M.; Baijan, A. H.
2018-01-01
The new low-energy electron beam accelerator bunker was designed and built locally to house a 500 keV electron beam accelerator at Block 43T in the Malaysian Nuclear Agency. This bunker is equipped with a locally made radiation shielding door of 10 tons. Originally, this door was moved manually via a wheel fitted with a gear system. However, the door is heavy and takes a long time to operate manually. To overcome these issues, a new automated control system has been designed and developed. In this paper, the complete design steps of the automated control system, based on a PIC16F84A microcontroller, are described.
40 CFR 35.935-4 - Step 2+3 projects.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR Protection of Environment, State and Local Assistance, Grants for Construction of Treatment Works (Clean Water Act), § 35.935-4, Step 2+3 projects. A grantee which has received step 2+3 grant assistance must make submittals required by...
Oxidation Behavior of Carbon Fiber-Reinforced Composites
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.
2008-01-01
OXIMAP is a numerical (FEA-based) solution tool capable of calculating the carbon fiber and fiber coating oxidation patterns within any arbitrarily shaped carbon silicon carbide composite structure as a function of time, temperature, and the environmental oxygen partial pressure. The mathematical formulation is derived from the mechanics of the flow of ideal gases through a chemically reacting, porous solid. The result of the formulation is a set of two coupled, non-linear differential equations written in terms of the oxidant and oxide partial pressures. The differential equations are solved simultaneously to obtain the partial vapor pressures of the oxidant and oxides as a function of the spatial location and time. The local rate of carbon oxidation is determined at each time step using the map of the local oxidant partial vapor pressure along with the Arrhenius rate equation. The non-linear differential equations are cast into matrix equations by applying the Bubnov-Galerkin weighted residual finite element method, allowing for the solution of the differential equations numerically.
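The per-time-step oxidation-rate update the abstract describes can be sketched with an Arrhenius law scaled by the local oxidant partial pressure. The pre-factor and activation energy below are placeholders, not OXIMAP's calibrated constants:

```python
import math

def carbon_oxidation_rate(p_o2, temp_k, a=1.0e4, e_act=1.2e5):
    """Local carbon oxidation rate: an Arrhenius temperature factor
    scaled by the local oxygen partial pressure (atm). In an OXIMAP-style
    solve, this would be evaluated at every node at every time step,
    using the mapped oxidant partial-pressure field.
    a and e_act (J/mol) are placeholder values for illustration."""
    R = 8.314  # universal gas constant, J/(mol K)
    return a * p_o2 * math.exp(-e_act / (R * temp_k))

# the rate rises steeply with temperature at fixed oxygen partial pressure
r_low = carbon_oxidation_rate(p_o2=0.2, temp_k=900.0)
r_high = carbon_oxidation_rate(p_o2=0.2, temp_k=1200.0)
```

The exponential temperature dependence is why the computed oxidation pattern is so sensitive to the temperature field: a few hundred kelvin can change the local rate by orders of magnitude, shifting attack from the surface to the interior.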
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow).
The emphasis of the test cases was validation of code, and assessment of performance, as well as demonstration of flexibility.
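The MUSCL-type interpolation used to obtain state variables at cell interfaces can be illustrated for a one-dimensional scalar conservation law. The sketch below is our own illustration under stated assumptions (a minmod limiter and second-order reconstruction), not code from the algorithm described above:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, or zero at an extremum."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Second-order MUSCL reconstruction for interior cells of a 1D array.
    Returns the state extrapolated to the right face (i+1/2) and to the
    left face (i-1/2) of each interior cell."""
    du_minus = u[1:-1] - u[:-2]    # backward difference per interior cell
    du_plus = u[2:] - u[1:-1]      # forward difference per interior cell
    slope = minmod(du_minus, du_plus)
    u_right_face = u[1:-1] + 0.5 * slope
    u_left_face = u[1:-1] - 0.5 * slope
    return u_right_face, u_left_face
```

Feeding these left/right interface states into an upwind or Roe-type numerical flux yields a second-order scheme; the minmod limiter drops the reconstruction to first order at extrema, which is what keeps the scheme monotonic.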
NASA Technical Reports Server (NTRS)
Powers, S. G.
1978-01-01
The YF-12 airplane was studied to determine the pressure characteristics associated with an aft-facing step in high Reynolds number flow for nominal Mach numbers of 2.20, 2.50, and 2.80. Base pressure coefficients were obtained for three step heights. The surface static pressures ahead of and behind the step were measured for the no-step condition and for each of the step heights. A boundary layer rake was used to determine the local boundary layer conditions. The Reynolds number based on the length of flow ahead of the step was approximately 10^8, and the ratios of momentum thickness to step height ranged from 0.2 to 1.0. Base pressure coefficients were compared with other available data at similar Mach numbers and at ratios of momentum thickness to step height near 1.0. In addition, the data were compared with base pressure coefficients calculated by a semiempirical prediction method. The base pressure ratios are shown to be a function of Reynolds number based on momentum thickness. Profiles of the surface pressures ahead of and behind the step and the local boundary layer conditions are also presented.
Combining local scaling and global methods to detect soil pore space
NASA Astrophysics Data System (ADS)
Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.
2017-04-01
The characterization of the spatial distribution of soil pore structures is essential for obtaining the parameters that enter several models of water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) in the CT soil images by applying a threshold. CT-scan images often show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, was presented. This method creates singularity maps to measure the GV concentration at each point. The C-A method was combined with the singularity map approach (Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods, such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly owing to its ability to amplify anomalous concentrations. However, it delineated many small pores that were incorrect. In this work, we present an improved version of the Singularity-CA method that avoids this problem by combining it with the global classical methods.
References: Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. Local 3D segmentation of soil pore space based on fractal properties using singularity maps. Geoderma, http://dx.doi.org/10.1016/j.geoderma.2016.11.029. Torre, Iván G., Juan C. Losada and A.M. Tarquis. Multiscaling properties of soil images. Biosystems Engineering, http://dx.doi.org/10.1016/j.biosystemseng.2016.11.006.
Post-modelling of images from a laser-induced wavy boiling front
NASA Astrophysics Data System (ADS)
Matti, R. S.; Kaplan, A. F. H.
2015-12-01
Processes like laser keyhole welding, remote fusion laser cutting or laser drilling are governed by a highly dynamic wavy boiling front that was recently recorded by ultra-high speed imaging. A new approach has now been established by post-modelling of the high speed images. Based on the image greyscale and on a cavity model the three-dimensional front topology is reconstructed. As a second step the Fresnel absorptivity modulation across the wavy front is calculated, combined with the local projection of the laser beam. Frequency polygons enable additional analysis of the statistical variations of the properties across the front. Trends like shadow formation and time dependency can be studied, locally and for the whole front. Despite strong topology modulation in space and time, for lasers with 1 μm wavelength and steel the absorptivity is bounded to a narrow range of 35-43%, owing to its Fresnel characteristics.
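The Fresnel absorptivity across the wavy front follows from the complex refractive index and the local angle of incidence. A minimal sketch; the index value n + ik = 3.6 + 5.0i below is an illustrative assumption for steel near 1 μm wavelength, not a value taken from the study:

```python
import numpy as np

def fresnel_absorptivity(theta, n_complex):
    """Absorptivity A = 1 - R for unpolarized light hitting a surface with
    complex refractive index n_complex at incidence angle theta (radians)."""
    cos_i = np.cos(theta)
    sin_t = np.sin(theta) / n_complex          # Snell's law, complex index
    cos_t = np.sqrt(1.0 - sin_t**2)
    # Fresnel amplitude coefficients for s- and p-polarization (n1 = 1)
    rs = (cos_i - n_complex * cos_t) / (cos_i + n_complex * cos_t)
    rp = (n_complex * cos_i - cos_t) / (n_complex * cos_i + cos_t)
    R = 0.5 * (np.abs(rs)**2 + np.abs(rp)**2)  # unpolarized reflectance
    return 1.0 - R
```

With this assumed index the normal-incidence absorptivity comes out near 0.31, of the same order as the narrow 35-43% range reported across the modulated front.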
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum deficiencies in terms of computational cost of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
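The space-filling-curve machinery can be pictured with a Morton (Z-order) key: interleaving the bits of integer cell coordinates gives a one-dimensional ordering that preserves spatial locality, so load balancing reduces to splitting the sorted key list into equally weighted chunks. A minimal 3D encoder (our illustration, not the authors' implementation):

```python
def part1by2(n):
    """Spread the bits of a 10-bit integer so they occupy every third bit."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton3d(x, y, z):
    """Interleave integer cell coordinates into a Z-order (Morton) key."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

Sorting cells by `morton3d` key and cutting the list into contiguous ranges assigns each process a spatially compact set of cells, which is the property that makes such curves attractive for parallel partitioning.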
First step in developing SWNT nano-sensor for C17.2 neural stem cells
NASA Astrophysics Data System (ADS)
Ignatova, Tetyana; Pirbhai, Massooma; Chandrasekar, Swetha; Rotkin, Slava V.; Jedlicka, Sabrina
Nanomaterials are widely used for biomedical applications and diagnostics, including as drug and gene delivery agents, imaging objects, and biosensors. As single-wall carbon nanotubes (SWNTs) possess a size similar to intracellular components, including fibrillar proteins and some organelles, the potential for use in a wide variety of intracellular applications is significant. However, implementation of an SWNT-based nano-sensor is difficult due to a lack of understanding of the SWNT-cell interaction on both the cellular and molecular levels. In this study, C17.2 neural stem cells were examined after uptake of SWNTs wrapped with ssDNA over a wide range of time periods, allowing for broad localization of SWNTs inside the cells. The localization data is being used to develop a predictive model of how, upon uptake of SWNTs, the cytoskeleton and other cellular structures of the adherent cells are perturbed.
NASA Astrophysics Data System (ADS)
Kerekes, Ryan A.; Gleason, Shaun S.; Trivedi, Niraj; Solecki, David J.
2010-03-01
Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most likely direction of the leading process. This direction is found at each step by examining second derivatives of fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to form an improved trace that more closely follows the approximate centerline of the leading process. We apply a similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We show that the automated traces closely approximate ground truth traces to within 1 or 2 pixels on average. Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
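The repeated directional stepping can be sketched as follows. For brevity this illustration scores candidate directions by raw intensity on the circle of constant radius, whereas the published method uses second derivatives of fluorescent intensity; all names and parameter values here are our own:

```python
import numpy as np

def trace_process(image, start, radius=3, n_steps=50, n_angles=36, threshold=10.0):
    """Trace a bright curvilinear process by repeatedly stepping toward the
    brightest sample on a circle of fixed radius around the current point.
    Terminates after n_steps or when intensity drops below threshold."""
    ys, xs = image.shape
    path = [np.asarray(start, dtype=float)]
    prev_angle = None
    for _ in range(n_steps):
        y, x = path[-1]
        angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        if prev_angle is not None:
            # Forbid reversing: keep headings within +/-90 deg of the last step
            angles = angles[np.cos(angles - prev_angle) > 0.0]
        cy = np.clip(np.round(y + radius * np.sin(angles)).astype(int), 0, ys - 1)
        cx = np.clip(np.round(x + radius * np.cos(angles)).astype(int), 0, xs - 1)
        vals = image[cy, cx]
        best = np.argmax(vals)
        if vals[best] < threshold:      # fluorescence has faded: stop tracing
            break
        prev_angle = angles[best]
        path.append(np.array([cy[best], cx[best]], dtype=float))
    return np.array(path)
```

On a synthetic image containing one bright horizontal line, the trace follows the line away from the starting point until the step budget or the image border is reached.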
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded-up robust features, to VHR multi-temporal images has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration achieved approximately pixel-level accuracy for most of the scenes, and the co-registration further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
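Extracting corresponding points between roughly aligned images can be pictured as local template matching. A minimal normalized cross-correlation search over a small shift window (our illustration, not the authors' registration-noise analysis):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_point(ref, tgt, y, x, half=8, search=5):
    """Find the integer shift (dy, dx) that best aligns the (2*half+1)^2
    patch of `ref` centered at (y, x) within `tgt`, scanning +/-`search`
    pixels. Assumes (y, x) is far enough from the image border."""
    patch = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = tgt[y + dy - half:y + dy + half + 1,
                       x + dx - half:x + dx + half + 1]
            score = ncc(patch, cand)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best
```

Running this at many well-distributed anchor points yields the dense CP set from which a non-rigid transformation (e.g. a thin-plate spline) can then be fitted.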
Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery
2013-01-01
Localization is the first step in osteoarthritis (OA) classification. Manual classification, however, is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system for medical doctors to classify the severity of knee OA. A method is proposed here to localize the joint space area and then classify OA severity into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3, and KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In this system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. The Gabor kernel, row sum graph, and moment methods were used to localize the joint space area of the knee. CLAHE is used in the preprocessing step, i.e., to normalize the varied intensities. The segmentation process was conducted using the Gabor kernel, template matching, row sum graph, and gray level center of mass method. GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were used for training and 258 data for testing. Experimental results showed the best performance using a Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4, with 5000 iterations, momentum value 0.5, and α0=0.6 for the classification process. The run gave classification accuracy rates of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3, and 88.9% for KL-Grade 4.
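The Gabor kernel at the heart of the localization step has the standard form of a Gaussian envelope times a sinusoidal carrier. A sketch using conventional parameter names; the mapping from the paper's α, Ψ, and N to wavelength, phase, and size, and the σ value, are our assumptions:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, psi, gamma, sigma):
    """Real Gabor filter kernel in the standard parameterization:
    Gaussian envelope (aspect ratio gamma, scale sigma) times a cosine
    carrier of the given wavelength, orientation theta, and phase psi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier
```

Convolving the radiograph with a small bank of such kernels (e.g. the two phases Ψ = 0 and Ψ = π/2 reported above) gives the oriented edge responses used for joint space localization.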
Khanlou, Nazilla; Wray, Ron
2014-01-01
A literature review of child and youth resilience focusing on: definitions and factors of resilience; relationships between resilience, mental health and social outcomes; evidence for resilience-promoting interventions; and implications for reducing health inequities. To conduct the review, the first two of the following steps were conducted iteratively and informed the third: 1) review of published peer-reviewed literature since 2000; 2) review of grey literature; and 3) quasi-realist synthesis of evidence. Evidence was examined from three perspectives: i) whether interventions can improve 'resilience' for vulnerable children and youth; ii) whether there is a differential effect among different populations; and iii) whether there is evidence that resilience interventions 'close the gap' on health and social outcome measures. Definitions of resilience vary, as do perspectives on it. We argue for a hybrid approach that recognizes the value of combining multiple theoretical perspectives and epistemologies (positivistic and constructivist/interpretive/critical) in studying resilience. Resilience is: a) a process (rather than a single event), b) a continuum (rather than a binary outcome), and c) likely a global concept with specific dimensions. Individual, family and social environmental factors influence resilience. A social determinants perspective on resilience and mental health is emphasized. Programs and interventions to promote resilience should be complementary to public health measures addressing the social determinants of health. A whole-community approach to resilience is suggested as a step toward closing the public health policy gap. Local initiatives that stimulate a local transformation process are needed.
Recognition of each child's or youth's intersections of gender, life stage, and family resources within the context of their identity markers fits with a localized approach to resilience promotion and, at the same time, requires recognition of the broader determinants of population health.
Kim, Howon; Lin, Shi-Zeng; Graf, Matthias J.; ...
2016-09-08
Local disordered nanostructures in an atomically thick metallic layer on a semiconducting substrate play significant and decisive roles in transport properties of two-dimensional (2D) conductive systems. We measured the electrical conductivity through a step of monoatomic height in a truly microscopic manner by using as a signal the superconducting pair correlation induced by the proximity effect. The transport property across a step of a one-monolayer Pb surface metallic phase, formed on a Si(111) substrate, was evaluated by inducing the pair correlation around the local defect and measuring its response, i.e., the reduced density of states at the Fermi energy using scanning tunneling microscopy. We found that the step resistance has a significant contribution to the total resistance on a nominally flat surface. Our study also revealed that steps in the 2D metallic layer terminate the propagation of the pair correlation. Furthermore, superconductivity is enhanced between the first surface step and the superconductor–normal-metal interface by reflectionless tunneling when the step is located within a coherence length.
Kim, Howon; Lin, Shi-Zeng; Graf, Matthias J; Miyata, Yoshinori; Nagai, Yuki; Kato, Takeo; Hasegawa, Yukio
2016-09-09
Local disordered nanostructures in an atomically thick metallic layer on a semiconducting substrate play significant and decisive roles in transport properties of two-dimensional (2D) conductive systems. We measured the electrical conductivity through a step of monoatomic height in a truly microscopic manner by using as a signal the superconducting pair correlation induced by the proximity effect. The transport property across a step of a one-monolayer Pb surface metallic phase, formed on a Si(111) substrate, was evaluated by inducing the pair correlation around the local defect and measuring its response, i.e., the reduced density of states at the Fermi energy using scanning tunneling microscopy. We found that the step resistance has a significant contribution to the total resistance on a nominally flat surface. Our study also revealed that steps in the 2D metallic layer terminate the propagation of the pair correlation. Superconductivity is enhanced between the first surface step and the superconductor-normal-metal interface by reflectionless tunneling when the step is located within a coherence length.
Boisvert, Maude; Bouchard-Lévesque, Véronique; Fernandes, Sandra
2014-01-01
Nuclear targeting of capsid proteins (VPs) is important for genome delivery and precedes assembly in the replication cycle of porcine parvovirus (PPV). Clusters of basic amino acids, corresponding to potential nuclear localization signals (NLS), were found only in the unique region of VP1 (VP1up, for VP1 unique part). Of the five identified basic regions (BR), three were important for nuclear localization of VP1up: BR1 was a classic Pat7 NLS, and the combination of BR4 and BR5 was a classic bipartite NLS. These NLS were essential for viral replication. VP2, the major capsid protein, lacked these NLS and contained no region with more than two basic amino acids in proximity. However, three regions of basic clusters were identified in the folded protein, assembled into a trimeric structure. Mutagenesis experiments showed that only one of these three regions was involved in VP2 transport to the nucleus. This structural NLS, termed the nuclear localization motif (NLM), is located inside the assembled capsid and thus can be used to transport trimers to the nucleus in late steps of infection but not for virions in initial infection steps. The two NLS of VP1up are located in the N-terminal part of the protein, externalized from the capsid during endosomal transit, exposing them for nuclear targeting during early steps of infection. Globally, the determinants of nuclear transport of structural proteins of PPV were different from those of closely related parvoviruses. IMPORTANCE Most DNA viruses use the nucleus for their replication cycle. Thus, structural proteins need to be targeted to this cellular compartment at two distinct steps of the infection: in early steps to deliver viral genomes to the nucleus and in late steps to assemble new viruses. Nuclear targeting of proteins depends on the recognition of a stretch of basic amino acids by cellular transport proteins.
This study reports the identification of two classic nuclear localization signals in the minor capsid protein (VP1) of porcine parvovirus. The major protein (VP2) nuclear localization was shown to depend on a complex structural motif. This motif can be used as a strategy by the virus to avoid transport of incorrectly folded proteins and to selectively import assembled trimers into the nucleus. Structural nuclear localization motifs can also be important for nuclear proteins without a classic basic amino acid stretch, including multimeric cellular proteins. PMID:25078698
Biedermann, Benjamin R.; Wieser, Wolfgang; Eigenwillig, Christoph M.; Palte, Gesa; Adler, Desmond C.; Srinivasan, Vivek J.; Fujimoto, James G.; Huber, Robert
2009-01-01
We demonstrate en face swept source optical coherence tomography (ss-OCT) without requiring a Fourier transformation step. The electronic optical coherence tomography (OCT) interference signal from a k-space linear Fourier domain mode-locked laser is mixed with an adjustable local oscillator, yielding the analytic reflectance signal from one image depth for each frequency sweep of the laser. Furthermore, a method for arbitrarily shaping the spectral intensity profile of the laser is presented, without requiring the step of numerical apodization. In combination, these two techniques enable sampling of the in-phase and quadrature signal with a slow analog-to-digital converter and allow for real-time display of en face projections even for highest axial scan rates. Image data generated with this technique is compared to en face images extracted from a three-dimensional OCT data set. This technique can allow for real-time visualization of arbitrarily oriented en face planes for the purpose of alignment, registration, or operator-guided survey scans while simultaneously maintaining the full capability of high-speed volumetric ss-OCT functionality. PMID:18978919
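The mixing step can be sketched numerically: multiplying the fringe signal by a complex local oscillator at the fringe frequency of the chosen depth and low-pass filtering leaves the in-phase/quadrature pair for that depth only, with no Fourier transform needed. All parameter values below are illustrative:

```python
import numpy as np

def depth_signal(fringes, f_lo, fs):
    """Complex (I + iQ) reflectance of the single depth whose fringe
    frequency equals f_lo: mix with a complex local oscillator, then
    average over the sweep (the average stands in for the analog
    low-pass filter that rejects the other depths)."""
    t = np.arange(len(fringes)) / fs
    lo = np.exp(-2j * np.pi * f_lo * t)    # complex local oscillator
    return 2.0 * np.mean(fringes * lo)     # factor 2 restores the amplitude
```

Sweeping `f_lo` selects the en face plane: each choice of oscillator frequency corresponds to one image depth, which is the adjustable-local-oscillator idea described above.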
Smartphone-Based Real-Time Indoor Location Tracking With 1-m Precision.
Liang, Po-Chou; Krause, Paul
2016-05-01
Monitoring the activities of daily living of the elderly at home is widely recognized as useful for the detection of new or deteriorating health conditions. However, the accuracy of existing indoor location tracking systems remains unsatisfactory. The aim of this study was, therefore, to develop a localization system that can identify a patient's real-time location in a home environment with maximum estimation error of 2 m at a 95% confidence level. A proof-of-concept system based on a sensor fusion approach was built with considerations for lower cost, reduced intrusiveness, and higher mobility, deployability, and portability. This involved the development of both a step detector using the accelerometer and compass of an iPhone 5, and a radio-based localization subsystem using a Kalman filter and received signal strength indication to tackle issues that had been identified as limiting accuracy. The results of our experiments were promising with an average estimation error of 0.47 m. We are confident that with the proposed future work, our design can be adapted to a home-like environment with a more robust localization solution.
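A scalar Kalman filter over RSSI readings, as one ingredient of such a sensor-fusion design, can be sketched as follows; the random-walk model and the noise variances are our assumptions, not values from the study:

```python
def kalman_smooth_rssi(measurements, q=0.05, r=4.0):
    """Scalar Kalman filter with a random-walk state model, where the
    state is the true RSSI (dBm). q is the process noise variance and
    r the measurement noise variance (both assumed values)."""
    x, p = measurements[0], 1.0        # initial state estimate and variance
    out = [x]
    for z in measurements[1:]:
        p = p + q                      # predict: random walk inflates variance
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return out
```

Because each update moves the estimate only part of the way toward the new reading, the filtered trace stays within the range of the observed RSSI values while suppressing the sample-to-sample jitter that corrupts distance estimates.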
A simple model for the dependence on local detonation speed of the product entropy
NASA Astrophysics Data System (ADS)
Hetherington, David C.; Whitworth, Nicholas J.
2012-03-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham Bdzil Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and s is realistically correlated with D.
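For reference, the JWL equation referred to above gives the pressure on the principal isentrope as a semi-empirical function of the relative volume $v$; $A$, $B$, $C$, $R_1$, $R_2$ and $\omega$ are fitted parameters, and the technique described amounts to making this parameter set a function of the local detonation speed $D$:

```latex
p_s(v) = A\,e^{-R_1 v} + B\,e^{-R_2 v} + C\,v^{-(1+\omega)}
```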
A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)
NASA Astrophysics Data System (ADS)
Hetherington, David
2011-06-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.
Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath
2010-03-01
This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, enabling interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normals. The final step estimates vessel centerlines using a ray-casting and vote-accumulation algorithm, enabling topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 dB down to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.
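The candidate-voxel hypothesis test can be pictured with a simplified stand-in: an Anscombe transform to stabilize the Poisson variance, followed by a z-test on the neighborhood mean. The published test also models locally planar geometry, which this sketch omits, and all thresholds here are our assumptions:

```python
import numpy as np

def candidate_voxels(volume, background_rate, z_thresh=3.0):
    """Flag voxels whose 3x3x3 neighborhood is improbably bright under a
    Poisson background model. The Anscombe transform makes Poisson counts
    approximately unit-variance Gaussian, so a simple z-test applies."""
    a = 2.0 * np.sqrt(volume + 3.0 / 8.0)          # Anscombe transform
    for axis in range(3):                           # separable 3x3x3 box mean
        a = (np.roll(a, 1, axis) + a + np.roll(a, -1, axis)) / 3.0
    a0 = 2.0 * np.sqrt(background_rate + 3.0 / 8.0)  # transformed background
    z = (a - a0) * np.sqrt(27.0)   # mean of 27 unit-variance values: std 1/sqrt(27)
    return z > z_thresh
```

On a synthetic Poisson volume containing one bright plane, such a test flags the laminar structure while rejecting nearly all background voxels, which is the behavior the region-growth step then builds on.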
A Statewide Quality Improvement Collaborative to Increase Breastfeeding Rates in Tennessee.
Ware, Julie L; Schetzina, Karen E; Morad, Anna; Barker, Brenda; Scott, Theresa A; Grubb, Peter H
2018-05-01
Tennessee has low breastfeeding rates and has identified opportunities for improvement to enhance maternity practices to support breastfeeding mothers. We sought a 10% relative increase in the aggregate Joint Commission measure of breastfeeding exclusivity at discharge (TJC PC-05) by focusing on high-reliability (≥90%) implementation of processes that promote breastfeeding in the delivery setting. A statewide, multidisciplinary development team reviewed evidence from the WHO-UNICEF "Ten Steps to Successful Breastfeeding" to create a consensus toolkit of process indicators aligned with the Ten Steps. Hospitals submitted monthly TJC PC-05 data for 6 months while studying local implementation of the Ten Steps to identify improvement opportunities, and for an additional 11 months while conducting tests of change to improve Ten Steps implementation using Plan-Do-Study-Act cycles, local process audits, and control charts. Data were aggregated at the state level and presented at 12 monthly webinars, 3 regional learning sessions, and 1 statewide meeting where teams shared their local data and implementation experiences. Thirteen hospitals accounting for 47% of live births in Tennessee submitted data on 31,183 mother-infant dyads from August 1, 2012, to December 31, 2013. Aggregate monthly mean PC-05 demonstrated "special cause" improvement increasing from 37.1% to 41.2%, an 11.1% relative increase. Five hospitals reported implementation of ≥5 of the Ten Steps and two hospitals reported ≥90% reliability on ≥5 of the Ten Steps using locally designed process audits. Using large-scale improvement methodology, a successful statewide collaborative led to >10% relative increase in breastfeeding exclusivity at discharge in participating Tennessee hospitals. Further opportunities for improvement in implementing breastfeeding supportive practices were identified.
The neglected nonlocal effects of deforestation
NASA Astrophysics Data System (ADS)
Winckler, Johannes; Reick, Christian; Pongratz, Julia
2017-04-01
Deforestation changes surface temperature locally via biogeophysical effects by changing the water, energy and momentum balance. Adding to these locally induced changes (local effects), deforestation at a given location can cause changes in temperature elsewhere (nonlocal effects). Most previous studies have not considered local and nonlocal effects separately, but investigated the total (local plus nonlocal) effects, for which global deforestation was found to cause a global mean cooling. Recent modeling and observational studies focused on the isolated local effects: The local effects are relevant for local living conditions, and they can be obtained from in-situ and satellite observations. Observational studies suggest that the local effects of potential deforestation cause a warming when averaged globally. This contrast between local warming and total cooling indicates that the nonlocal effects of deforestation are causing a cooling and thus counteract the local effects. It is still unclear how the nonlocal effects depend on the spatial scale of deforestation, and whether they still compensate the local warming in a more realistic spatial distribution of deforestation. To investigate this, we use a fully coupled climate model and separate local and nonlocal effects of deforestation in three steps: Starting from a forest world, we simulate deforestation in one out of four grid boxes using a regular spatial pattern and increase the number of deforestation grid boxes step-wise up to three out of four boxes in subsequent simulations. To compare these idealized spatial distributions of deforestation to a more realistic case, we separate local and nonlocal effects in a simulation where deforestation is applied in regions where it occurred historically. 
We find that the nonlocal effects scale nearly linearly with the number of deforested grid boxes, and the spatial distribution of the nonlocal effects is similar for the regular spatial distribution of deforestation and the more realistic pattern. Globally averaged, the deforestation-induced warming of the local effects is counteracted by the nonlocal effects, which are about three times as strong as the local effects (up to 0.1K local warming versus -0.3K nonlocal cooling). Thus, the nonlocal effects are more cooling than the local effects are warming, and this is valid not only for idealized simulations of large-scale deforestation, but also for a more realistic deforestation scenario. We conclude that the local effects of deforestation only yield an incomplete picture of the total climate effects by biogeophysical pathways. While the local effects capture the direct climatic response at the site of deforestation, the nonlocal effects have to be included if the biogeophysical effects of deforestation are considered for an implementation in climate policies.
Giesbertz, K J H
2015-08-07
A theorem for the invertibility of arbitrary response functions is presented under the following conditions: the time dependence of the potentials should be Laplace transformable and the initial state should be a ground state, though it might be degenerate. This theorem provides a rigorous foundation for all density-functional-like theories in the time-dependent linear response regime. Especially for time-dependent one-body reduced density matrix (1RDM) functional theory, this is an important step forward, since a solid foundation has currently been lacking. The theorem is equally valid for static response functions in the non-degenerate case, so it can be used to characterize the uniqueness of the potential in the ground state version of the corresponding density-functional-like theory. Such a classification of the uniqueness of the non-local potential in ground state 1RDM functional theory has been lacking for decades. With the aid of the invertibility theorem presented here, a complete classification of the non-uniqueness of the non-local potential in 1RDM functional theory can be given for the first time.
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
NASA Astrophysics Data System (ADS)
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
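The alternating local/global structure of LATIN can be illustrated on a deliberately tiny stand-in problem (a hedged sketch, not the Zaki-Moumni model): two springs in series under a prescribed total strain, one of which has a perfectly flat stress plateau mimicking a transformation plateau with zero hardening. The local stage enforces each constitutive law; the global stage enforces equilibrium and kinematic admissibility along linear search directions of stiffness E0:

```python
# Toy LATIN-style alternating scheme (illustrative only): two springs in
# series under prescribed total strain U.  Spring 1 is linear elastic;
# spring 2 has a flat stress plateau (zero hardening), the situation
# where tangent-based incremental schemes struggle.
E1, E2, sigma_y = 100.0, 100.0, 1.0   # hypothetical material constants
E0 = 100.0                            # search-direction stiffness
U = 0.03                              # prescribed total strain

def law1(eps):                        # linear elastic spring
    return E1 * eps

def law2(eps):                        # elastic up to a flat plateau
    return min(E2 * eps, sigma_y)

eps = [U / 2, U / 2]                  # admissible initial guess (sums to U)
for _ in range(60):
    # Local stage: enforce each constitutive law at the current strains.
    s_hat = [law1(eps[0]), law2(eps[1])]
    # Global stage: enforce a single common stress (equilibrium) along the
    # linear search directions sigma = s_hat_i + E0*(eps_i_new - eps_i);
    # summing the two updates keeps eps[0] + eps[1] == U automatically.
    sigma = sum(s_hat) / 2
    eps = [e + (sigma - s) / E0 for e, s in zip(eps, s_hat)]

print(round(sigma, 6), [round(e, 6) for e in eps])  # -> 1.0 [0.01, 0.02]
```

The iteration converges geometrically here even though the plateau has zero local tangent stiffness, because the search directions, not the material tangent, drive the updates.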
Study on the synthesis process of tetracaine hydrochloride
NASA Astrophysics Data System (ADS)
Li, Wenli; Zhao, Jie; Cui, Yujie
2017-05-01
Tetracaine is a long-acting ester-type local anesthetic, usually supplied as its hydrochloride salt. Tetracaine, one of the recognized potent clinical anesthetics, was first synthesized by Firsleb in 1928. This medicine has the advantages of stable physical and chemical properties, rapid onset, and long duration of action. Tetracaine is one of the main local anesthetics used for ophthalmic surface anesthesia, and it is also used for conduction block anesthesia, mucosal surface anesthesia and epidural anesthesia. So far, research has mainly addressed its clinical applications, and relatively little effort has been devoted to its synthetic technology. The existing production processes generally have high cost and low yield; in addition, the reaction times are long and the reaction conditions are harsh. In this paper, a new synthetic method was proposed for the synthesis of tetracaine hydrochloride. The reaction route has the advantages of few steps, high yield, short reaction time and mild reaction conditions. Cheap p-nitrobenzoic acid was selected as the raw material. It was esterified with ethanol and reacted with n-butyraldehyde (the reaction process includes nitro reduction, aldol condensation and hydrogenation reduction), and the resulting intermediate was transesterified with dimethylaminoethanol under basic conditions. Finally, the pH value was adjusted in the ethanol solvent. After four reaction steps, crude tetracaine hydrochloride was obtained.
NASA Astrophysics Data System (ADS)
Li, Wanjing; Schütze, Rainer; Böhler, Martin; Boochs, Frank; Marzani, Franck S.; Voisin, Yvon
2009-06-01
We present an approach to integrate a preprocessing step of region of interest (ROI) localization into 3-D scanners (laser or stereoscopic). The ultimate objective is to make the 3-D scanner intelligent enough to rapidly localize, during the preprocessing phase, the regions of the scene with high surface curvature, so that precise scanning is done only in these regions instead of in the whole scene. In this way, the scanning time can be largely reduced, and the results contain only pertinent data. To test its feasibility and efficiency, we simulated the preprocessing process on an active stereoscopic system composed of two cameras and a video projector. The ROI localization is done in an iterative way. First, the video projector projects a regular point pattern onto the scene, and then the pattern is modified iteratively according to the local surface curvature at each reconstructed 3-D point. Finally, the last pattern is used to determine the ROI. Our experiments showed that with this approach, the system is capable of localizing all types of objects, including small objects with small depth.
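The iterative curvature-driven refinement can be sketched in one dimension (a simplified, hypothetical stand-in for the projector's 2-D point pattern): sample a synthetic surface coarsely, estimate curvature from second differences, and insert new sample points only where curvature is high:

```python
# 1-D sketch of curvature-driven pattern refinement.  The scene is a
# synthetic flat surface with one bump; samples accumulate around the
# bump (the ROI) and stay sparse elsewhere.
import numpy as np

def surface(x):                      # synthetic scene: flat + one bump
    return np.exp(-((x - 0.5) / 0.05) ** 2)

x = np.linspace(0.0, 1.0, 9)         # initial regular "projector pattern"
for _ in range(4):                   # iterative pattern refinement
    z = surface(x)
    # crude second-difference proxy for curvature (ignores the
    # non-uniform spacing; adequate for a sketch)
    curv = np.abs(np.diff(z, 2))
    refine = np.where(curv > 0.01)[0] + 1     # interior indices to refine
    new_x = [(x[i - 1] + x[i]) / 2 for i in refine] + \
            [(x[i] + x[i + 1]) / 2 for i in refine]
    x = np.unique(np.concatenate([x, new_x]))

# samples concentrate in the region of interest around the bump at x = 0.5
roi = x[(x > 0.35) & (x < 0.65)]
print(len(roi), len(x) - len(roi))   # many samples inside the ROI, few outside
```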
Strain mapping in TEM using precession electron diffraction
Taheri, Mitra Lenore; Leff, Asher Calvin
2017-02-14
A sample material is scanned with a transmission electron microscope (TEM) over multiple steps having a predetermined size at a predetermined angle. Each scan at a predetermined step and angle is compared to a template, wherein the template is generated from parameters of the material and the scanning. The data is then analyzed using local mis-orientation mapping and/or Nye's tensor analysis to provide information about local strain states.
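A minimal numeric sketch of local misorientation mapping (a simplified kernel-style misorientation on a hypothetical scalar orientation map, not Nye's tensor analysis): each pixel is scored by its mean absolute orientation difference with its four neighbours, which highlights boundaries and strained regions.

```python
import numpy as np

def local_misorientation(theta):
    # mean absolute orientation difference with the 4-connected
    # neighbours (edge pixels use replicated borders)
    t = np.pad(theta, 1, mode="edge")
    c = t[1:-1, 1:-1]
    diffs = [np.abs(c - t[:-2, 1:-1]), np.abs(c - t[2:, 1:-1]),
             np.abs(c - t[1:-1, :-2]), np.abs(c - t[1:-1, 2:])]
    return np.mean(diffs, axis=0)

# hypothetical orientation map (degrees): two grains meeting at a
# vertical boundary with a 5-degree misorientation
theta = np.zeros((4, 6))
theta[:, 3:] = 5.0
m = local_misorientation(theta)
print(m[1])   # peaks at the grain boundary, zero inside the grains
```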
Time-Extended Policies in Multi-Agent Reinforcement Learning
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian K.
2004-01-01
Reinforcement learning methods perform well in many domains where a single agent needs to take a sequence of actions to perform a task. These methods use sequences of single-time-step rewards to create a policy that tries to maximize a time-extended utility, which is a (possibly discounted) sum of these rewards. In this paper we build on our previous work showing how these methods can be extended to a multi-agent environment where each agent creates its own policy that works towards maximizing a time-extended global utility over all agents' actions. We show improved methods for creating time-extended utilities for the agents that are both "aligned" with the global utility and "learnable." We then show how to create single-time-step rewards while avoiding the pitfall of rewards that are aligned with the global reward yet lead to utilities not aligned with the global utility. Finally, we apply these reward functions to the multi-agent Gridworld problem. We explicitly quantify a utility's learnability and alignment, and show that reinforcement learning agents using the prescribed reward functions successfully trade off learnability and alignment. As a result they outperform both global (e.g., team game) and local (e.g., "perfectly learnable") reinforcement learning solutions by as much as an order of magnitude.
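Reward designs that are both aligned and learnable are closely related to difference rewards, D_i = G(z) - G(z_-i), which compare the global utility with a counterfactual world in which agent i is absent. A small congestion-style sketch (the slot-value curve and numbers are hypothetical, not the paper's Gridworld) shows the idea:

```python
import math
from collections import Counter

def slot_value(n):
    # hypothetical congestion curve: value peaks at moderate attendance
    return n * math.exp(-n / 3)

def global_utility(actions):
    counts = Counter(actions)
    return sum(slot_value(n) for n in counts.values())

def difference_reward(actions, i):
    # D_i = G(z) - G(z_-i): what agent i adds over a counterfactual
    # world without it.  Aligned with G by construction, and more
    # "learnable" because the other agents' contributions cancel out.
    without_i = actions[:i] + actions[i + 1:]
    return global_utility(actions) - global_utility(without_i)

actions = [0, 0, 0, 1]        # three agents crowd slot 0, one is in slot 1
for i in range(len(actions)):
    # agents in the crowded slot earn far less than the lone agent
    print(i, round(difference_reward(actions, i), 3))
```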
Redwing: A MOOSE application for coupling MPACT and BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick N. Gleicher; Michael Rose; Tom Downar
Fuel performance and whole core neutron transport programs are often used to analyze fuel behavior as it is depleted in a reactor. For fuel performance programs, internal models provide the local intra-pin power density, fast neutron flux, burnup, and fission rate density, which are needed for a fuel performance analysis. The fuel performance internal models have a number of limitations. These include effects on the intra-pin power distribution by nearby assembly elements, such as water channels and control rods, and the further limitation of applicability to a specified fuel type such as low enriched UO2. In addition, whole core neutron transport codes need an accurate intra-pin temperature distribution in order to calculate neutron cross sections. Fuel performance simulations are able to model the intra-pin fuel displacement as the fuel expands and densifies. These displacements must be accurately modeled in order to capture the eventual mechanical contact of the fuel and the clad; the correct radial gap width is needed for an accurate calculation of the temperature distribution of the fuel rod. Redwing is a MOOSE-based application that enables coupling between MPACT and BISON for transport and fuel performance coupling. MPACT is a 3D neutron transport and reactor core simulator based on the method of characteristics (MOC). The development of MPACT began at the University of Michigan (UM) and is now under the joint development of ORNL and UM as part of the DOE CASL Simulation Hub. MPACT is able to model the effects of local assembly elements and is able to calculate intra-pin quantities such as the local power density on a volumetric mesh for any fuel type. BISON is a fuel performance application of the Multiphysics Object Oriented Simulation Environment (MOOSE), which is under development at Idaho National Laboratory.
BISON is able to solve the nonlinearly coupled mechanical deformation and heat transfer finite element equations that model a fuel element as it is depleted in a nuclear reactor. Redwing couples BISON and MPACT in a single application. Redwing maps and transfers the individual intra-pin quantities such as fission rate density, power density, and fast neutron flux from the MPACT volumetric mesh to the individual BISON finite element meshes. For two-way coupling, Redwing maps and transfers the individual pin temperature field and axially dependent coolant densities from the BISON mesh to the MPACT volumetric mesh. Details of the mapping are given. Redwing advances the simulation with the MPACT solution for each depletion time step and then advances the multiple BISON simulations for fuel performance calculations. Sub-cycle advancement can be applied to the individual BISON simulations and allows multiple time steps to be applied to the fuel performance simulations. Currently, only loose coupling, where data from a previous time step is applied to the current time step, is performed.
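The loose-coupling time loop described above can be sketched with scalar stand-ins for the two codes (toy models only, not MPACT or BISON physics): the transport solve uses the lagged temperature, and the fuel-performance solve is sub-cycled within each depletion step:

```python
# Minimal sketch of a loosely coupled, sub-cycled time loop.
# Both physics models below are hypothetical scalar toys.

def neutronics_power(T):
    # toy Doppler-like feedback: power drops as fuel temperature rises
    return 1000.0 / (1.0 + 0.001 * (T - 600.0))

def fuel_step(T, power, dt):
    # toy lumped heat balance relaxing toward an equilibrium temperature
    T_eq = 600.0 + 0.5 * power
    return T + (T_eq - T) * min(1.0, dt / 10.0)

T, dt, subcycles = 600.0, 5.0, 5
for step in range(20):
    power = neutronics_power(T)      # "transport" solve with lagged T
    for _ in range(subcycles):       # sub-cycled "fuel performance" steps
        T = fuel_step(T, power, dt / subcycles)

# the lagged exchange still settles onto the coupled fixed point
print(round(T, 1), round(power, 1))
```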
Choi, Bernard Ck; Decou, Mary Lou; Rasali, Drona; Martens, Patricia J; Mancuso, Michelina; Plotnikoff, Ronald C; Neudorf, Cory; Thanos, Joanne; Svenson, Lawrence W; Denny, Keith; Orpana, Heather; Stewart, Paula; King, Michael; Griffith, Jane; Erickson, Tannis; van Dorp, Renate; White, Deanna; Ali, Amira
2014-01-22
National health surveys are sometimes used to provide estimates on risk factors for policy and program development at the regional/local level. However, as regional/local needs may differ from national ones, an important question is how to also enhance capacity for risk factor surveillance regionally/locally. A Think Tank Forum was convened in Canada to discuss the needs, characteristics, coordination, tools and next steps to build capacity for regional/local risk factor surveillance. A series of follow-up activities to review the relevant issues pertaining to needs, characteristics and capacity of risk factor surveillance was conducted. Results confirmed the need for a regional/local risk factor surveillance system that is flexible, timely, of good quality, supported by a communication plan, and responsive to local needs. It is important to conduct an environmental scan and a gap analysis, to develop a common vision, to build central and local coordination and leadership, to build on existing tools and resources, and to use innovation. Findings of the Think Tank Forum are important for building surveillance capacity at the local/county level, both in Canada and globally. This paper provides a follow-up review of the findings based on progress over the last 4 years.
Qi, Jin; Yang, Zhiyong
2014-01-01
Real-time human activity recognition is essential for human-robot interaction and for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
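The dictionary/projection/classification pipeline can be sketched on synthetic data (a hedged simplification: PCA atoms stand in for the ICA-learned dictionary, and a best-reconstruction rule stands in for the histogram-plus-SVM classifier):

```python
import numpy as np

rng = np.random.default_rng(0)

def clips(freq, n):
    # synthetic "space-time volumes": windows of a single joint
    # trajectory oscillating at a class-specific frequency, plus noise
    t = np.linspace(0.0, 1.0, 32)
    return np.stack([np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                     + 0.1 * rng.standard_normal(32) for _ in range(n)])

def learn_dictionary(X, k=4):
    # top right singular vectors as dictionary atoms: a PCA stand-in
    # for the ICA-learned sparse codes of the paper
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

train_a, train_b = clips(2.0, 40), clips(6.0, 40)
D_a, D_b = learn_dictionary(train_a), learn_dictionary(train_b)

def classify(clip):
    # project onto each class dictionary; pick the larger coefficient
    # energy (a histogram of coefficients + SVM would replace this)
    return "a" if np.sum((D_a @ clip) ** 2) > np.sum((D_b @ clip) ** 2) else "b"

test_a, test_b = clips(2.0, 10), clips(6.0, 10)
correct = sum(classify(c) == "a" for c in test_a) + \
          sum(classify(c) == "b" for c in test_b)
print(correct, "of 20 test clips classified correctly")
```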
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Forward-facing steps induced transition in a subsonic boundary layer
NASA Astrophysics Data System (ADS)
Zh, Hui; Fu, Song
2017-10-01
A forward-facing step (FFS) immersed in a subsonic boundary layer is studied through a high-order flux reconstruction (FR) method to highlight the flow transition induced by the step. The step height is a third of the local boundary-layer thickness, and the Reynolds number based on the step height is 720. Inlet disturbances are introduced, giving rise to streamwise vortices upstream of the step. These small-scale streamwise structures are observed to interact with the step, and hairpin vortices develop quickly downstream of it, leading to flow transition in the boundary layer.
Localized transfection on arrays of magnetic beads coated with PCR products.
Isalan, Mark; Santori, Maria Isabel; Gonzalez, Cayetano; Serrano, Luis
2005-02-01
High-throughput gene analysis would benefit from new approaches for delivering DNA or RNA into cells. Here we describe a simple system that allows any molecular biology laboratory to carry out multiple, parallel cell transfections on microscope coverslip arrays. By using magnetically defined positions and PCR product-coated paramagnetic beads, we achieved transfection in a variety of cell lines. Beads may be added to the cells at any time, allowing both spatial and temporal control of transfection. Because the beads may be coated with more than one gene construct, the method can be used to achieve cotransfection within single cells. Furthermore, PCR-generated mutants may be conveniently screened, bypassing cloning and plasmid purification steps. We illustrated the applicability of the method by screening combinatorial peptide libraries, fused to GFP, to identify previously unknown cellular localization motifs. In this way, we identified several localizing peptides, including structured localization signals based around the scaffold of a single C2H2 zinc finger.
Variable-mesh method of solving differential equations
NASA Technical Reports Server (NTRS)
Van Wyk, R.
1969-01-01
Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
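A generic variable-step predictor-corrector illustrates the idea (an AB2 predictor with a trapezoidal corrector; keeping the constant AB2 coefficients while the step varies is a simplification, since a true variable-mesh method recomputes them for unequal steps, which is exactly the refinement such methods address). The predictor-corrector difference estimates the local error and drives the choice of the next step size:

```python
import math

def f(t, y):
    return -y                      # test problem y' = -y, y(0) = 1

t, y, h = 0.0, 1.0, 0.05
f_prev = f(t, y)
t, y = t + h, y + h * f_prev       # one Euler step to start the 2-step method
tol = 1e-5
while t < 2.0:
    fn = f(t, y)
    y_pred = y + h * (1.5 * fn - 0.5 * f_prev)        # AB2 predictor
    y_corr = y + h * 0.5 * (fn + f(t + h, y_pred))    # trapezoidal corrector
    err = abs(y_corr - y_pred)     # local error estimate
    t, y, f_prev = t + h, y_corr, fn
    # step-size selection from the local error estimate (clamped)
    h = min(0.2, max(0.01, h * (tol / max(err, 1e-12)) ** 0.5))

print(round(y, 4), round(math.exp(-t), 4))  # numerical vs exact solution
```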
Kogi, Kazutaka
2006-01-01
Participatory programmes for occupational risk reduction are gaining importance particularly in small workplaces in both industrially developing and developed countries. To discuss the types of effective support, participatory steps commonly seen in our "work improvement-Asia" network are reviewed. The review covered training programmes for small enterprises, farmers, home workers and trade union members. Participatory steps commonly focusing on low-cost good practices locally achieved have led to concrete improvements in multiple technical areas including materials handling, workstation ergonomics, physical environment and work organization. These steps take advantage of positive features of small workplaces in two distinct ways. First, local key persons are ready to accept local good practices conveyed through personal, informal approaches. Second, workers and farmers are capable of understanding technical problems affecting routine work and taking flexible actions leading to solving them. This process is facilitated by the use of locally adjusted training tools such as local good examples, action checklists and group work methods. It is suggested that participatory occupational health programmes can work in small workplaces when they utilize low-cost good practices in a flexible manner. Networking of these positive experiences is essential.
NASA Astrophysics Data System (ADS)
Farstad, Jan Magnus Granheim; Netland, Øyvind; Welo, Torgeir
2017-10-01
This paper presents the results from a second series of experiments made to study local plastic deformations of a complex, hollow aluminium extrusion formed in roll bending. The first experimental series, utilizing a single-step roll bending sequence, was presented at the ESAFORM 2016 conference by Farstad et al. In this recent experimental series, the same aluminium extrusion was formed in incremental steps. The objective was to investigate local distortions of the deformed cross section as a result of the different numbers of steps employed to arrive at the final global shape of the extrusion. Moreover, the results of the two experimental series are compared, focusing on identifying differences in both the desired and the undesired deformations taking place as a result of bending and contact stresses. The profiles formed through multiple passes had less undesirable local distortion of the cross section than the profiles that were formed in a single pass. However, the springback effect was more pronounced, meaning that the released radii of the profiles were larger.
Mechanism of Inhibition of the V-Type Molecular Motor by Tributyltin Chloride
Takeda, Mizuho; Suno-Ikeda, Chiyo; Shimabukuro, Katsuya; Yoshida, Masasuke; Yokoyama, Ken
2009-01-01
Tributyltin chloride (TBT-Cl) is an endocrine disruptor found in many animal species, and it is also known to be an inhibitor of the V-ATPases that are emerging as potential targets in the treatment of diseases such as osteoporosis and cancer. We demonstrated by using biochemical and single-molecule imaging techniques that TBT-Cl arrests an elementary step of rotary catalysis of the V1 motor domain. In the presence of TBT-Cl, the consecutive rotation of V1 paused for a long duration (∼0.5 s), even at saturated ATP concentrations, and the pausing positions were localized at 120° intervals. Analysis of both the pausing time and moving time revealed that TBT-Cl has little effect on the binding affinity for ATP, but, rather, it arrests the catalytic event(s). This is the first report to demonstrate that an inhibitor arrests an elementary step of rotary catalysis of a V-type ATP-driven rotary motor. PMID:19186155
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
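A toy detector conveys the idea (illustrative only, not the actual PDC 8.0 algorithm): compare medians in windows before and after each sample, flag the largest jump if it clears a noise-scaled threshold, and subtract the estimated offset:

```python
import numpy as np

def detect_and_correct_step(flux, window=20, threshold=5.0):
    # score each point by the median level change across it
    n = len(flux)
    scores = np.zeros(n)
    for i in range(window, n - window):
        scores[i] = np.median(flux[i:i + window]) - np.median(flux[i - window:i])
    noise = np.median(np.abs(scores)) + 1e-12
    i0 = int(np.argmax(np.abs(scores)))
    if abs(scores[i0]) < threshold * noise:
        return flux, None                 # no credible step found
    corrected = flux.copy()
    corrected[i0:] -= scores[i0]          # remove the estimated offset
    return corrected, i0

rng = np.random.default_rng(1)
flux = 1.0 + 0.0005 * rng.standard_normal(500)
flux[300:] -= 0.005                       # inject a -0.5% sensitivity drop
corrected, where = detect_and_correct_step(flux)
print(where, round(float(corrected.mean()), 4))
```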
Accurate identification of microseismic P- and S-phase arrivals using the multi-step AIC algorithm
NASA Astrophysics Data System (ADS)
Zhu, Mengbo; Wang, Liguan; Liu, Xiaoming; Zhao, Jiaxuan; Peng, Ping'an
2018-03-01
Identification of P- and S-phase arrivals is the primary task in microseismic monitoring. In this study, a new multi-step AIC algorithm is proposed. This algorithm consists of P- and S-phase arrival pickers (P-picker and S-picker). The P-picker contains three steps: in step 1, a preliminary P-phase arrival window is determined by the waveform peak. Then a preliminary P-pick is identified using the AIC algorithm. Finally, the P-phase arrival window is narrowed based on the above P-pick, so that the P-phase arrival can be identified accurately by applying the AIC algorithm again. The S-picker contains five steps: in step 1, a narrow S-phase arrival window is determined based on the P-pick and the AIC curve of the amplitude-biquadratic time series. In step 2, the S-picker automatically judges whether the S-phase arrival is clear enough to identify. In steps 3 and 4, the AIC extreme points are extracted, and the relationship between the local minima and the S-phase arrival is investigated. In step 5, the S-phase arrival is picked based on the maximum probability criterion. To evaluate the proposed algorithm, a P- and S-pick classification criterion is also established based on a source location numerical simulation. The field data tests show a considerable improvement of the multi-step AIC algorithm in comparison with manual picks and the original AIC algorithm. Furthermore, the technique performs consistently regardless of SNR. Even in the poor-quality signal group, in which the SNRs are below 5, the effective picking rates (corresponding location error <15 m) of P- and S-phase arrivals are still up to 80.9% and 76.4%, respectively.
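The variance-based AIC picker applied in each step is classical: AIC(k) = k·log var(x[1..k]) + (n-k-1)·log var(x[k+1..n]), minimized over k. A sketch of the pick-then-narrow-then-repick idea on a synthetic trace (the window half-width and the synthetic arrival are assumptions, not the paper's parameters):

```python
import numpy as np

def aic_pick(x):
    # AIC(k) = k*log(var(x[:k])) + (n-k-1)*log(var(x[k:]))
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        aic[k] = (k * np.log(np.var(x[:k])) +
                  (n - k - 1) * np.log(np.var(x[k:])))
    return int(np.argmin(aic))

rng = np.random.default_rng(2)
n, onset = 600, 400
trace = 0.1 * rng.standard_normal(n)           # background noise
t = np.arange(n - onset)
trace[onset:] += np.sin(0.3 * t) * np.exp(-t / 150.0)   # synthetic arrival

rough = aic_pick(trace)                        # pass 1: whole trace
lo, hi = max(rough - 50, 0), min(rough + 50, n)
refined = lo + aic_pick(trace[lo:hi])          # pass 2: narrowed window
print(rough, refined)                          # both near the true onset, 400
```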
Angular measurements of the dynein ring reveal a stepping mechanism dependent on a flexible stalk
Lippert, Lisa G.; Dadosh, Tali; Hadden, Jodi A.; Karnawat, Vishakha; Diroll, Benjamin T.; Murray, Christopher B.; Holzbaur, Erika L. F.; Schulten, Klaus; Reck-Peterson, Samara L.; Goldman, Yale E.
2017-01-01
The force-generating mechanism of dynein differs from the force-generating mechanisms of other cytoskeletal motors. To examine the structural dynamics of dynein’s stepping mechanism in real time, we used polarized total internal reflection fluorescence microscopy with nanometer accuracy localization to track the orientation and position of single motors. By measuring the polarized emission of individual quantum nanorods coupled to the dynein ring, we determined the angular position of the ring and found that it rotates relative to the microtubule (MT) while walking. Surprisingly, the observed rotations were small, averaging only 8.3°, and were only weakly correlated with steps. Measurements at two independent labeling positions on opposite sides of the ring showed similar small rotations. Our results are inconsistent with a classic power-stroke mechanism, and instead support a flexible stalk model in which interhead strain rotates the rings through bending and hinging of the stalk. Mechanical compliances of the stalk and hinge determined based on a 3.3-μs molecular dynamics simulation account for the degree of ring rotation observed experimentally. Together, these observations demonstrate that the stepping mechanism of dynein is fundamentally different from the stepping mechanisms of other well-studied MT motors, because it is characterized by constant small-scale fluctuations of a large but flexible structure fully consistent with the variable stepping pattern observed as dynein moves along the MT. PMID:28533393
Multi-Species Fluxes for the Parallel Quiet Direct Simulation (QDS) Method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Lim, C.-W.; Jermy, M. C.; Krumdieck, S. P.; Smith, M. R.; Lin, Y.-J.; Wu, J.-S.
2011-05-01
Fluxes of multiple species are implemented in the Quiet Direct Simulation (QDS) scheme for gas flows. Each molecular species streams independently, and all species are brought to local equilibrium at the end of each time step. The multi-species scheme is compared to a DSMC simulation on a test case of a Mach 20 flow of a xenon/helium mixture over a forward-facing step. Depletion of the heavier species in the bow shock and the near-wall layer is seen. The multi-species QDS code is then used to model the flow in a pulsed-pressure chemical vapour deposition reactor set up for carbon film deposition. The injected gas is a mixture of methane and hydrogen. The temporal development of the spatial distribution of methane over the substrate is tracked.
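The end-of-step equilibration implied above amounts to mixing momentum and thermal energy so that all species in a cell share one bulk velocity and temperature (a monatomic ideal-gas sketch with made-up cell values; not the QDS flux formulas themselves):

```python
kB = 1.380649e-23   # Boltzmann constant (J/K)

def equilibrate(species):
    # species: dicts with mass m (kg), number density n (1/m^3),
    # bulk velocity v (m/s), temperature T (K)
    rho = sum(s["m"] * s["n"] for s in species)
    # momentum-conserving mixture velocity
    v_mix = sum(s["m"] * s["n"] * s["v"] for s in species) / rho
    # total energy: bulk kinetic + thermal (3/2 n kB T per species)
    E = sum(0.5 * s["m"] * s["n"] * s["v"] ** 2 + 1.5 * s["n"] * kB * s["T"]
            for s in species)
    n_tot = sum(s["n"] for s in species)
    # whatever kinetic energy is lost to the common velocity reappears as heat
    T_mix = (E - 0.5 * rho * v_mix ** 2) / (1.5 * n_tot * kB)
    for s in species:
        s["v"], s["T"] = v_mix, T_mix
    return species

# illustrative cell values for a xenon/helium mixture
xe = {"m": 2.18e-25, "n": 1e20, "v": 100.0, "T": 300.0}
he = {"m": 6.64e-27, "n": 1e20, "v": 500.0, "T": 300.0}
out = equilibrate([xe, he])
print(round(out[0]["v"], 1), round(out[0]["T"], 1))  # -> 111.8 312.4
```

Momentum and total energy are conserved by construction; the velocity difference between the species shows up as a slight temperature rise.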
Eleven essential steps to purchasing or selling a medical practice.
Barrett, William
2014-01-01
Based on our experience in representing more than 100 doctors and medical specialists in practice sales and acquisitions, we have identified 11 key considerations important to a deal. There are several issues to consider while going through the process of buying or selling a practice, including the use of a "letter of intent" as a first step rather than drafting a contract, securing a lease, and verifying that the property is not in violation of local zoning requirements. There are also considerations with regard to the patients, which range from how the accounts receivable at the time of the closing will be handled to who is responsible for the handling of continued treatment in an ongoing case after a deal is finalized. This article details these considerations and more.
Xue, Nan; Khodaparast, Sepideh; Zhu, Lailai; Nunes, Janine K; Kim, Hyoungsoo; Stone, Howard A
2017-12-12
Inducing thermal gradients in fluid systems with initial, well-defined density gradients results in the formation of distinct layered patterns, such as those observed in the ocean due to double-diffusive convection. In contrast, layered composite fluids are sometimes observed in confined systems of rather chaotic initial states, for example, lattes formed by pouring espresso into a glass of warm milk. Here, we report controlled experiments injecting a fluid into a miscible phase and show that, above a critical injection velocity, layering emerges over a time scale of minutes. We identify critical conditions to produce the layering, and relate the results quantitatively to double-diffusive convection. Based on this understanding, we show how to employ this single-step process to produce layered structures in soft materials, where the local elastic properties vary step-wise along the length of the material.
NASA Astrophysics Data System (ADS)
Masaitis, A.
2014-12-01
Every year, around the world, global environmental change affects the human habitat. This effect is amplified by mining operations, creating new challenges in the relationship between mines and local communities. The purpose of this project is to develop a stakeholder engagement evaluation plan, currently under development at the University of Nevada, Reno, for the Emigrant mining project located in central Nevada, USA, which belongs to the Newmont Mining Corporation, one of the leading gold producers worldwide. The project aims to create an open dialogue between the Newmont mining company and all interested parties that experience social or environmental impacts from the Emigrant mine. Identifying the stakeholders is the first, and one of the most difficult, steps in developing a mine's social responsibility program. The stakeholder engagement evaluation plan must be based on the timing and available resources of the mining company, an understanding of the goals of the engagement, and an analysis of the possible risks of engagement. In summary, the stakeholder engagement evaluation plan includes: first, determining the stakeholder list, which must include any group interested in or affected by the mine project, for example state and local government representatives, members of local communities, business partners, environmental NGOs, indigenous people, and academic groups; contact details and availability for communication are critical for stakeholder engagement. Next, the characteristics of all these parties are analyzed to determine their level of interest in, and level of influence on, the project. The following step is the development of a stakeholder matrix and map, where all this information is brought together. After that, the methods for stakeholder engagement must be chosen.
The methods usually depend on the goals of the engagement (creating lines of dialogue, collecting data, identifying local issues and concerns, or establishing a negotiation process) and on the available resources, such as time, people, and budget. It is very important here to recognize the possible risks of engagement and to establish the key messages for stakeholders. Finally, the engagement plan should be evaluated and can then be implemented in the development of new social responsibility practice.
NASA Astrophysics Data System (ADS)
Xi, Wen; Song, Xiaoqing; Hu, Shi; Chen, Zheng
2017-11-01
In this work, the phase field crystal (PFC) method is used to study localized solid-state amorphization (SSA) and its dynamic transformation process in polycrystalline materials under uniaxial tensile deformation. The impacts of several factors, including strain rate, temperature and grain size, are analyzed. Kinetically, an ultra-high strain rate causes the lattice to be severely distorted and the grains to gradually collapse, so the dislocation density rises remarkably and localized SSA occurs. Thermodynamically, as high temperature increases the activation energy, atoms become mobile and tend to leave their original positions, which induces atomic rearrangement. Furthermore, a small grain size increases the grain-boundary fraction and the interface free energy of the system, so the Helmholtz free energy increases. The dislocations and the Helmholtz free energy act as the seed and the driving force, respectively, for localized SSA. The critical diffusion-time step and the percentage of amorphous regions are also calculated. This work shows the PFC method to be an effective means of studying localized SSA under uniaxial tensile deformation.
An Integrated Wireless Wearable Sensor System for Posture Recognition and Indoor Localization.
Huang, Jian; Yu, Xiaoqiang; Wang, Yuan; Xiao, Xiling
2016-10-31
In order to provide better monitoring of the elderly or of patients, we developed an integrated wireless wearable sensor system that realizes posture recognition and indoor localization in real time. Five custom sensor nodes fixed on the lower limbs, together with a standard Kalman filter, are used to acquire basic attitude data. After the attitude angles of five body segments (two thighs, two shanks and the waist) are obtained, the pitch angles of the left thigh and waist are used for posture recognition. Based on all these attitude angles, we can also calculate the coordinates of six lower-limb joints (two hip joints, two knee joints and two ankle joints). Then, a novel relative localization algorithm based on step length is proposed to realize indoor localization of the user. Several sparsely distributed active Radio Frequency Identification (RFID) tags are used to correct the accumulated error of the relative localization algorithm, and a set-membership filter is applied for data fusion. The experimental results verify the effectiveness of the proposed algorithms.
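The step-length-based relative localization described above can be illustrated with a minimal dead-reckoning sketch. The function name and the use of a single heading angle per step are illustrative assumptions; the paper derives step length and heading from the measured segment attitude angles, and corrects drift with RFID tags.

```python
import math

def dead_reckon(start, steps):
    """Accumulate 2D position from (step_length, heading) pairs.

    A hedged sketch of step-length-based relative localization:
    each detected step advances the position by its estimated
    length along the current heading (radians).
    """
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Four 0.7 m steps heading east, then two heading north
track = dead_reckon((0.0, 0.0), [(0.7, 0.0)] * 4 + [(0.7, math.pi / 2)] * 2)
```

Without external correction the position error accumulates with every step, which is exactly why the paper fuses this estimate with sparsely placed RFID tags.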
Xi, Wen; Song, Xiaoqing; Hu, Shi; Chen, Zheng
2017-11-29
In this work, the phase field crystal (PFC) method is used to study localized solid-state amorphization (SSA) and its dynamic transformation process in polycrystalline materials under uniaxial tensile deformation. The impacts of several factors, including strain rate, temperature and grain size, are analyzed. Kinetically, an ultra-high strain rate causes the lattice to be severely distorted and the grains to gradually collapse, so the dislocation density rises remarkably and localized SSA occurs. Thermodynamically, as high temperature increases the activation energy, atoms become mobile and tend to leave their original positions, which induces atomic rearrangement. Furthermore, a small grain size increases the grain-boundary fraction and the interface free energy of the system, so the Helmholtz free energy increases. The dislocations and the Helmholtz free energy act as the seed and the driving force, respectively, for localized SSA. The critical diffusion-time step and the percentage of amorphous regions are also calculated. This work shows the PFC method to be an effective means of studying localized SSA under uniaxial tensile deformation.
An Integrated Wireless Wearable Sensor System for Posture Recognition and Indoor Localization
Huang, Jian; Yu, Xiaoqiang; Wang, Yuan; Xiao, Xiling
2016-01-01
In order to provide better monitoring of the elderly or of patients, we developed an integrated wireless wearable sensor system that realizes posture recognition and indoor localization in real time. Five custom sensor nodes fixed on the lower limbs, together with a standard Kalman filter, are used to acquire basic attitude data. After the attitude angles of five body segments (two thighs, two shanks and the waist) are obtained, the pitch angles of the left thigh and waist are used for posture recognition. Based on all these attitude angles, we can also calculate the coordinates of six lower-limb joints (two hip joints, two knee joints and two ankle joints). Then, a novel relative localization algorithm based on step length is proposed to realize indoor localization of the user. Several sparsely distributed active Radio Frequency Identification (RFID) tags are used to correct the accumulated error of the relative localization algorithm, and a set-membership filter is applied for data fusion. The experimental results verify the effectiveness of the proposed algorithms. PMID:27809230
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muha, Villo; Zagyva, Imre; Venkei, Zsolt
2009-04-03
Two dUTPase isoforms (23 kDa and 21 kDa) are present in the fruit fly, with the sole difference being an N-terminal extension. In the Drosophila embryo, both isoforms are detected inside the nucleus. Here, we investigated the function of the N-terminal segment using eYFP-dUTPase constructs. In Schneider 2 cells, only the 23 kDa construct showed nuclear localization, arguing that it may contain a nuclear localization signal (NLS). Sequence comparisons identified a lysine-rich nonapeptide with similarity to the human c-myc NLS. In Drosophila embryos during nuclear cleavages, the 23 kDa isoform showed the expected localization shifts. In contrast, although the 21 kDa isoform was excluded from the nuclei during interphase, it shifted to the nucleus during prophase and subsequent mitotic steps. The observed dynamic localization showed strict timing with the nuclear cleavage phases and explained how both isoforms can be present within the nuclear microenvironment, although at different stages of the cell cycle.
Erdas, Enrico; Pisano, Giuseppe; Pomata, Mariano; Pinna, Giovanni; Secci, Lucia; Licheri, Sergio; Daniele, Giovanni Maria
2006-01-01
The purpose of this report was to compare two different procedures for the treatment of idiopathic hydrocele, namely hydrocelectomy and percutaneous sclerotherapy, both performed in the outpatient or day-surgery setting. A detailed description of the technical steps of local anaesthesia is reported, together with the sclerotherapy method. The study was conducted in 71 patients with a total of 77 hydroceles treated from 1993 to 2004. Surgery was carried out in 53 cases and sclerotherapy in 24. The latter was more often chosen for elderly subjects as well as for patients who requested it. Local or locoregional anaesthesia was reserved for patients treated surgically. The two treatments were compared on the basis of the following parameters: age, operative time, length of hospital stay, success rate and complications. The efficacy of the two procedures was comparable (sclerotherapy 95.8% vs surgery 100%), but sclerotherapy proved more favourable in terms of simplicity, rapidity of execution, shortness of hospital stay and risk of complications. However, 41.7% of sclerotherapy patients required more than one treatment to obtain a radical cure, whereas surgery was effective in all cases in a single step. Hospital stay and morbidity were almost the same when surgery was performed under local anaesthesia. Sclerotherapy is an efficient alternative to classic hydrocelectomy. The choice between the two treatment modalities should above all take into account the patient's individual preference.
A Protocol for Real-time 3D Single Particle Tracking.
Hou, Shangguo; Welsher, Kevin
2018-01-03
Real-time three-dimensional single particle tracking (RT-3D-SPT) has the potential to shed light on fast, 3D processes in cellular systems. Although various RT-3D-SPT methods have been put forward in recent years, tracking high speed 3D diffusing particles at low photon count rates remains a challenge. Moreover, RT-3D-SPT setups are generally complex and difficult to implement, limiting their widespread application to biological problems. This protocol presents a RT-3D-SPT system named 3D Dynamic Photon Localization Tracking (3D-DyPLoT), which can track particles with high diffusive speed (up to 20 µm²/s) at low photon count rates (down to 10 kHz). 3D-DyPLoT employs a 2D electro-optic deflector (2D-EOD) and a tunable acoustic gradient (TAG) lens to drive a single focused laser spot dynamically in 3D. Combined with an optimized position estimation algorithm, 3D-DyPLoT can lock onto single particles with high tracking speed and high localization precision. Owing to the single excitation and single detection path layout, 3D-DyPLoT is robust and easy to set up. This protocol discusses how to build 3D-DyPLoT step by step. First, the optical layout is described. Next, the system is calibrated and optimized by raster scanning a 190 nm fluorescent bead with the piezoelectric nanopositioner. Finally, to demonstrate real-time 3D tracking ability, 110 nm fluorescent beads are tracked in water.
An improved local radial point interpolation method for transient heat conduction analysis
NASA Astrophysics Data System (ADS)
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation, using the penalty-function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied, so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally, from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented, as in the finite element method (FEM), because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected as the time discretization scheme. Three numerical examples are presented to demonstrate the validity and accuracy of the present approach in comparison with traditional thin plate spline (TPS) radial basis functions.
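The thin plate spline interpolation underlying this approach can be sketched in a few lines. The code below fits an exact 2D TPS (kernel φ(r) = r² log r plus affine terms) through scattered data; this is a hedged sketch only, and the paper's smoothing (STPS) variant would additionally add a penalty term, i.e. λI on the kernel block.

```python
import numpy as np

def tps_phi(r):
    # Thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(np.where(r > 0, r, 1.0)), 0.0)

def tps_fit(points, values):
    """Fit an exact 2D thin plate spline through (x, y) -> value data.

    Solves the standard TPS linear system with kernel block K and
    affine block P = [1, x, y]; the smoothing variant of the paper
    would replace K by K + lambda * I.
    """
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    K = tps_phi(d)
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([np.asarray(values, float), np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    w, a = coef[:n], coef[n:]

    def interp(x, y):
        r = np.linalg.norm(pts - np.array([x, y]), axis=1)
        return float(tps_phi(r) @ w + a[0] + a[1] * x + a[2] * y)

    return interp

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.2)]
vals = [0.0, 1.0, 1.0, 2.0, 0.7]
f = tps_fit(pts, vals)
```

The exact spline reproduces the data at every node; the shape functions' Kronecker delta property mentioned in the abstract is what makes essential boundary conditions as easy to impose as in FEM.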
Biomass district heating methodology and pilot installations for public buildings groups
NASA Astrophysics Data System (ADS)
Chatzistougianni, N.; Giagozoglou, E.; Sentzas, K.; Karastergios, E.; Tsiamitros, D.; Stimoniaris, D.; Stomoniaris, A.; Maropoulos, S.
2016-11-01
The objective of the paper is to show how locally available biomass can support a small-scale district heating system of public buildings, especially when taking into account energy audit in-situ measurements and energy efficiency improvement measures. The step-by-step methodology is presented, including the research for local biomass availability, the thermal needs study and the study for the biomass district heating system, with and without energy efficiency improvement measures.
2013-01-01
Background Malnutrition, with accompanying weight loss, is an unnecessary risk in hospitalised persons and often remains poorly recognised and managed. The study aims to evaluate a hospital-wide multifaceted intervention co-facilitated by clinical nurses and dietitians addressing the nutritional care of patients, particularly those at risk of malnutrition. Using the best available evidence on reducing and preventing unplanned weight loss, the intervention (introducing universal nutritional screening; the provision of oral nutritional supplements; and providing red trays and additional support for patients in need of feeding) will be introduced by local ward teams in a phased way in a large tertiary acute care hospital. Methods/Design A pragmatic stepped wedge randomised cluster trial with repeated cross section design will be conducted. The unit of randomisation is the ward, with allocation by a random numbers table. Four groups of wards (n = 6 for three groups, n = 7 for one group) will be randomly allocated to each intervention time point over the trial. Two trained local facilitators (a nurse and dietitian for each group) will introduce the intervention. The primary outcome measure is change in patients' body weight; secondary patient outcomes are: length of stay, all-cause mortality, discharge destinations, readmission rates and ED presentations. Patient outcomes will be measured on one ward per group, with 20 patients measured per ward per time period by an unblinded researcher. Including baseline, measurements will be conducted at five time periods. Staff perspectives on the context of care will be measured with the Alberta Context Tool. Discussion Unplanned and unwanted weight loss in hospital is common. Despite the evidence and growing concern about hospital nutrition there are very few evaluations of system-wide nutritional implementation programs.
This project will test the implementation of a nutritional intervention across one hospital system using a staged approach, which will allow sequential rolling out of facilitation and project support. This project is one of the first evidence implementation projects to use the stepped wedge design in acute care and we will therefore be testing the appropriateness of the stepped wedge design to evaluate such interventions. Trial registration ACTRN12611000020987 PMID:23924302
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node-localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are treated as particles with masses, connected to neighboring nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions toward their original positions, and correspondingly the node positions. Therefore, a blind node's position can be determined by the LASM algorithm by calculating the forces exerted by the neighboring nodes. The computational and communication complexity is O(1) for each node, since the number of neighboring nodes does not increase proportionally with the network size. Three patches are proposed to avoid local optima, to remove bad nodes, and to deal with node variation. Simulation results show that the computational and communication complexity remain almost constant as the network size increases. The time consumption is also shown to remain almost constant, since the number of calculation steps is almost unrelated to the network size.
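The spring relaxation at the heart of LASM can be sketched for a single blind node with known-position neighbors. The gain k, iteration count, and function name below are illustrative assumptions, not values from the paper; each neighbor spring's rest length is the measured distance, and the node moves along the net spring force until equilibrium.

```python
import math

def spring_localize(blind, anchors, measured, k=0.3, iters=300):
    """Estimate a blind node's 2D position by spring relaxation.

    Each anchor neighbor is connected to the blind node by a virtual
    spring whose rest length is the measured distance; the node moves
    along the net spring force. O(1) work per node per iteration,
    matching the LASM complexity claim.
    """
    x, y = blind
    for _ in range(iters):
        fx = fy = 0.0
        for (ax, ay), d in zip(anchors, measured):
            dx, dy = x - ax, y - ay
            dist = math.hypot(dx, dy) or 1e-12
            f = k * (d - dist)          # spring force toward rest length d
            fx += f * dx / dist
            fy += f * dy / dist
        x += fx
        y += fy
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
measured = [math.hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
est = spring_localize((5.0, 5.0), anchors, measured)
```

With noisy distances the relaxation settles on a least-squares-like compromise position, which is why the paper needs its extra "patches" to escape local optima and discard bad nodes.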
Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices
NASA Astrophysics Data System (ADS)
Vishnepolsky, Rachel
A random walk on a discrete group satisfies a local limit theorem with power-law exponent α if the return probabilities follow the asymptotic law P{ return to starting point after n steps } ~ C ρ^n n^(−α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power-law exponent. Given two groups that obey universal local limit theorems, it is not known whether their Cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the Cartesian product of a nonamenable group whose Cayley graph is a tree and the integer lattice. As corollaries, we derive large-deviation estimates and a central limit theorem.
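The local limit asymptotics can be illustrated on the simplest case, the integer lattice Z, where the return probabilities are known in closed form and the power-law exponent is α = 1/2 (with spectral radius ρ = 1, since Z is amenable; the nonamenable tree factor in the abstract has ρ < 1). This example is an illustration of the general statement, not the paper's construction.

```python
import math

def return_prob(n):
    """Exact probability that a simple random walk on Z is back at the
    origin after 2n steps: C(2n, n) / 4^n."""
    return math.comb(2 * n, n) / 4**n

# Local limit behaviour: p_{2n} ~ (pi * n)^(-1/2), i.e. C = pi^(-1/2),
# rho = 1 and alpha = 1/2 in the law P{return after n steps} ~ C rho^n n^(-alpha).
approx = [return_prob(n) * math.sqrt(math.pi * n) for n in (10, 100, 1000)]
```

The normalized values climb toward 1 as n grows, confirming the n^(−1/2) power law; on a d-dimensional lattice the same computation gives α = d/2.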
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pucar, Darko; Hricak, Hedvig; Shukla-Dave, Amita
2007-09-01
Purpose: To determine whether prostate cancer local recurrence after radiation therapy (RT) occurs at the site of the primary tumor by retrospectively comparing the tumor location on pre-RT and post-RT magnetic resonance imaging (MRI), using step-section pathology after salvage radical prostatectomy (SRP) as the reference standard. Methods and Materials: Nine patients with localized prostate cancer were treated with intensity modulated RT (69-86.4 Gy), and had pre-RT and post-RT prostate MRI, biopsy-proven local recurrence, and SRP. The location and volume of lesions on pre-RT and post-RT MRI were correlated with step-section pathology findings. Tumor foci >0.2 cm³ and/or resulting in extraprostatic disease on pathology were considered clinically significant. Results: All nine significant tumor foci (one in each patient; volume range, 0.22-8.63 cm³) were detected on both pre-RT and post-RT MRI and displayed strikingly similar appearances on pre-RT MRI, post-RT MRI and step-section pathology. Two clinically insignificant tumor foci (≤0.06 cm³) were not detected on imaging. The ratios between tumor volumes on pathology and on post-RT MRI ranged from 0.52 to 2.80. Conclusions: Our study provides a direct visual confirmation that clinically significant post-RT local recurrence occurs at the site of the primary tumor. Our results are in agreement with reported clinical and pathologic results and support the current practice of boosting the radiation dose within the primary tumor using imaging guidance. They also suggest that monitoring of the primary tumor with pre-RT and post-RT MRI could lead to early detection of local recurrence amenable to salvage treatment.
BIRCH: a user-oriented, locally-customizable, bioinformatics system.
Fristensky, Brian
2007-02-09
Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
BIRCH: A user-oriented, locally-customizable, bioinformatics system
Fristensky, Brian
2007-01-01
Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351
Curved-line search algorithm for ab initio atomic structure relaxation
NASA Astrophysics Data System (ADS)
Chen, Zhanghui; Li, Jingbo; Li, Shushen; Wang, Lin-Wang
2017-09-01
Ab initio atomic relaxations often take large numbers of steps and long times to converge, especially when the initial atomic configurations are far from the local minimum or there are curved and narrow valleys in the multidimensional potential. An atomic relaxation method based on on-the-fly force learning and a corresponding curved-line search algorithm is presented to accelerate this process. Results demonstrate the superior performance of this method for metal and magnetic clusters when compared with the conventional conjugate-gradient method.
GenCade Lateral Boundary Conditions
2017-01-01
Figure 4 shows an example pocket beach at Boston Bay, Portland Parish, Jamaica, where beach sediments are mainly locally derived. GenCade's lateral boundary conditions specify the sediment transport onto and off of each end of the grid. The shoreline position of cell i at time step j+1 (Figure 1) is determined by calculating the sediment transport across cell walls i and i+1 [denoted Q(i,j) and Q(i+1,j)], plus the user-defined sources and sinks.
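The cell update described above can be sketched as a one-line explicit shoreline-change step. Variable names, the sign convention, and the active-profile-height normalization are illustrative assumptions in the style of one-line models, not GenCade's actual code.

```python
def update_shoreline(y, Q, sources, dt, dx, d_total):
    """One explicit time step of a one-line shoreline-change model.

    y        : shoreline positions of the N cells at time step j
    Q        : longshore transport across the N+1 cell walls; Q[0] and
               Q[-1] are set by the lateral boundary conditions
    sources  : user-defined source/sink rate per cell (e.g. beach fill)
    d_total  : active profile height over which the shoreline moves

    Sign convention (assumed): cell i gains sand, and the shoreline
    advances, when inflow Q[i] exceeds outflow Q[i+1].
    """
    return [y[i] - dt / (d_total * dx) * (Q[i + 1] - Q[i] - sources[i])
            for i in range(len(y))]

# Uniform transport along the grid leaves the shoreline unchanged
y1 = update_shoreline([0.0] * 5, [10.0] * 6, [0.0] * 5,
                      dt=1.0, dx=50.0, d_total=8.0)
```

Because each cell needs the transport on both of its walls, the boundary values Q[0] and Q[-1] must be supplied externally, which is exactly the role of the lateral boundary conditions the report describes.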
Effect of hydrogen on cathodic corrosion of titanium aluminide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, K.W.; Jin, J.W.; Qiao, L.J.
1996-01-01
Cathodic corrosion of titanium aluminide (TiAl) during hydrogen charging in various acidic aqueous solutions and in molten salt at 160 °C was studied. At constant potential, the rate of cathodic corrosion (V) was much higher than during anodic dissolution, and V increased linearly with increasing current. V was 10 times higher in the acid solution than in the salt solution under the same current. Disruption of the surface film by local hydride formation during cathodic polarization was shown to be the key step.
NASA Astrophysics Data System (ADS)
Shi, Y.; Gosselink, D.; Gharavi, K.; Baugh, J.; Wasilewski, Z. R.
2017-11-01
The optimization of metamorphic buffers for InSb/AlInSb QWs grown on GaAs (001) substrates is presented. With increasing surface offcut angle towards the [1-10] direction, the interaction of spiral growth around threading dislocations (TDs) with the offcut-induced atomic steps leads to a gradual change in the morphology of the AlSb buffer from one dominated by hillocks, to one exhibiting near-parallel steps, and finally to a surface with an increasing number of localized depressions. With the growth conditions used, the smoothest AlSb surface morphology was obtained for offcut angles in the range 0.8-1.3°. On substrates with 0° offcut, three subsequent repeats of Al0.24In0.76Sb/Al0.12In0.88Sb interlayers reduce the TD density of the AlSb buffer by a factor of 10, while a 70-fold reduction in the surface density of TD-related hillocks is observed. The remaining hillocks have a rectangular footprint and small facet angles with respect to the GaAs (001) surface: 0.4° towards the [1-10] direction and 0.7° towards the [110] direction. Their triangular-shaped sidewalls with regularly spaced atomic steps show occasional extra step-insertion sites, characteristic of TD outcrops. Many of the observed sidewalls are dislocation free and offer atomically smooth areas of up to 1 μm², already suitable for high-quality InSb growth and subsequent top-down fabrication of InSb nanowires. It is proposed that the sidewalls of the remaining hillocks offer local vicinal surfaces with an atomic step density optimal for suppressing TD-induced spiral growth, thus providing important information on the exact substrate offcut needed to achieve large, hillock-free, atomically smooth areas on AlInSb metamorphic buffers.
Individual-based modelling of population growth and diffusion in discrete time.
Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone
2017-01-01
Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates as a function of the IBM parameter settings.
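A discrete-time IBM step of this kind, with independent, simultaneous birth, death, and movement, can be sketched as follows. The parameter names, the per-individual event probabilities, and the 1D ring geometry are illustrative assumptions, not the paper's exact rules.

```python
import random

def ibm_step(grid, b, d, m, K, rng):
    """One synchronous time step of a lattice individual-based model.

    grid : list of per-cell counts on a 1D ring
    Each individual independently dies (prob d), reproduces with the
    logistic probability b * (1 - N/K) given its cell's count N, and
    moves to a random neighbouring cell (prob m). The binomial
    fluctuations in cells with few individuals are what drive the
    discrete-time deviations from the Fisher-Kolmogorov limit.
    """
    n = len(grid)
    new = [0] * n
    for i, count in enumerate(grid):
        p_birth = max(0.0, b * (1 - count / K))
        for _ in range(count):
            if rng.random() < d:
                continue                      # death
            offspring = 2 if rng.random() < p_birth else 1
            for _ in range(offspring):
                j = i
                if rng.random() < m:          # unbiased nearest-neighbour move
                    j = (i + rng.choice((-1, 1))) % n
                new[j] += 1
    return new

# A small population spreading from the centre of a ring
rng = random.Random(42)
grid = [0] * 20
grid[10] = 10
for _ in range(50):
    grid = ibm_step(grid, b=0.3, d=0.05, m=0.25, K=40, rng=rng)
```

With birth and death switched off the step conserves the total count, which is a useful sanity check that the movement rule alone is mass-preserving.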
Marchevsky, M.; Ambrosio, G.; Lamm, M.; ...
2016-02-12
Acoustic emission (AE) detection is a noninvasive technique allowing the localization of the mechanical events and quenches in superconducting magnets. Application of the AE technique is especially advantageous in situations where magnet integrity can be jeopardized by the use of voltage taps or inductive pickup coils. As the prototype module of the transport solenoid (TS) for the Mu2e experiment at Fermilab represents such a special case, we have developed a dedicated six-channel AE detection system and accompanying software aimed at localizing mechanical events during the coil cold testing. The AE sensors based on transversely polarized piezoceramic washers combined with cryogenic preamplifiers were mounted at the outer surface of the solenoid aluminum shell, with a 60° angular step around the circumference. Acoustic signals were simultaneously acquired at a rate of 500 kS/s, prefiltered and sorted based on their arrival time. Next, based on the arrival timing, angular and axial coordinates of the AE sources within the magnet structure were calculated. Furthermore, we present AE measurement results obtained during cooldown, spot heater firing, and spontaneous quenching of the Mu2e TS module prototype and discuss their relevance for mechanical stability assessment and quench localization.
A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.
Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying
2018-06-13
The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
Numazawa, Satoshi; Smith, Roger
2011-10-01
Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
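A minimal rejection-free kinetic Monte Carlo step of the kind underlying such simulations can be sketched as below; the Arrhenius rates, attempt frequency, and barrier values are generic illustrations, not the paper's hierarchical classification scheme.

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant, eV/K

def kmc_step(barriers, T=300.0, nu0=1e13, rng=random.random):
    """Pick one transition with probability proportional to its Arrhenius
    rate and return (event index, time increment)."""
    rates = [nu0 * math.exp(-eb / (KB * T)) for eb in barriers]
    total = sum(rates)
    # select the event by cumulative rate
    r = rng() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            break
    # exponentially distributed waiting time for the whole event catalog
    dt = -math.log(max(rng(), 1e-300)) / total
    return i, dt
```

The abstract's acceleration idea corresponds to removing low-barrier events from `barriers` (treating them as freely accepted thermal fluctuations) so that `total` shrinks and each KMC step advances further in time.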
Murakami, Yoshihiko; Yokoyama, Masayuki; Nishida, Hiroshi; Tomizawa, Yasuko; Kurosawa, Hiromi
2008-09-01
Several hemostat hydrogels are clinically used, and some other agents are studied for safer, more facile, and more efficient hemostasis. In the present paper, we propose a novel method to evaluate a local hemostat hydrogel on a tissue surface. The procedure consisted of the following steps: (step 1) a mouse was fixed on a cork board, and its abdomen was incised; (step 2) serous fluid was carefully removed because it affected the estimation of the weight gained by the filter paper, and parafilm and preweighed filter paper were placed beneath the liver (the parafilm prevented the filter paper's absorption of gradually oozing serous fluid); (step 3) the cork board was tilted and maintained at an angle of about 45 degrees so that the bleeding would more easily flow from the liver toward the filter paper; and (step 4) the bleeding lasted for 3 min. In this step, a hemostat was applied to the liver wound immediately after the liver was pricked with a needle. We found that (1) a careful removal of serous fluid prior to bleeding and (2) a quantitative determination of the amount of excess aqueous solution that oozed out from a hemostat were important to a rigorous evaluation of hemostat efficacy. We successfully evaluated the efficacy of a fibrin-based hemostat hydrogel by using our method. The method proposed in the present study enabled the quantitative, accurate, and easy evaluation of the efficacy of a local hemostatic hydrogel that acts as a tissue-adhesive agent on biointerfaces.
Huang, Changming; Lin, Mi
2018-02-25
According to Japanese gastric cancer treatment guidelines, the standard operation for locally advanced upper third gastric cancer is the total gastrectomy with D2 lymphadenectomy, which includes the dissection of the splenic hilar lymph nodes. With the development of minimally invasive ideas and surgical techniques, laparoscopic spleen-preserving splenic hilar lymph node dissection is gradually accepted. It needs high technical requirements and should be carried out by surgeons with rich experience of open operation and skilled laparoscopic techniques. Based on being familiar with the anatomy of splenic hilum, we should choose a reasonable surgical approach and standardized operating procedure. A favorable left-sided approach is used to perform the laparoscopic spleen-preserving splenic hilar lymph node dissection in Department of Gastric Surgery, Fujian Medical University Union Hospital. This means that the membrane of the pancreas is separated at the superior border of the pancreatic tail in order to reach the posterior pancreatic space, revealing the end of the splenic vessels' trunk. The short gastric vessels are severed at their roots. This enables complete removal of the splenic hilar lymph nodes and stomach. At the same time, based on the rich clinical practice of laparoscopic gastric cancer surgery, we have summarized an effective operating procedure called Huang's three-step maneuver. The first step is the dissection of the lymph nodes in the inferior pole region of the spleen. The second step is the dissection of the lymph nodes in the trunk of splenic artery region. The third step is the dissection of the lymph nodes in the superior pole region of the spleen. It simplifies the procedure, reduces the difficulty of the operation, improves the efficiency of the operation, and ensures the safety of the operation. 
To further explore the safety of laparoscopic spleen-preserving splenic hilar lymph node dissection for locally advanced upper third gastric cancer, in 2016 we launched a multicenter phase II trial of the safety and feasibility of laparoscopic spleen-preserving No.10 lymph node dissection for locally advanced upper third gastric cancer (CLASS-04). Through this multicenter prospective study, we aim to provide a scientific theoretical basis and clinical experience for the promotion and application of the operation, and also to standardize and popularize laparoscopic spleen-preserving splenic hilar lymph node dissection to promote its development. At present, the enrollment of the study has been completed, and the preliminary results also suggest that laparoscopic spleen-preserving No.10 lymph node dissection for locally advanced upper third gastric cancer is safe and feasible. We believe that with the improvement of the standardized operation training system, the progress of laparoscopic technology, and the promotion of Huang's three-step maneuver, laparoscopic spleen-preserving splenic hilar lymph node dissection will become one of the standard treatments for locally advanced upper third gastric cancer.
NASA Technical Reports Server (NTRS)
Jentink, Thomas Neil; Usab, William J., Jr.
1990-01-01
An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations, with special consideration given to the coarse-mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.
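The combination of a multistage scheme with local time stepping for steady-state convergence can be sketched in 1-D; the CFL number, the first-order upwind residual, and periodic indexing below are illustrative assumptions, not the reported scheme.

```python
def rk4_stage_coeffs():
    # classic four-stage coefficients (Jameson-style hybrid scheme)
    return [1 / 4, 1 / 3, 1 / 2, 1.0]

def local_steady_step(u, dx, a=1.0, cfl=0.9):
    """Advance one multistage step toward steady state, each cell using its
    own maximum stable time step instead of the global minimum."""
    n = len(u)
    dt = [cfl * dx[i] / abs(a) for i in range(n)]  # local time steps
    u0 = list(u)
    for alpha in rk4_stage_coeffs():
        # first-order upwind residual; u[i - 1] wraps periodically at i = 0
        res = [-a * (u[i] - u[i - 1]) / dx[i] for i in range(n)]
        u = [u0[i] + alpha * dt[i] * res[i] for i in range(n)]
    return u
```

Because only the steady state is of interest, the local `dt` does not need to be synchronized across cells, which is what makes the acceleration legitimate for convergence runs but not for time-accurate ones.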
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from + infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDE's) with variable coefficients.
Clemson, C M; Chow, J C; Brown, C J; Lawrence, J B
1998-07-13
These studies address whether XIST RNA is properly localized to the X chromosome in somatic cells where human XIST expression is reactivated, but fails to result in X inactivation (Tinker, A.V., and C.J. Brown. 1998. Nucl. Acids Res. 26:2935-2940). Despite a nuclear RNA accumulation of normal abundance and stability, XIST RNA does not localize in reactivants or in naturally inactive human X chromosomes in mouse/human hybrid cells. The XIST transcripts are fully stabilized despite their inability to localize, and hence XIST RNA localization can be uncoupled from stabilization, indicating that these are separate steps controlled by distinct mechanisms. Mouse Xist RNA tightly localized to an active X chromosome, demonstrating for the first time that the active X chromosome in somatic cells is competent to associate with Xist RNA. These results imply that species-specific factors, present even in mature, somatic cells that do not normally express Xist, are necessary for localization. When Xist RNA is properly localized to an active mouse X chromosome, X inactivation does not result. Therefore, there is not a strict correlation between Xist localization and chromatin inactivation. Moreover, expression, stabilization, and localization of Xist RNA are not sufficient for X inactivation. We hypothesize that chromosomal association of XIST RNA may initiate subsequent developmental events required to enact transcriptional silencing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandewouw, Marlee M., E-mail: marleev@mie.utoronto
Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors’ method is comprised of two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically, simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is extended in a novel way to explore a 3D, discrete environment: the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.
NASA Astrophysics Data System (ADS)
Richoz, Sylvain; Krystyn, Leopold; Algeo, Thomas; Bhargava, Om
2014-05-01
The understanding of extreme environmental changes, such as major extinction events, perturbations of global biogeochemical cycles, or rapid climate shifts, relies on a precise timing of the different events. Exact correlations are especially difficult to establish in such rapidly changing environments, which underlines the need for an integrated stratigraphy that uses all available tools. A Lower Triassic section at Mud in the Spiti Valley (Western Himalaya, India) is a candidate section for the GSSP of the Induan-Olenekian Boundary (IOB). The succession was deposited in a deep-shelf setting on the southern margin of the Neotethys Ocean. The section contains abundant fossils allowing a very precise regional biostratigraphy and displays no signs of sedimentary breaks. Analysis of pelagic faunas proves a significant, two-step radiation phase in ammonoids and conodonts close to the Induan-Olenekian boundary. These diversifications are coupled with a short-lived positive δ13Ccarb excursion of global extent. The Spiti δ13Ccarb excursion displays, however, a different amplitude and biostratigraphic position than in other relevant sections for this time interval. In this study, we analyzed δ13Ccarb, δ13Corg, and δ15Norg as well as major, trace, and REE concentrations for a 16-m-thick interval spanning the mid-Griesbachian to early Spathian substages, to better constrain the chain of events. Prior to the first radiation step, a large gradient between the δ13Ccarb values of shallow-water carbonate in tempestite beds and carbonate that originated in deeper water is interpreted as a sign of a stratified water column. This effect disappears with the onset of better-oxygenated conditions at the time of the ammonoid-conodont radiation, which also corresponds to positive δ13Ccarb, δ13Corg, and δ15Norg excursions. A decrease in Mo and U concentrations occurring at the same point suggests a shift toward locally less reducing conditions.
The second step coincided with the change from terrigenous to almost pure carbonate sedimentation. This new set of data demonstrates, on the one hand, the rapidity of the radiation of the pelagic fauna in the aftermath of the Permian-Triassic extinction as soon as environmental conditions were favourable again. On the other hand, it demonstrates that bathymetry, as well as other local factors, could have had a significant impact on the timing of these radiations and may hamper solid worldwide correlations.
The local work function: Concept and implications
NASA Astrophysics Data System (ADS)
Wandelt, K.
1997-02-01
The term 'local work function' is now widely applied. The present work discusses the common physical basis of 'photoemission of adsorbed xenon (PAX)' and 'two-photon photoemission spectroscopy of image-potential states' as local work function probes. New examples with bimetallic and defective surfaces are presented which demonstrate the capability of PAX measurements for the characterization of heterogeneous surfaces on an atomic scale. Finally, implications of the existence of short-range variations of the surface potential at surface steps are addressed. In particular, dynamical work-function change measurements are a sensitive probe for the step density at surfaces and, as such, a powerful in-situ method to monitor film growth.
Improving HIV outcomes in resource-limited countries: the importance of quality indicators.
Ahonkhai, Aima A; Bassett, Ingrid V; Ferris, Timothy G; Freedberg, Kenneth A
2012-11-24
Resource-limited countries increasingly depend on quality indicators to improve outcomes within HIV treatment programs, but indicators of program performance suitable for use at the local program level remain underdeveloped. Using the existing literature as a guide, we applied standard quality improvement (QI) concepts to the continuum of HIV care from HIV diagnosis, to enrollment and retention in care, and highlighted critical service delivery process steps to identify opportunities for performance indicator development. We then identified existing indicators to measure program performance, citing examples used by pivotal donor agencies, and assessed their feasibility for use in surveying local program performance. Clinical delivery steps without existing performance measures were identified as opportunities for measure development. Using National Quality Forum (NQF) criteria as a guide, we developed measurement concepts suitable for use at the local program level that address existing gaps in program performance assessment. This analysis of the HIV continuum of care identified seven critical process steps providing numerous opportunities for performance measurement. Analysis of care delivery process steps and the application of NQF criteria identified 24 new measure concepts that are potentially useful for improving operational performance in HIV care at the local level. An evidence-based set of program-level quality indicators is critical for the improvement of HIV care in resource-limited settings. These performance indicators should be utilized as treatment programs continue to grow.
Bornhütter, Tobias; Pohl, Judith; Fischer, Christian; Saltsman, Irena; Mahammed, Atif; Gross, Zeev; Röder, Beate
2016-04-13
Recent studies show the feasibility of photodynamic inactivation of green algae as a vital step towards an effective photodynamic suppression of biofilms by using functionalized surfaces. The investigation of the intrinsic mechanisms of photodynamic inactivation in green algae represents the next step in order to determine optimization parameters. The observation of singlet oxygen luminescence kinetics proved to be a very effective approach towards understanding mechanisms on a cellular level. In this study, the first two-dimensional measurement of singlet oxygen kinetics in phototrophic microorganisms on surfaces during photodynamic inactivation is presented. We established a system of reproducible algae samples on surfaces, incubated with two different cationic, antimicrobial potent photosensitizers. Fluorescence microscopy images indicate that one photosensitizer localizes inside the green algae while the other accumulates along the outer algae cell wall. A newly developed setup allows for the measurement of singlet oxygen luminescence on the green algae sample surfaces over several days. The kinetics of the singlet oxygen luminescence of both photosensitizers show different developments and a distinct change over time, corresponding with the differences in their localization as well as their photosensitization potential. While the complexity of the signal reveals a challenge for the future, this study incontrovertibly marks a crucial, inevitable step in the investigation of photodynamic inactivation of biofilms: it shows the feasibility of using the singlet oxygen luminescence kinetics to investigate photodynamic effects on surfaces and thus opens a field for numerous investigations.
Mei, Dan; Wen, Meng; Xu, Xuemei; Zhu, Yuzheng; Xing, Futang
2018-04-20
In the atmospheric environment, differences in the layout of urban buildings have a powerful influence on accelerating or inhibiting the dispersion of particulate matter (PM). In industrial cities, buildings of variable heights can obstruct the diffusion of PM from industrial stacks. In this study, PM dispersed within building groups was simulated by Reynolds-averaged Navier-Stokes equations coupled with a Lagrangian approach. Four typical street building arrangements were used: (a) a low-rise building block with height/base ratio H/b = 1 (b = 20 m); (b) a step-up building layout (H/b = 1, 2, 3, 4); (c) a step-down building layout (H/b = 4, 3, 2, 1); (d) a high-rise building block (H/b = 5). Profiles of stream functions and turbulence intensity were used to examine the effect of the various building layouts on atmospheric airflow. Concepts of particle suspension fraction and concentration distribution were used to evaluate the effect of wind speed on fine-particle transport. These parameters showed that step-up building layouts accelerated top airflow and diffused more particles into street canyons, likely having adverse effects on resident health. In the renewal of old industrial areas, the step-down building arrangement, which can hinder PM dispersion from high-level stacks, should be constructed preferentially. High turbulence intensity results in the formation of a strong vortex that hinders particles from entering the street canyons. It was found that an increase in wind speed enhanced particle transport and reduced local particle concentrations; however, it did not affect the relative location of high particle concentration zones, which are related to building height and layout. This study has demonstrated that the height variation and layout of urban architecture affect the local concentration distribution of particulate matter (PM) in the atmosphere and, for the first time, that wind velocity has particular effects on PM transport in various building groups.
The findings may have general implications for optimizing building layout based on particle-transport characteristics during the renewal of industrial cities. For city planners, the results and conclusions are useful for improving local air quality. The study method can also be used to calculate the explosion risk of industrial dust for people who live in industrial cities.
Reflections in computer modeling of rooms: Current approaches and possible extensions
NASA Astrophysics Data System (ADS)
Svensson, U. Peter
2005-09-01
Computer modeling of rooms is most commonly done by some calculation technique that is based on decomposing the sound field into separate reflection components. In a first step, a list of possible reflection paths is found, and in a second step, an impulse response is constructed from the list of reflections. Alternatively, the list of reflections is used for generating a simpler echogram, the energy decay as a function of time. A number of geometrical-acoustics-based methods can handle specular reflections, diffuse reflections, edge diffraction, curved surfaces, and locally/non-locally reacting surfaces to various degrees. This presentation gives an overview of how reflections are handled in the image source method and variants of the ray-tracing methods, which dominate today's commercial software, as well as in the radiosity method and edge diffraction methods. The use of the recently standardized scattering and diffusion coefficients of surfaces is discussed. Possibilities for combining edge diffraction, surface scattering, and impedance boundaries are demonstrated for an example surface. Finally, the number of reflection paths becomes prohibitively high when all such combinations are included, as demonstrated for a simple concert hall model. [Work supported by the Acoustic Research Centre through NFR, Norway.]
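The first of the two steps described — enumerating reflection paths — can be sketched for the image source method in its simplest form: first-order images of the source across the six walls of a shoebox room. The room geometry, receiver position, and speed of sound below are assumed example values.

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed)

def first_order_images(src, room):
    """Images of src = (x, y, z) across the six walls of a shoebox room
    with dimensions room = (Lx, Ly, Lz) and one corner at the origin."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # mirror across the wall plane
            images.append(tuple(img))
    return images

def arrival_times(src, rcv, room):
    """Direct-path plus first-order reflection delays, in seconds, sorted."""
    pts = [src] + first_order_images(src, room)
    return sorted(math.dist(p, rcv) / C for p in pts)
```

Higher reflection orders follow by mirroring the images recursively, which is exactly why the path count grows prohibitively when diffraction and scattering combinations are added on top.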
Dizon, Janine Margarita; Machingaidze, Shingai; Grimmer, Karen
2016-09-13
Developing new clinical practice guidelines (CPGs) can be time-consuming and expensive. A more efficient approach could be to adopt, adapt or contextualise recommendations from existing good quality CPGs so that the resultant guidance is tailored to the local context. The first step is to search for international CPGs that have a similar purpose, end-users and patients to your situation. The second step is to critically appraise the methodological quality of the CPGs to ensure that your guidance is based on credible evidence. Then the decisions begin. Can you simply 'adopt' the (parent) clinical practice guideline and implement the recommendations in their entirety, without any changes, in your setting? If so, then no further work is required. However, this situation is rare. What is more likely is that, even if recommendations from the parent clinical practice guideline can be adopted, how they are implemented needs to address local issues. Thus you may need to 'contextualise' the guidance by addressing implementation issues such as local workforce, training, health systems, equipment and/or access to services. Generally this means that additional information is required (Practice/Context Points) to support effective implementation of the clinical practice guideline recommendations. In some cases, you may need to 'adapt' the guidance, where you will make changes to the recommendations so that care is relevant to your local environment. This may involve additional work to search for local research, or to obtain local consensus, regarding how best to adapt recommendations. For example, adaptation might reflect substituting one drug for another (the drugs have similar effects, but the alternative drug to the recommended one may be cheaper, more easily obtained or more culturally acceptable). There is a lack of standardisation of clinical practice guideline terminology, leading to clinical practice guideline activities often being poorly conceptualised or reported.
We provide an approach that would help improve efficiency and standardisation of clinical practice guidelines activities.
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching.
In MARVL we are developing a web-based open source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
Machine for preparing phosphors for the fluorimetric determination of uranium
Stevens, R.E.; Wood, W.H.; Goetz, K.G.; Horr, C.A.
1956-01-01
The time saved by use of a machine for preparing many phosphors at one time increases the rate of productivity of the fluorimetric method for determining uranium. The machine prepares 18 phosphors at a time and eliminates the tedious and time-consuming step of preparing them by hand, while improving the precision of the method in some localities. The machine consists of a ring burner over which the platinum dishes, containing uranium and flux, are rotated. By placing the machine in an inclined position, the molten flux comes into contact with all surfaces within the dish as the dishes rotate over the flame. Precision is improved because the heating and cooling conditions are the same for each of the 18 phosphors in one run as well as for successive runs.
Dynamic sound localization in cats
Ruhland, Janet L.; Jones, Amy E.
2015-01-01
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772
Custodio, S.; Page, M.T.; Archuleta, R.J.
2009-01-01
We present a new method to combine static and wavefield data to image earthquake ruptures. Our combined inversion is a two-step procedure, following the work of Hernandez et al. (1999), and takes into account the differences between the resolutions of the two data sets. The first step consists of an inversion of the static field, which yields a map of slip amplitude. This inversion exploits a special irregular grid that takes into account the resolution of the static data. The second step is an inversion of the radiated wavefield; it results in the determination of the time evolution of slip on the fault. In the second step, the slip amplitude is constrained to resemble the static slip amplitude map inferred from the GPS inversion. Using this combined inversion, we study the source process of the 2004 M6 Parkfield, California, earthquake. We conclude that slip occurred in two main regions of the fault, each of which displayed distinct rupture behaviors. Slip initiated at the hypocenter with a very strong bilateral burst of energy. Here, slip was localized in a narrow area approximately 10 km long, the rupture velocity was very fast (~3.5 km/s), and slip only lasted a short period of time (<1 s). Then the rupture proceeded to a wider region 12-20 km northwest of the hypocenter. Here, the earthquake developed in a more moderate way: the rupture velocity slowed to ~3.0 km/s and slip lasted longer (1-2 s). The maximum slip amplitude was 0.45 m. Copyright 2009 by the American Geophysical Union.
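The two-step structure — a static inversion for slip amplitude, then a wavefield inversion constrained toward it — can be sketched with damped least squares; the tiny solver and the quadratic penalty (weight mu) below are illustrative assumptions, not the authors' parameterization or irregular grid.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def least_squares(G, d, prior=None, mu=0.0):
    """Minimize |G m - d|^2 + mu |m - prior|^2 via the normal equations
    (G^T G + mu I) m = G^T d + mu * prior."""
    n = len(G[0])
    prior = prior or [0.0] * n
    A = [[sum(G[r][i] * G[r][j] for r in range(len(G)))
          + (mu if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(G[r][i] * d[r] for r in range(len(G))) + mu * prior[i]
         for i in range(n)]
    return solve(A, b)
```

Step 1 would call `least_squares` on the static Green's functions with `mu = 0` to get the amplitude map; step 2 would invert the wavefield kernels with that map passed as `prior` and a nonzero `mu` enforcing the resemblance constraint.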
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. 
This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic station. As a direct consequence, we are able to save computing time for the calculation of the final back-projected maximum resultant amplitude at every grid-point. The capability of the method was demonstrated first using synthetic data. In the next step, this method was applied to data of 43 local earthquakes of low and medium magnitude (magnitudes between 1.7 and 4.3). These earthquakes were recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley-Semmering-Vienna Basin transfer fault system in Austria and why it is a region of relatively high earthquake hazard and risk. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
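The back-projection and minimum-selection steps can be sketched in a few lines. The amplitude-distance coefficients and the station/grid geometry below are invented placeholders, not the study's calibrated empirical relations:

```python
import numpy as np

# Hypothetical empirical amplitude-distance relation
#   M ~ log10(v) + K * log10(r) + C
# K and C are illustrative placeholders, not coefficients from the study.
K, C = 1.66, 3.3

def pseudo_magnitudes(grid_xy, station_xy, v_max):
    """Back-project each station's peak resultant velocity to every grid point."""
    r = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    r = np.maximum(r, 1.0)   # avoid log10(0) where a grid point meets a station
    return np.log10(v_max)[None, :] + K * np.log10(r) + C

def locate(grid_xy, station_xy, v_max):
    """Minimum pseudo-magnitude per grid point; its spatial maximum marks the epicenter."""
    pm = pseudo_magnitudes(grid_xy, station_xy, v_max)
    m_min = pm.min(axis=1)          # minimum over stations, robust to outliers
    best = int(np.argmax(m_min))    # maximum of the minima
    return best, float(m_min[best])
```

Taking the minimum over stations (rather than an L2/L1 misfit) is what makes a single disturbed station unable to inflate the estimate at any grid point.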
Dermatas, Evangelos
2015-01-01
A novel method for finger vein pattern extraction from infrared images is presented. This method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image that is both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so, to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods obtained by the local application of the model and the corresponding windows of the original image are added. In the resulting enhanced image the veins are sufficiently emphasized, and an accurate segmentation can then be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, so a postprocessing step is performed to extract a robust finger vein pattern. PMID:26120357
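The window-wise enhancement step can be sketched as follows. A separable Gaussian smoother stands in for the modified Mumford-Shah minimization, and the window size, sigma, and stride are invented for illustration; dark (concave) vein pixels sit below their local smooth fit, so the accumulated (smooth - original) differences are positive there:

```python
import numpy as np

def enhance_veins(img, win=15, sigma=2.0, step=8):
    """Accumulate (smoothed - original) differences over sliding windows.

    A separable Gaussian is an illustrative stand-in for the paper's
    windowed Mumford-Shah minimization; parameters are not from the paper.
    """
    ax = np.arange(win) - win // 2
    k = np.exp(-ax**2 / (2.0 * sigma**2))
    k /= k.sum()                                  # normalized separable kernel
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = img[y:y+win, x:x+win].astype(float)
            sm = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, patch)
            sm = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, sm)
            out[y:y+win, x:x+win] += sm - patch   # veins (dark, concave) -> positive
    return out
```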
Using a two-step matrix solution to reduce the run time in KULL's magnetic diffusion package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunner, T A; Kolev, T V
2010-12-17
Recently a Resistive Magnetohydrodynamics (MHD) package has been added to the KULL code. In order to be compatible with the underlying hydrodynamics algorithm, a new sub-zonal magnetics discretization was developed that supports arbitrary polygonal and polyhedral zones. This flexibility comes at the cost of many more unknowns per zone - approximately ten times more for a hexahedral mesh. We can eliminate some (or all, depending on the dimensionality) of the extra unknowns from the global matrix during assembly by using a Schur complement approach. This trades expensive global work for cache-friendly local work, while still allowing solution for the full system. Significant improvements in the solution time are observed for several test problems.
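The assembly-time elimination can be illustrated with dense blocks: the zone-local unknowns x1 are condensed out, leaving a smaller global system, and recovered afterwards. This is a generic static-condensation sketch with invented block sizes, not KULL's actual sub-zonal discretization:

```python
import numpy as np

def schur_eliminate(A, B, C, D, f, g):
    """Eliminate local unknowns x1 from [[A, B], [C, D]] [x1, x2] = [f, g].

    Returns the Schur complement system (S, rhs) for the global unknowns x2
    and a closure that recovers x1 once x2 is known. A is the small
    zone-local block, so inverting it is cheap, cache-friendly local work.
    """
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B          # condensed global matrix
    rhs = g - C @ Ainv @ f
    recover = lambda x2: Ainv @ (f - B @ x2)
    return S, rhs, recover
```

Solving the condensed system and back-substituting reproduces the solution of the full block system exactly, which is the sense in which the full system is "still solved".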
NASA Astrophysics Data System (ADS)
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As in the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes jumps over large distances by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles, in a suitable variational sense, the discrete scheme that has been implemented for the whole construction.
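Schematically, with the quadratic viscous correction mentioned above (μ > 0 a constant; the general theory admits other corrections δ), one step of the modified Incremental Minimization Scheme reads:

```latex
% One step of the modified time Incremental Minimization Scheme:
% given U_\tau^{n-1}, choose
U_\tau^{n} \in \operatorname*{argmin}_{x \in X}
  \Big( E(t^n, x) + d(U_\tau^{n-1}, x) + \mu\, d^2(U_\tau^{n-1}, x) \Big),
% where the last term is the viscous correction \delta (here proportional
% to the square of the quasi-distance d), penalizing long-distance jumps.
```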
NASA Technical Reports Server (NTRS)
Eppink, Jenna L.; Wlezien, Richard W.; King, Rudolph A.; Choudhari, Meelan
2015-01-01
A low-speed experiment was performed on a swept flat-plate model with an imposed pressure gradient to determine the effect of a backward-facing step on transition in a stationary-crossflow-dominated flow. Detailed hot-wire boundary-layer measurements were performed for three backward-facing step heights of approximately 36, 45, and 49% of the boundary-layer thickness at the step. These step heights correspond to a subcritical, nearly critical, and critical case. Three leading-edge roughness configurations were tested to determine the effect of stationary-crossflow amplitude on transition. The step caused a local increase in amplitude of the stationary crossflow for the two larger step-height cases, but farther downstream the amplitude decreased and remained below the baseline amplitude. The smallest step caused a slight local decrease in amplitude of the primary stationary-crossflow mode, but the amplitude collapsed back to the baseline case far downstream of the step. The effect of the step on the amplitude of the primary crossflow mode increased with step height; however, the stationary-crossflow amplitudes remained low, and thus stationary crossflow was not solely responsible for transition. Unsteady disturbances were present downstream of the step for all three step heights, and their amplitudes increased with increasing step height. The only exception is that the lower-frequency (traveling-crossflow-like) disturbance was not present in the lowest step-height case. Positive and negative spikes in instantaneous velocity began to occur for the two larger step-height cases and then grew in number and amplitude downstream of reattachment, eventually leading to transition. The number and amplitude of spikes varied depending on the step height and crossflow amplitude. Despite the low amplitude of the disturbances in the intermediate step-height case, breakdown began to occur intermittently and the flow underwent a long transition region.
A practical guide for the identification of major sulcogyral structures of the human cortex.
Destrieux, Christophe; Terrier, Louis Marie; Andersson, Frédéric; Love, Scott A; Cottier, Jean-Philippe; Duvernoy, Henri; Velut, Stéphane; Janot, Kevin; Zemmoura, Ilyess
2017-05-01
The precise sulcogyral localization of cortical lesions is mandatory to improve communication between practitioners and to predict and prevent post-operative deficits. This process, which assumes a good knowledge of the cortex anatomy and a systematic analysis of images, is, nevertheless, sometimes neglected in the neurological and neurosurgical training. This didactic paper proposes a brief overview of the sulcogyral anatomy, using conventional MR-slices, and also reconstructions of the cortical surface after a more or less extended inflation process. This method simplifies the cortical anatomy by removing part of the cortical complexity induced by the folding process, and makes it more understandable. We then reviewed several methods for localizing cortical structures, and proposed a three-step identification: after localizing the lateral, medial or ventro-basal aspect of the hemisphere (step 1), the main interlobar sulci were located to limit the lobes (step 2). Finally, intralobar sulci and gyri were identified (step 3) thanks to the same set of rules. This paper does not propose any new identification method but should be regarded as a set of practical guidelines, useful in daily clinical practice, for detecting the main sulci and gyri of the human cortex.
An analysis of iterated local search for job-shop scheduling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul
2003-08-01
Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state-of-the-art in the analysis of meta-heuristics by providing answers to this research question. The authors focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search.
The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take as the well-known disjunctive graph distance [MBK99].
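The alternating two-phase loop described above can be sketched generically. The callables (cost, neighbors, perturb) are problem-specific stand-ins, and the equal-or-better acceptance rule is one common choice, not I-JAR's exact criterion:

```python
import random

def iterated_local_search(x0, cost, neighbors, perturb, iters=200, seed=0):
    """Generic ILS skeleton: alternate greedy descent (the small-step
    operator) with perturbation (the large-step operator). Illustrative
    sketch only, not the paper's I-JAR algorithm."""
    rng = random.Random(seed)

    def descend(x):
        # Phase 1: local neighborhood search to a local optimum.
        while True:
            nbr = min(neighbors(x), key=cost)
            if cost(nbr) >= cost(x):
                return x
            x = nbr

    best = cur = descend(x0)
    for _ in range(iters):
        # Phase 2: perturb, then descend again from the perturbed solution.
        cand = descend(perturb(cur, rng))
        if cost(cand) <= cost(cur):     # accept equal-or-better candidates
            cur = cand
        if cost(cand) < cost(best):
            best = cand
    return best
```

On a rugged cost landscape with shallow local minima, the large-step jumps let the search escape attractor basins that greedy descent alone cannot leave, mirroring the weak-attractor observation above.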
Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors
NASA Astrophysics Data System (ADS)
Rottmann, J.; Keall, P.; Berbeco, R.
2013-06-01
Compensation of target motion during the delivery of radiotherapy has the potential to improve treatment accuracy, dose conformity and sparing of healthy tissue. We implement an online image guided therapy system based on soft tissue localization (STiL) of the target from electronic portal images and treatment aperture adaptation with a dynamic multi-leaf collimator (DMLC). The treatment aperture is moved synchronously and in real time with the tumor during the entire breathing cycle. The system is implemented and tested on a Varian TX clinical linear accelerator featuring an AS-1000 electronic portal imaging device (EPID) acquiring images at a frame rate of 12.86 Hz throughout the treatment. A position update cycle for the treatment aperture consists of four steps: in the first step, at time t = t0, a frame is grabbed; in the second step, the frame is processed with the STiL algorithm to obtain the tumor position at t = t0; in the third step, the tumor position at t = t0 + δt is predicted to overcome system latencies; and in the fourth step, the DMLC control software calculates the required leaf motions and applies them at time t = t0 + δt. The prediction model is trained before the start of the treatment with data representing the tumor motion. We analyze the system latency with a dynamic chest phantom (4D motion phantom, Washington University). We estimate the average planar position deviation between target and treatment aperture in a clinical setting by driving the phantom with several lung tumor trajectories (recorded from fiducial tracking during radiotherapy delivery to the lung). DMLC tracking for lung stereotactic body radiation therapy without fiducial markers was successfully demonstrated. The inherent system latency is found to be δt = (230 ± 11) ms for a MV portal image acquisition frame rate of 12.86 Hz. The root mean square deviation between tumor and aperture position is smaller than 1 mm.
We demonstrate the feasibility of real-time markerless DMLC tracking with a standard LINAC-mounted electronic portal imaging device (EPID).
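The latency-compensation step (predicting the tumor position δt ahead) can be sketched with a simple linear extrapolator. The trained model in the paper is more sophisticated; the five-sample fitting window here is an invented choice, while the 230 ms default matches the measured system latency:

```python
import numpy as np

def predict_ahead(t, y, latency=0.23, window=5):
    """Predict the position `latency` seconds past the newest sample by
    fitting a line to the last `window` samples. A simple stand-in for
    the trained prediction model described above."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(t[-window:], y[-window:], 1)
    return slope * (t[-1] + latency) + intercept
```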
Alomari, Yazan M.; MdZin, Reena Rahayu
2015-01-01
Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions of the whole slide under the microscope; these are called focus-point regions. This procedure leads to high interobserver variability, involves time-consuming and tedious work, and can cause inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, the RPPD method can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods. The proposed method achieved good performance when evaluated by three expert pathologists, with an average false-positive rate of 0.84% for focus-point region localization. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved. PMID:25793010
2014-01-01
Background National health surveys are sometimes used to provide estimates on risk factors for policy and program development at the regional/local level. However, as regional/local needs may differ from national ones, an important question is how to also enhance capacity for risk factor surveillance regionally/locally. Methods A Think Tank Forum was convened in Canada to discuss the needs, characteristics, coordination, tools and next steps to build capacity for regional/local risk factor surveillance. A series of follow-up activities to review the relevant issues pertaining to needs, characteristics and capacity of risk factor surveillance was conducted. Results The results confirmed the need for a regional/local risk factor surveillance system that is flexible, timely, of good quality, supported by a communication plan, and responsive to local needs. It is important to conduct an environmental scan and a gap analysis, to develop a common vision, to build central and local coordination and leadership, to build on existing tools and resources, and to use innovation. Conclusions The findings of the Think Tank Forum are important for building surveillance capacity at the local/county level, both in Canada and globally. This paper provides a follow-up review of the findings based on progress over the last 4 years. PMID:24451555
Steps to Become a Green Power Community
Green Power Communities are a subset of the Green Power Partnership; municipalities or tribal governments where the local government, businesses, and residents collectively use enough green power to meet GPP requirements. Learn the steps to become a GPC.
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the way to designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, i.e., locating the support of the perturbation by a simple method; this step reduces the inverse problem from an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem, with the perturbation approximated in a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.
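The two-step structure (locate the support, then optimize over it) can be sketched on a toy linear forward operator. The back-projection initializer, the threshold, and the plain CG inner solver below are illustrative stand-ins for the paper's Floquet-Bloch machinery and Newton-CG iteration:

```python
import numpy as np

def locate_support(G, d, frac=0.2):
    """Step 1 (initialization): back-project the data and keep the
    strongest entries -- a crude stand-in for the paper's simple
    support-localization method."""
    bp = np.abs(G.T @ d)
    return bp >= frac * bp.max()

def cg(A, b, iters=50, tol=1e-10):
    """Plain conjugate gradients for SPD A (the inner solver of Newton-CG)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def reconstruct(G, d, frac=0.2):
    """Step 2: solve the least-squares problem restricted to the located support.
    For this linear toy model, one Gauss-Newton step (normal equations via CG)
    is exact."""
    sup = locate_support(G, d, frac)
    Gs = G[:, sup]
    m = np.zeros(G.shape[1])
    m[sup] = cg(Gs.T @ Gs, Gs.T @ d)
    return m
```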
Method of Simulating Flow-Through Area of a Pressure Regulator
NASA Technical Reports Server (NTRS)
Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)
2011-01-01
The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
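The update rule in the last two sentences can be written directly. The secant-style extrapolation below is one plausible form of the patent's nonlinear projection function, not its exact definition; `rate` is the user-defined rate control parameter:

```python
def next_area(a_cur, a_prev, p_cur, p_prev, p_target, rate=0.5):
    """One update of the regulator's simulated flow-through area.

    a_proj extrapolates from the last two (area, pressure) pairs toward the
    target pressure -- an assumed, secant-style instance of the patent's
    nonlinear projection. The new area relaxes toward a_proj at `rate`.
    """
    a_proj = a_prev + (a_cur - a_prev) * (p_target - p_prev) / (p_cur - p_prev)
    return a_cur + rate * (a_proj - a_cur)
```

With a toy linear plant, iterating this update drives the downstream pressure to the target geometrically, at the rate set by the control parameter.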
Multichannel interictal spike activity detection using time-frequency entropy measure.
Thanaraj, Palani; Parvathavarthini, B
2017-06-01
Localization of interictal spikes is an important clinical step in the pre-surgical assessment of pharmacoresistant epileptic patients. The manual selection of interictal spike periods is cumbersome and involves a considerable analysis workload for the physician. The primary focus of this paper is to automate the detection of interictal spikes for clinical applications in epilepsy localization. The epilepsy localization procedure involves detection of spikes in a multichannel EEG epoch. Therefore, a multichannel Time-Frequency (T-F) entropy measure is proposed to extract features related to interictal spike activity. A least squares support vector machine is trained on the proposed feature to classify EEG epochs as either normal or interictal spike periods. The proposed T-F entropy measure, when validated on an epilepsy dataset of 15 patients, shows an interictal spike classification accuracy of 91.20%, sensitivity of 100% and specificity of 84.23%. Moreover, the area under the Receiver Operating Characteristic curve of 0.9339 shows the superior classification performance of the proposed T-F entropy measure. The results of this paper show good spike detection accuracy without any prior information about the spike morphology.
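A single-channel version of a T-F entropy feature can be sketched as follows. The window and hop sizes are illustrative, and the paper's exact multichannel measure is not reproduced here; the idea is that spiky epochs concentrate power in a few time-frequency bins and therefore score lower:

```python
import numpy as np

def tf_entropy(channel, win=64, hop=32):
    """Shannon entropy of the normalized time-frequency power of one EEG
    channel -- an illustrative T-F entropy feature, not the paper's exact
    multichannel measure."""
    frames = np.array([channel[i:i + win] * np.hanning(win)
                       for i in range(0, len(channel) - win + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    p = (power / power.sum()).ravel()     # joint T-F probability distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```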
Recent advances in high-order WENO finite volume methods for compressible multiphase flows
NASA Astrophysics Data System (ADS)
Dumbser, Michael
2013-10-01
We present two new families of better-than-second-order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high-order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which makes it possible to resolve the material contact wave very sharply when the mesh is moved at the speed of the material interface. The other family of methods is based on a high-order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes have several building blocks in common, in particular: a high-order WENO reconstruction operator to obtain high order of accuracy in space; the use of an element-local space-time Galerkin predictor step, which evolves the reconstruction polynomials in time and reaches high order of accuracy in time in one single step; and the use of a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.
The way from microscopic many-particle theory to macroscopic hydrodynamics.
Haussmann, Rudolf
2016-03-23
Starting from the microscopic description of a normal fluid in terms of any kind of local interacting many-particle theory, we present a well-defined step-by-step procedure to derive the hydrodynamic equations for the macroscopic phenomena. We specify the densities of the conserved quantities as the relevant hydrodynamic variables and apply the methods of non-equilibrium statistical mechanics with projection operator techniques. As a result we obtain time-evolution equations for the hydrodynamic variables with three kinds of terms on the right-hand sides: reversible, dissipative and fluctuating terms. In their original form these equations are completely exact and contain nonlocal terms in space and time which describe nonlocal memory effects. Applying a few approximations, the nonlocal properties and the memory effects are removed. As a result we find the well-known hydrodynamic equations of a normal fluid with Gaussian fluctuating forces. We then investigate whether and how the time-inversion invariance is broken and how the second law of thermodynamics comes about. Furthermore, we show that the hydrodynamic equations with fluctuating forces are equivalent to stochastic Langevin equations and the related Fokker-Planck equation. Finally, we investigate the fluctuation theorem and find a modification by an additional term.
Morris, Christopher; Hoogenes, Jen; Shayegan, Bobby; Matsumoto, Edward D
2017-01-01
As urology training shifts toward competency-based frameworks, the need for tools for high-stakes assessment of trainees is crucial. Validated assessment metrics are lacking for robot-assisted radical prostatectomy (RARP). As RARP is quickly becoming the gold standard for treatment of localized prostate cancer, the development and validation of a RARP assessment tool for training is timely. We recruited 13 expert RARP surgeons from the United States and Canada to serve as our Delphi panel. Using an initial inventory developed via a modified Delphi process with urology residents, fellows, and staff at our institution, panelists iteratively rated each step and sub-step on a 5-point Likert scale of agreement for inclusion in the final assessment tool. Qualitative feedback was elicited for each item to determine proper step placement and wording and to gather suggestions. Panelists' responses were compiled and the inventory was edited through three iterations, after which 100% consensus was achieved. The initial inventory steps were decreased by 13% and a skip pattern was incorporated. The final RARP stepwise inventory comprised 13 critical steps with 52 sub-steps. There was no attrition throughout the Delphi process. Our Delphi study resulted in a comprehensive inventory of intraoperative RARP steps with excellent consensus. This final inventory will be used to develop a valid and psychometrically sound intraoperative assessment tool for use during RARP training and evaluation, with the aim of increasing competency of all trainees. Copyright® by the International Brazilian Journal of Urology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme.
For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
Numerical simulation of the kinetic effects in the solar wind
NASA Astrophysics Data System (ADS)
Sokolov, I.; Toth, G.; Gombosi, T. I.
2017-12-01
Global numerical simulations of the solar wind are usually based on the ideal or resistive MagnetoHydroDynamics (MHD) equations. Within the framework of MHD, the electric field is assumed to vanish in the co-moving frame of reference (ideal MHD) or to obey a simple and non-physical scalar Ohm's law (resistive MHD). Maxwellian distribution functions are assumed; the electron and ion temperatures may be different. Non-dispersive MHD waves can be present in this numerical model. The averaged equations for MHD turbulence may be included, as well as the energy and momentum exchange between the turbulent and regular motion. With an explicit numerical scheme, the time step is controlled by the MHD wave propagation time across the numerical cell (the CFL condition). A more refined approach includes the Hall effect via the generalized Ohm's law. The Lorentz force acting on the light electrons is assumed to vanish, which gives an expression for the local electric field in terms of the total electric current, the ion current, as well as the electron pressure gradient and magnetic field. The waves (whistlers, ion-cyclotron waves, etc.) acquire dispersion, and the short-wavelength perturbations propagate with elevated speed, thus strengthening the CFL condition. If the grid size is sufficiently small to resolve the ion skin-depth scale, then the time step is much shorter than the ion gyration period. The next natural step is to use a hybrid code to resolve the ion kinetic effects. The hybrid numerical scheme employs the same generalized Ohm's law as Hall MHD and suffers from the same constraint on the time step while solving the evolution of the electromagnetic field. The important distinction, however, is that by solving the particle motion for ions we can achieve a more detailed description of the kinetic effects without a significant degradation in computational efficiency, because the time step is sufficient to resolve the particle gyration.
We present the first numerical results from the coupled BATS-R-US+ALTOR code as applied to kinetic simulations of the solar wind.
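The tightening of the time-step constraint by whistler dispersion can be made concrete with the usual scaling argument (grid-scale wavenumber k ~ π/dx, whistler phase speed ~ v_A k d_i once the ion skin depth d_i is resolved). This is a scaling sketch, not the exact stability bound of any of the codes mentioned:

```python
import math

def cfl_dt_mhd(dx, v_fast, cfl=0.5):
    """Explicit MHD: dt limited by the fastest wave crossing one cell."""
    return cfl * dx / v_fast

def cfl_dt_hall(dx, v_alfven, d_i, cfl=0.5):
    """Hall MHD: whistlers are dispersive with phase speed ~ v_A * (k d_i)
    at the grid scale k ~ pi/dx, so the stable dt shrinks like dx**2 once
    dx resolves the ion skin depth d_i -- a standard scaling estimate."""
    v_whistler = v_alfven * max(1.0, math.pi * d_i / dx)
    return cfl * dx / v_whistler
```

For dx much larger than d_i the two limits coincide; once dx drops below the skin depth, halving dx quarters the Hall time step, which is the quadratic penalty described above.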
NASA Astrophysics Data System (ADS)
Skowronek, Sandra; Van De Kerchove, Ruben; Rombouts, Bjorn; Aerts, Raf; Ewald, Michael; Warrie, Jens; Schiefer, Felix; Garzon-Lopez, Carol; Hattab, Tarek; Honnay, Olivier; Lenoir, Jonathan; Rocchini, Duccio; Schmidtlein, Sebastian; Somers, Ben; Feilhauer, Hannes
2018-06-01
Remote sensing is a promising tool for detecting invasive alien plant species. Mapping and monitoring those species requires accurate detection. So far, most studies relied on models that are locally calibrated and validated against available field data. Consequently, detecting invasive alien species at new study areas requires the acquisition of additional field data which can be expensive and time-consuming. Model transfer might thus provide a viable alternative. Here, we mapped the distribution of the invasive alien bryophyte Campylopus introflexus to i) assess the feasibility of spatially transferring locally calibrated models for species detection between four different heathland areas in Germany and Belgium and ii) test the potential of combining calibration data from different sites in one species distribution model (SDM). In a first step, four different SDMs were locally calibrated and validated by combining field data and airborne imaging spectroscopy data with a spatial resolution ranging from 1.8 m to 4 m and a spectral resolution of about 10 nm (244 bands). A one-class classifier, Maxent, which is based on the comparison of probability densities, was used to generate all SDMs. In a second step, each model was transferred to the three other study areas and the performance of the models for predicting C. introflexus occurrences was assessed. Finally, models combining calibration data from three study areas were built and tested on the remaining fourth site. In this step, different combinations of Maxent modelling parameters were tested. For the local models, the area under the curve for a test dataset (test AUC) was between 0.57 and 0.78, while the test AUC for the single transfer models ranged between 0.45 and 0.89. For the combined models the test AUC was between 0.54 and 0.9.
The success of transferring models calibrated in one site to another site highly depended on the respective study site; the combined models provided higher test AUC values than the locally calibrated models for three out of four study sites. Furthermore, we also demonstrated the importance of optimizing the Maxent modelling parameters. Overall, our results indicate the potential of a combined model to map C. introflexus without the need for new calibration data.
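The leave-one-site-out transfer protocol described above can be sketched compactly. In the hedged sketch below, `fit` and `score` are stand-ins for Maxent training and prediction (any presence-only model satisfying this interface would do), the AUC is the usual Mann-Whitney estimate, and all names and data layouts are hypothetical, not taken from the study:

```python
import numpy as np

def auc(pos, bg):
    # Mann-Whitney estimate of the AUC: probability that a presence
    # scores higher than a background point (ties count one half).
    pos, bg = np.asarray(pos), np.asarray(bg)
    gt = (pos[:, None] > bg[None, :]).sum()
    eq = (pos[:, None] == bg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(bg))

def transfer_auc(sites, fit, score):
    # sites: {name: (presence_spectra, background_spectra)}
    # For each target site, train a "combined" model on the presences of
    # the three remaining sites and evaluate on the held-out site.
    out = {}
    for target in sites:
        train = np.vstack([sites[s][0] for s in sites if s != target])
        model = fit(train)
        pres, bg = sites[target]
        out[target] = auc(score(model, pres), score(model, bg))
    return out
```

The combined-model test in the abstract corresponds to pooling the calibration presences of the three non-target sites, exactly as `transfer_auc` does with its `train` stack.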
Parks, Renee G; Tabak, Rachel G; Allen, Peg; Baker, Elizabeth A; Stamatakis, Katherine A; Poehler, Allison R; Yan, Yan; Chin, Marshall H; Harris, Jenine K; Dobbins, Maureen; Brownson, Ross C
2017-10-18
The rates of diabetes and prediabetes in the USA are growing, significantly impacting the quality and length of life of those diagnosed and financially burdening society. Premature death and disability can be prevented through implementation of evidence-based programs and policies (EBPPs). Local health departments (LHDs) are uniquely positioned to implement diabetes control EBPPs because of their knowledge of, and focus on, community-level needs, contexts, and resources. There is a significant gap, however, between known diabetes control EBPPs and actual diabetes control activities conducted by LHDs. The purpose of this study is to determine how best to support the use of evidence-based public health for diabetes (and related chronic diseases) control among local-level public health practitioners. This paper describes the methods for a two-phase study with a stepped-wedge cluster randomized trial that will evaluate dissemination strategies to increase the uptake of public health knowledge and EBPPs for diabetes control among LHDs. Phase 1 includes development of measures to assess practitioner views on and organizational supports for evidence-based public health, data collection using a national online survey of LHD chronic disease practitioners, and a needs assessment of factors influencing the uptake of diabetes control EBPPs among LHDs within one state in the USA. Phase 2 involves conducting a stepped-wedge cluster randomized trial to assess effectiveness of dissemination strategies with local-level practitioners at LHDs to enhance capacity and organizational support for evidence-based diabetes prevention and control. Twelve LHDs will be selected and randomly assigned to one of the three groups that cross over from usual practice to receive the intervention (dissemination) strategies at 8-month intervals; the intervention duration for groups ranges from 8 to 24 months. 
Intervention (dissemination) strategies may include multi-day in-person workshops, electronic information exchange methods, technical assistance through a knowledge broker, and organizational changes to support evidence-based public health approaches. Evaluation methods comprise surveys at baseline and the three crossover time points, abstraction of local-level diabetes and chronic disease control program plans and progress reports, and social network analysis to understand the relationships and contextual issues that influence EBPP adoption. ClinicalTrial.gov, NCT03211832.
Extended Lagrangian formulation of charge-constrained tight-binding molecular dynamics.
Cawkwell, M J; Coe, J D; Yadav, S K; Liu, X-Y; Niklasson, A M N
2015-06-09
The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [Niklasson, Phys. Rev. Lett., 2008, 100, 123004] has been applied to a tight-binding model under the constraint of local charge neutrality to yield microcanonical trajectories with both precise, long-term energy conservation and a reduced number of self-consistent field optimizations at each time step. The extended Lagrangian molecular dynamics formalism restores time reversal symmetry in the propagation of the electronic degrees of freedom, and it enables the efficient and accurate self-consistent optimization of the chemical potential and atomwise potential energy shifts in the on-site elements of the tight-binding Hamiltonian that are required when enforcing local charge neutrality. These capabilities are illustrated with microcanonical molecular dynamics simulations of a small metallic cluster using an sd-valent tight-binding model for titanium. The effects of the weak dissipation used to counteract the accumulation of numerical noise on the propagation of the auxiliary degrees of freedom for the chemical potential and the on-site Hamiltonian matrix elements were also investigated.
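The time-reversal symmetry mentioned above rests on a Verlet-like update of the auxiliary electronic variable, n(t+δt) = 2n(t) − n(t−δt) + κ(q(t) − n(t)), where q(t) is the SCF-optimized target. This is a minimal dissipation-free sketch with a toy scalar variable; the tight-binding forces, SCF targets, and dissipation terms of the actual scheme are omitted, and κ here is an arbitrary illustrative value:

```python
def xl_step(n_t, n_tm1, q, kappa=0.5):
    # Verlet-like, time-reversible update of the auxiliary electronic
    # degree of freedom n toward the SCF-optimized target q.
    return 2.0 * n_t - n_tm1 + kappa * (q - n_t)

# Time reversal symmetry: the update is symmetric in n(t+dt) and n(t-dt),
# so running the recursion backward recovers the initial history to
# round-off accuracy.
traj = [0.0, 0.1]
for _ in range(20):
    traj.append(xl_step(traj[-1], traj[-2], q=1.0))
back = [traj[-1], traj[-2]]
for _ in range(20):
    back.append(xl_step(back[-1], back[-2], q=1.0))
```

The backward pass applies the same function with the history order swapped, which is exactly the algebraic inverse of the forward step; this is the property that prevents the systematic energy drift that plagues non-reversible extrapolation of SCF guesses.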
Kearns, Kenneth L; Swallen, Stephen F; Ediger, M D; Sun, Ye; Yu, Lian
2009-02-12
Indomethacin glasses of varying stabilities were prepared by physical vapor deposition onto substrates at 265 K. Enthalpy relaxation and the mobility onset temperature were assessed with differential scanning calorimetry (DSC). Quasi-isothermal temperature-modulated DSC was used to measure the reversing heat capacity during annealing above the glass transition temperature Tg. At deposition rates near 8 Å/s, scanning DSC shows two enthalpy relaxation peaks and quasi-isothermal DSC shows a two-step change in the reversing heat capacity. We attribute these features to two distinct local packing structures in the vapor-deposited glass, and this interpretation is supported by the strong correlation between the two calorimetric signatures of the glass-to-liquid transformation. At lower deposition rates, a larger fraction of the sample is prepared in the more stable local packing. The transformation of the vapor-deposited glasses into the supercooled liquid above Tg is exceedingly slow, as much as 4500 times slower than the structural relaxation time of the liquid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pairsuwan, Weerapong
A short history of the SIAM Photon Source in Thailand is described. The facility is based on the 1 GeV storage ring obtained from the SORTEC consortium in Japan. After a redesign to include insertion straight sections, it produced first light in December 2001, and the first beam line became operational in early 2002. Special difficulties arise when a synchrotron light facility is obtained by donation, mostly owing to the absence of the human resource development that is normally accomplished during design and construction. Additional problems stem from the distance of a developing country like Thailand from the origin of the donated technical components. A donation provides no time to build up local capabilities or to include locally obtainable parts in the technical design. This makes future development, repairs, and maintenance more time-consuming, difficult, and expensive than they should be. In other cases, components are proprietary or obsolete or both, which requires redesign and engineering at a time when the replacement part should already be available to prevent a stoppage of operation. The build-up of a user community is very difficult, especially when the radiation spectrum is confined to the VUV regime; most scientific interest these days is focused on the x-ray regime. Due to its low beam energy, the SIAM storage ring did not produce useful x-ray intensities, and we are therefore in the midst of an upgrade to produce harder radiation. The first step has been achieved with a 20% increase of energy to 1.2 GeV. This step shifts the critical photon energy of bending magnet radiation from 800 eV to 1.4 keV, providing useful radiation up to 7 keV. A XAS beam line was completed in 2005 and experimentation is now very active. The next step is to install a 6.4 T wavelength shifter by the end of 2006, resulting in a critical photon energy of 6.15 keV. Further upgrades are planned for the coming years.
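The quoted critical photon energies are consistent with the standard bending-magnet formula E_c[keV] ≈ 0.665 E²[GeV] B[T] (equivalently, E_c ∝ E³ at fixed bending radius, since B then scales with E). A quick numerical check, as a sketch:

```python
def critical_energy_keV(E_GeV, B_T):
    # Standard synchrotron-radiation critical photon energy formula.
    return 0.665 * E_GeV**2 * B_T

# 6.4 T wavelength shifter at 1.2 GeV -> ~6.1 keV (abstract quotes 6.15 keV).
# Fixed-radius energy scaling: 0.8 keV x (1.2/1.0)^3 -> ~1.4 keV, as quoted
# for the bending-magnet radiation after the energy upgrade.
```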
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raghav, Anil; Lotekar, Ajay; Bhaskar, Ankush
We have studied the Forbush decrease (FD) event of February 14, 1978 using 43 neutron monitor observatories to understand the global signature of the FD. We studied the rigidity dependence of the shock amplitude and of the total FD amplitude and found nearly the same power-law index for both. The local time variation of the shock phase amplitude and of the time of maximum depression of the FD were investigated, indicating a possible effect of shock/CME orientation. We also analyzed the rigidity dependence of the time constants of the two-phase recovery. The time constants of the slow component of the recovery phase show a rigidity dependence, implying a possible effect of diffusion. Solar wind speed was observed to be well correlated with the slow component of the FD recovery phase, indicating solar wind speed as a possible driver of the recovery phase. To investigate the contributions of the interplanetary drivers, shock and CME, to the FD, we used shock-only and CME-only models, applied separately to the shock phase and main phase amplitudes, respectively. This confirms the presently accepted physical scenario in which the first step of the FD is due to the propagating shock barrier and the second step is due to the flux rope of the CME/magnetic cloud.
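The rigidity dependence referred to above is conventionally a power law, A(R) ~ R^(−γ), whose index is the slope of a linear fit in log-log space. A minimal sketch of that fit; the input arrays are hypothetical stand-ins for station cutoff rigidities (GV) and FD amplitudes (%), not the paper's data:

```python
import numpy as np

def powerlaw_index(R, A):
    # gamma in A(R) ~ R^(-gamma), via linear regression of log A on log R.
    slope, _ = np.polyfit(np.log(R), np.log(A), 1)
    return -slope
```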
NASA Astrophysics Data System (ADS)
Noble, David R.; Georgiadis, John G.; Buckius, Richard O.
1996-07-01
The lattice Boltzmann method (LBM) is used to simulate flow in an infinite periodic array of octagonal cylinders. Results are compared with those obtained by a finite difference (FD) simulation solved in terms of streamfunction and vorticity using an alternating direction implicit scheme. Computed velocity profiles are compared along lines common to both the lattice Boltzmann and finite difference grids. Along all such slices, both streamwise and transverse velocity predictions agree to within 0.5% of the average streamwise velocity. The local shear on the surface of the cylinders also compares well, with the only deviations occurring in the vicinity of the corners of the cylinders, where the slope of the shear is discontinuous. When a constant dimensionless relaxation time is maintained, LBM exhibits the same convergence behaviour as the FD algorithm, with the time step increasing as the square of the grid size. By adjusting the relaxation time such that a constant Mach number is achieved, the time step of LBM varies linearly with the grid size. The efficiency of LBM on the CM-5 parallel computer at the National Center for Supercomputing Applications (NCSA) is evaluated by examining each part of the algorithm. Overall, a speed of 13.9 GFLOPS is obtained using 512 processors for a domain size of 2176×2176.
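The method class being benchmarked can be illustrated with a minimal D2Q9 BGK stream-collide step on a fully periodic domain. This is a generic sketch of the algorithm family, not the paper's code, and it omits the cylinder boundary conditions:

```python
import numpy as np

# D2Q9 lattice: weights W and discrete velocities C.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, ux, uy):
    # Second-order Maxwell-Boltzmann expansion (sound speed c_s^2 = 1/3).
    cu = C[:, 0, None, None]*ux + C[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    # One BGK collision followed by streaming on a periodic grid.
    rho = f.sum(0)
    ux = (f * C[:, 0, None, None]).sum(0) / rho
    uy = (f * C[:, 1, None, None]).sum(0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau      # collision
    for k in range(9):                                 # streaming
        f[k] = np.roll(f[k], (C[k, 0], C[k, 1]), axis=(0, 1))
    return f
```

Because both the BGK collision and the periodic streaming conserve mass exactly, the total density is invariant step to step, which is a standard sanity check for any LBM implementation.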
Video Completion in Digital Stabilization Task Using Pseudo-Panoramic Technique
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Buryachenko, V. V.; Zotin, A. G.; Pakhirka, A. I.
2017-05-01
Video completion is a necessary stage after stabilization of a non-stationary video sequence if the stabilized frames are to retain the resolution of the original frames. Cropped stabilized frames usually lose 10-20% of their area, which worsens the visibility of the reconstructed scenes. An extension of the field of view may become available thanks to the unwanted pan-tilt-zoom camera movement. Our approach prepares a pseudo-panoramic key frame during the stabilization stage as a pre-processing step for the subsequent inpainting. It is based on a multi-layered representation of each frame, including the background and objects moving differently. The proposed algorithm involves four steps: background completion, local motion inpainting, local warping, and seamless blending. Our experiments show that seamless stitching is needed more often than the local warping step. Therefore, seamless blending was investigated in detail, covering its four main categories: feathering-based, pyramid-based, gradient-based, and optimal seam-based blending.
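Feathering is the simplest of the four blending families named above: pixels in the overlap are a weighted average, with weights ramping linearly across the overlap width. A minimal one-dimensional-ramp sketch for two grayscale image strips (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def feather_blend(left, right, overlap):
    # Linear feathering across `overlap` columns: weight 1 -> 0 for the
    # left image, 0 -> 1 for the right image.
    w = np.linspace(1.0, 0.0, overlap)[None, :]
    mid = left[:, -overlap:] * w + right[:, :overlap] * (1 - w)
    return np.hstack([left[:, :-overlap], mid, right[:, overlap:]])
```

Pyramid-, gradient-, and seam-based blending replace this fixed linear ramp with multi-scale, gradient-domain, or per-pixel optimal transitions, respectively, at higher computational cost.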
Caltrans : transit funding manual : managing the delivery of transit projects
DOT National Transportation Integrated Search
2001-05-01
This manual attempts to provide a step by step transit funding process. Included in this manual : is an overview of Caltrans Division of Mass Transportation, roles and responsibilities in : assisting local agencies to deliver transit projects. Transi...
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID]; Kotter, Dale Kent [Shelley, ID]; Rohrbaugh, David Thomas [Idaho Falls, ID]; Spencer, David Frazer [Idaho Falls, ID]
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
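The claimed processing chain (sample a field gradient over a time window, find its peak value and the fraction of the window at which it occurs, and re-represent the data as a function of frequency) can be sketched as follows. Function and variable names are hypothetical illustrations, not taken from the patent:

```python
import numpy as np

def analyze_gradient(g, dt):
    # g: sampled magnetic field gradient; dt: sampling interval (s).
    peak = np.max(np.abs(g))                    # peak gradient value
    t_peak = np.argmax(np.abs(g)) * dt          # when the peak occurs
    duration = len(g) * dt
    ratio = t_peak / duration                   # portion of the window
    spectrum = np.abs(np.fft.rfft(g))           # frequency representation
    freqs = np.fft.rfftfreq(len(g), dt)
    f_dom = freqs[np.argmax(spectrum[1:]) + 1]  # dominant non-DC frequency
    return peak, ratio, f_dom
```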
NASA Astrophysics Data System (ADS)
Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja
2016-04-01
Distributed hydrological models are best used when their outputs are compared not only to the outlet discharge but also to internal observed variables, so that they can serve as powerful hypothesis-testing tools. In this paper, the value of distributed networks of sensors for evaluating a distributed model and its underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture (9 locations) data are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years from January 1st 2007 to December 31st 2010 with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints for the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparing it to other studies found in the literature. In the next steps (steps 3 to 6), the focus was on more specific hydrological processes.
In step 3, distributed surface soil moisture data was used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data was used again but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied correlation between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see if the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model is able to represent satisfactorily the soil water storage dynamics and stream intermittency. On the other hand, the model does not reproduce the response times and the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071 Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement. l'Institut National Polytechnique de Grenoble, 269 pp (in French).
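The "classical performance criteria" of step 2 typically include the Nash-Sutcliffe efficiency; since the abstract does not list its exact criteria, the NSE below is an assumed example rather than the authors' metric:

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 for a perfect simulation, 0 for a
    # simulation no better than the observed mean, negative when worse.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)
```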
Local Structure Fixation in the Composite Manufacturing Chain
NASA Astrophysics Data System (ADS)
Girdauskaite, Lina; Krzywinski, Sybille; Rödel, Hartmut; Wildasin-Werner, Andrea; Böhme, Ralf; Jansen, Irene
2010-12-01
Compared to metal materials, textile-reinforced composites show interesting features, but also higher production costs because of the currently low automation rate in the manufacturing chain. Their applicability is also limited by quality problems, which restrict the production of complex-shaped dry textile preforms. New technologies, design concepts, and cost-effective manufacturing methods are needed in order to establish further fields of application. This paper deals with possible ways to improve the textile deformation process by locally applying a fixative to the structure parallel to the cut. This hinders unwanted deformation of the textile stock during the subsequent stacking and forming steps. It is found that suitable thermoplastic binders, applied in the appropriate manner, do not restrict forming of the textile and have no negative influence on the mechanical properties of the composite.
Stability analysis of the Euler discretization for SIR epidemic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanto, Agus
2014-06-19
In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For this discrete model, the existence of the disease-free equilibrium and the endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free and the endemic equilibrium are also derived. It is found that local asymptotic stability of the existing equilibrium is achieved only for a small time step size h. If h is increased past a critical value, both equilibria lose their stability. Our numerical simulations show that complex dynamical behavior, such as bifurcation or chaos phenomena, appears for relatively large h. Both analytical and numerical results show that the discrete SIR model has a richer dynamical behavior than its continuous counterpart.
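An Euler-discretized SIR system of this kind can be sketched in a few lines. The formulation below (normalized populations, births and deaths at rate mu) and all parameter values are illustrative assumptions, not necessarily the paper's exact model:

```python
def sir_euler(beta, gamma, mu, h, steps, s0, i0):
    # Forward-Euler iteration of a normalized SIR model with vital dynamics:
    #   ds/dt = mu - beta*s*i - mu*s
    #   di/dt = beta*s*i - (gamma + mu)*i
    s, i = s0, i0
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s, i = s + h * ds, i + h * di
    return s, i

# For R0 = beta/(gamma + mu) > 1 and small h, the iteration settles on the
# endemic equilibrium s* = (gamma + mu)/beta, i* = mu*(R0 - 1)/beta;
# past a critical h the discrete map destabilizes, as the paper describes.
```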
Wiring up pre-characterized single-photon emitters by laser lithography
NASA Astrophysics Data System (ADS)
Shi, Q.; Sontheimer, B.; Nikolay, N.; Schell, A. W.; Fischer, J.; Naber, A.; Benson, O.; Wegener, M.
2016-08-01
Future quantum optical chips will likely be hybrid in nature and include many single-photon emitters, waveguides, filters, as well as single-photon detectors. Here, we introduce a scalable optical localization-selection-lithography procedure for wiring up a large number of single-photon emitters via polymeric photonic wire bonds in three dimensions. First, we localize and characterize nitrogen vacancies in nanodiamonds inside a solid photoresist exhibiting low background fluorescence. Next, without intermediate steps and using the same optical instrument, we perform aligned three-dimensional laser lithography. As a proof of concept, we design, fabricate, and characterize three-dimensional functional waveguide elements on an optical chip. Each element consists of one single-photon emitter centered in a crossed-arc waveguide configuration, allowing for integrated optical excitation and efficient background suppression at the same time.
P300 and LORETA: comparison of normal subjects and schizophrenic patients.
Winterer, G; Mulert, C; Mientus, S; Gallinat, J; Schlattmann, P; Dorn, H; Herrmann, W M
2001-01-01
The aims of the present study were 1) to investigate how many cortical activity maxima of scalp-recorded P300 are detected by Low Resolution Electromagnetic Tomography (LORETA) when analyses are performed with high time resolution, 2) to see whether the resulting LORETA solution is in accordance with intracortical recordings as reported by others, and 3) to compare the pattern of cortical activation maxima in the P300 timeframe between schizophrenic patients and normal controls. Current density analysis was performed in 3-D Talairach space with high time resolution, i.e., in 6 ms steps, during an auditory choice reaction paradigm, separately for normal subjects and schizophrenic patients with subsequent group comparisons. In normal subjects, a sequence of at least seven cortical activation maxima was found between 240-420 ms poststimulus: the prefrontal cortex, anterior or medial cingulum, posterior cingulum, parietal cortex, temporal lobe, prefrontal cortex, and medial or anterior cingulum. Within the given limits of spatial resolution, this sequential maxima distribution largely met the expectations from reports on intracranial recordings and functional neuroimaging studies. However, localization accuracy was higher near the central midline than at lateral aspects of the brain. Schizophrenic patients activated their cortex less in a widespread area, mainly in the left hemisphere, including the prefrontal cortex, posterior cingulum, and temporal lobe. From these analyses and comparisons with intracranial recordings as reported by others, it is concluded that LORETA correctly localizes P300-related cortical activity maxima on the basis of 19 electrodes, except at lateral cortical aspects, which is most likely an edge phenomenon. The data further suggest that the P300 deficit in schizophrenics involves an extended cortical network of the left hemisphere at several steps in time during the information processing stream.
Navarro-Ramirez, Rodrigo; Lang, Gernot; Lian, Xiaofeng; Berlin, Connor; Janssen, Insa; Jada, Ajit; Alimi, Marjan; Härtl, Roger
2017-04-01
Portable intraoperative computed tomography (iCT) with integrated 3-dimensional navigation (NAV) offers new opportunities for more precise navigation in spinal surgery, eliminates radiation exposure for the surgical team, and accelerates surgical workflows. We present the concept of "total navigation" using iCT NAV in spinal surgery and propose a step-by-step guideline demonstrating how total navigation can eliminate fluoroscopy, with time-efficient workflows integrating iCT NAV into daily practice. A prospective study was conducted on collected data from patients undergoing iCT NAV-guided spine surgery. The number of scans, radiation exposure, and workflow of iCT NAV (e.g., instrumentation, cage placement, localization) were documented. Finally, the accuracy of pedicle screws and the time for instrumentation were determined. iCT NAV was successfully performed in 117 cases for various indications and in all regions of the spine. More than half (61%) of cases were performed in a minimally invasive manner. Navigation was used for skin incision, localization of the index level, and verification of implant position. iCT NAV was used to evaluate the neural decompression achieved in spinal fusion surgeries. Total navigation eliminated fluoroscopy in 75% of cases, entirely removing staff radiation exposure in those cases. The average times for iCT NAV setup and pedicle screw insertion were 12.1 and 3.1 minutes, respectively, with a pedicle screw accuracy of 99%. Total navigation makes spine surgery safer and more accurate, and it enables efficient and reproducible workflows. Fluoroscopy and radiation exposure for the surgical staff can be eliminated in the majority of cases. Copyright © 2017 Elsevier Inc. All rights reserved.
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations were corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) were tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. Asynchronous updating of parameter values once every five years (with data of the past five years) and synchronous updating of the groundwater levels is better suited to this groundwater system, with constant or slowly changing parameter values, than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
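One EnKF analysis step with the two anti-inbreeding measures named above (multiplicative covariance inflation and Schur-product spatial localization) can be sketched generically. This is an illustrative stand-in for the authors' Matlab implementation, with perturbed observations and hypothetical argument names:

```python
import numpy as np

def enkf_update(X, y, H, R, loc, infl=1.05, rng=None):
    # X: state ensemble (n_state x n_ensemble); y: observations;
    # H: observation operator; R: observation error covariance;
    # loc: localization matrix (Schur product with the sample covariance).
    rng = np.random.default_rng(rng)
    Ne = X.shape[1]
    xm = X.mean(1, keepdims=True)
    X = xm + infl * (X - xm)                  # multiplicative inflation
    A = X - X.mean(1, keepdims=True)
    P = loc * (A @ A.T) / (Ne - 1)            # localized sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Perturbed-observation analysis: each member sees a noisy copy of y.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Ne).T
    return X + K @ (Y - H @ X)
```

Setting `loc` to an all-ones matrix disables localization, and `infl=1.0` disables inflation, so the same routine covers the configurations the study compares.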
NASA Astrophysics Data System (ADS)
Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun
2013-10-01
Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison of some state-of-the-art SR methods shows the superiority of the proposed two-step approach, RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate the effectiveness under both simulation conditions and some real conditions.
2005-03-01
Team-wide accountability and rewards; functional focus vs. group accountability and rewards; employee-owner interest conflicts; process focus; lack of collaborative and cross-functional work; incompatible IT; need to share vs. compartmentalization of functional groups; localized vs. centralized decision making. Steps are: Step 1: Analyze Corporate Strategic Objectives Using SWOT (Strengths, Weaknesses, Opportunities, Threats) Methodology; Step 2 ...
Albrecht, H; Schwecht, M; Pöllmann, W; Parag, D; Erasmus, L P; König, N
1998-12-01
Upper limb ataxia is one of the most disabling symptoms in patients with multiple sclerosis (MS). There are some clinically tested therapeutic strategies, especially with regard to cerebellar tremor, but most of the methods used for the treatment of limb ataxia in physiotherapy and occupational therapy are not systematically evaluated, e.g. the effect of local ice applications reported by MS patients and therapists. We investigated 21 MS patients before and at several time points from 1 up to 45 min after cooling the most affected forearm. We used a series of 6 tests, including parts of the neurological status as well as activities of daily living. At each step, skin temperature and nerve conduction velocity were recorded. All tests were documented on video for later offline analysis. Standardized evaluation was done by the investigators and separately by an independent second team, both using numeric scales for quality of performance. After local cooling, all patients showed a positive effect, especially a reduction of intention tremor. In most cases this effect lasted 45 min, in some patients even longer. We presume that a cooling-induced decrease in the proprioceptive afferent inflow may be the cause of this reduction of cerebellar tremor. Patients can use ice applications as a method of treating themselves when a short-term reduction of intention tremor is required, e.g. for typing, signing, or self-catheterization.
Radiological incident preparedness: planning at the local level.
Tan, Clive M; Barnett, Daniel J; Stolz, Adam J; Links, Jonathan M
2011-03-01
Radiological terrorism has been recognized as a probable scenario with high impact. Radiological preparedness planning at the federal and state levels has been encouraging, but translating complex doctrines into operational readiness at the local level has proved challenging. Based on the authors' experience with radiological response planning for the City of Baltimore, this article describes an integrated approach to municipal-level radiological emergency preparedness planning, provides information on resources that are useful for radiological preparedness planning, and recommends a step-by-step process toward developing the plan with relevant examples from the experience in Baltimore. Local governmental agencies constitute the first line of response and are critical to the success of the operation. This article is intended as a starting framework for local governmental efforts toward developing a response plan for radiological incidents in their communities.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is used to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or as measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
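A common lightweight building block for this kind of downward-camera odometry is estimating the frame-to-frame translation by phase correlation; the sketch below illustrates that building block only, not the report's optical-flow-with-feature-recognition pipeline:

```python
import numpy as np

def phase_correlate(a, b):
    # Returns the integer (dy, dx) shift that maps frame a onto frame b,
    # via the peak of the normalized cross-power spectrum.
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = a.shape
    # Wrap indices in the upper half of the range to negative shifts.
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)
```

Accumulating such per-frame shifts (scaled by camera height and focal length) yields a dead-reckoned trajectory, which is exactly where the drift challenges discussed in the report come from.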
[Transanal endoscopic microsurgery (TEM) in the treatment of advanced rectal cancer].
Paci, Marcello; Scoglio, Daniele; Ursi, Pietro; Barchetti, Luciana; Fabiani, Bernardina; Ascoli, Giada; Lezoche, Giovanni
2010-01-01
After Heald's introduction of total mesorectal excision (TME) in 1982, which improved results in terms of recurrence and survival rates, there has been a need to explore new therapeutic options in the treatment of sub-peritoneal rectal cancer. In particular, local excision increasingly represents a valid technique for the treatment of non-advanced rectal cancer compared with more invasive procedures, especially in elderly and/or frail patients. The introduction of TEM (transanal endoscopic microsurgery) by Buess has extended local treatment to classes of patients who would normally have been candidates for TME. The authors review the literature and their own experience with TEM for early sub-peritoneal rectal cancer. The aim of the study is to analyze short- and long-term results in terms of local recurrence and survival rate, comparing the TEM technique with other transanal surgery in rectal cancer treatment. Preoperative chemoradiotherapy and rigorous imaging staging are the first steps in planning surgery. The time has come for local rectal cancer treatment to undergo the same devolution that was accomplished a few decades ago in the treatment of breast cancer.
Efficient iris recognition by characterizing key local variations.
Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin
2004-06-01
Unlike other biometrics such as fingerprints and faces, the distinctiveness of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, to the difficulty of effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearance or disappearance of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on the exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithms found in the current literature.
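The exclusive-OR matching step can be illustrated on binary feature sequences. The helper below (a hypothetical name, and a simplification of the paper's actual position-sequence encoding) scores two equal-length bit sequences by their normalized Hamming agreement:

```python
def xor_similarity(seq_a, seq_b):
    """Fast matching of two equal-length binary feature sequences via XOR:
    count mismatched bits and normalize. A sketch of the exclusive-OR
    matching idea; the paper's feature encoding is richer than raw bits."""
    if len(seq_a) != len(seq_b):
        raise ValueError("feature sequences must have equal length")
    mismatches = sum(a ^ b for a, b in zip(seq_a, seq_b))
    return 1.0 - mismatches / len(seq_a)   # 1.0 = identical, 0.0 = inverted
```

Because XOR and a popcount are single machine operations on packed words, this kind of matcher scales to large galleries, which is why iris systems favor it.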
A Methodology for Meta-Analysis of Local Climate Change Adaptation Policies
Local governments are beginning to take steps to address the consequences of climate change, such as sea level rise and heat events. However, we do not have a clear understanding of what local governments are doing -- the extent to which they expect climate change to affect their ...
A Meta-Analysis of Local Climate Change Adaptation Actions
Local governments are beginning to take steps to address the consequences of climate change, such as sea level rise and heat events. However, we do not have a clear understanding of what local governments are doing -- the extent to which they expect climate change to affect their...
Maintaining a Local Data Integration System in Support of Weather Forecast Operations
NASA Technical Reports Server (NTRS)
Watson, Leela R.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian
2010-01-01
Since 2000, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) have used a local data integration system (LDIS) as part of their forecast and warning operations. Each has benefited from 3-dimensional analyses that are delivered to forecasters every 15 minutes across the peninsula of Florida. The intent is to generate products that enhance short-range weather forecasts issued in support of NWS MLB and SMG operational requirements within East Central Florida. The current LDIS uses the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) package as its core, which integrates a wide variety of national, regional, and local observational data sets. It assimilates all available real-time data within its domain and is run at a finer spatial and temporal resolution than current national- or regional-scale analysis packages. As such, it provides local forecasters with a more comprehensive and complete understanding of evolving fine-scale weather features. Recent efforts have been undertaken to update the LDIS through the formal tasking process of NASA's Applied Meteorology Unit. The goals include upgrading LDIS with the latest version of ADAS, incorporating new sources of observational data, and making adjustments to shell scripts written to govern the system. A series of scripts run a complete modeling system consisting of the preprocessing step, the main model integration, and the post-processing step. The preprocessing step prepares the terrain, surface characteristics data sets, and the objective analysis for model initialization. 
Data ingested through ADAS include (but are not limited to) Level II Weather Surveillance Radar-1988 Doppler (WSR-88D) data from six Florida radars, Geostationary Operational Environmental Satellites (GOES) visible and infrared satellite imagery, surface and upper air observations throughout Florida from NOAA's Earth System Research Laboratory/Global Systems Division/Meteorological Assimilation Data Ingest System (MADIS), as well as the Kennedy Space Center/Cape Canaveral Air Force Station wind tower network. The scripts provide NWS MLB and SMG with several options for setting a desirable runtime configuration of the LDIS to account for adjustments in grid spacing, domain location, choice of observational data sources, and selection of background model fields, among others. The utility of an improved LDIS will be demonstrated through post-analysis warm and cool season case studies that compare high-resolution model output with and without the ADAS analyses. Operationally, these upgrades will result in more accurate depictions of the current local environment to help with short-range weather forecasting applications, while also offering an improved initialization for local versions of the Weather Research and Forecasting model.
Persistent coexistence of cyclically competing species in spatially extended ecosystems
NASA Astrophysics Data System (ADS)
Park, Junpyo; Do, Younghae; Huang, Zi-Gang; Lai, Ying-Cheng
2013-06-01
A fundamental result in the evolutionary-game paradigm of cyclic competition in spatially extended ecological systems, as represented by the classic Reichenbach-Mobilia-Frey (RMF) model, is that high mobility tends to hamper or even exclude species coexistence. This result was obtained under the hypothesis that individuals move randomly without taking into account the suitability of their local environment. We incorporate local habitat suitability into the RMF model and investigate its effect on coexistence. In particular, we hypothesize the use of "basic instinct" of an individual to determine its movement at any time step. That is, an individual is more likely to move when the local habitat becomes hostile and is no longer favorable for survival and growth. We show that, when such local habitat suitability is taken into account, robust coexistence can emerge even in the high-mobility regime where extinction is certain in the RMF model. A surprising finding is that coexistence is accompanied by the occurrence of substantial empty space in the system. Reexamination of the RMF model confirms the necessity and the important role of empty space in coexistence. Our study implies that adaptation/movements according to local habitat suitability are a fundamental factor to promote species coexistence and, consequently, biodiversity.
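The habitat-driven movement rule can be sketched as a toy Monte Carlo update on a lattice with cyclic dominance. The rates, neighborhood, and the simple "hostility" measure below are illustrative choices, not the RMF parameters or the exact rules used in the paper:

```python
import random

EMPTY = 0
N_SPECIES = 3

def predator_of(s):
    # Cyclic dominance 1 -> 2 -> 3 -> 1: species s is eaten by predator_of(s).
    return s % N_SPECIES + 1

def neighbors(i, j, n):
    # Von Neumann neighborhood on a periodic (toroidal) lattice.
    return [((i + 1) % n, j), ((i - 1) % n, j), (i, (j + 1) % n), (i, (j - 1) % n)]

def hostility(grid, i, j):
    """Fraction of neighbors that prey on the occupant: a crude stand-in
    for local habitat (un)suitability."""
    s = grid[i][j]
    nb = neighbors(i, j, len(grid))
    return sum(grid[a][b] == predator_of(s) for a, b in nb) / len(nb)

def step(grid, rng, sigma=0.3, mu=0.3):
    """One Monte Carlo update: selection, reproduction, or instinct-driven
    movement. Unlike random RMF mobility, an individual here moves with
    probability equal to the hostility of its local habitat."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    s = grid[i][j]
    if s == EMPTY:
        return
    a, b = rng.choice(neighbors(i, j, n))
    t = grid[a][b]
    r = rng.random()
    if t == predator_of(s) and r < sigma:           # occupant is eaten
        grid[i][j] = EMPTY
    elif t != EMPTY and s == predator_of(t) and r < sigma:
        grid[a][b] = EMPTY                           # occupant eats neighbor
    elif t == EMPTY and r < mu:                      # reproduction into empty site
        grid[a][b] = s
    elif rng.random() < hostility(grid, i, j):       # move only if habitat hostile
        grid[i][j], grid[a][b] = grid[a][b], grid[i][j]
```

Note that selection leaves empty sites behind, which is the mechanism the abstract highlights as essential for coexistence.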
An Integrated Approach to Indoor and Outdoor Localization
2017-04-17
A two-step process is proposed that performs an initial localization estimate, followed by particle-filter-based tracking. Initial localization is performed using WiFi and image observations. For tracking we ... source. ... mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to ...
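The tracking stage described above rests on a bootstrap particle filter. A generic one-dimensional predict/update/resample cycle (with hypothetical noise parameters and observation model, not the authors' WiFi/image sensor models) looks like:

```python
import math
import random

def particle_filter_step(particles, control, observe, obs, rng,
                         motion_noise=0.1, obs_noise=0.5):
    """One predict/update/resample cycle of a 1-D bootstrap particle filter.
    `observe` maps a state to its expected measurement; noise levels are
    illustrative assumptions."""
    # Predict: propagate each particle through the motion model plus noise.
    moved = [p + control + rng.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by a Gaussian observation likelihood.
    weights = [math.exp(-((observe(p) - obs) ** 2) / (2.0 * obs_noise ** 2))
               for p in moved]
    if sum(weights) == 0.0:
        weights = [1.0] * len(moved)     # degenerate case: fall back to uniform
    # Resample: draw a fresh particle set proportional to the weights.
    return rng.choices(moved, weights=weights, k=len(moved))
```

An initial coarse estimate (e.g., from WiFi) would simply seed the first particle set; the filter then tightens around the true state as observations arrive.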
NASA Astrophysics Data System (ADS)
Laurent, Valentin; Scaillet, Stéphane; Jolivet, Laurent; Augier, Romain
2017-04-01
The complex interplay between rheology, temperature and deformation profoundly influences how crustal-scale shear zones form and then evolve across a deforming lithosphere. Understanding early exhumation processes in subduction zones requires quantitative age constraints on the timing of strain localization within high-pressure shear zones. Using both the in situ laser ablation and conventional step-heating 40Ar/39Ar dating (on phengite single grains and populations) methods, this study aims at quantifying the duration of ductile deformation and the timing of strain localization within HP-LT shear zones of the Cycladic Blueschist Unit (CBU, Greece). The rate of this progressive strain localization is poorly constrained here, as it is in similar geological contexts. To retrieve realistic estimates of the rates of strain localization during exhumation, dense 40Ar/39Ar age transects were sampled along shear zones recently identified on Syros and Sifnos islands. There, field observations suggest that deformation progressively localized downward in the CBU during exhumation. In parallel, these shear zones are characterized by different degrees of retrogression from blueschist-facies to greenschist-facies P-T conditions overprinting eclogite-facies record throughout the CBU. Results show straightforward correlations between the degree of retrogression, the finite strain intensity and 40Ar/39Ar ages; the most ductilely deformed and retrograded rocks yielded the youngest 40Ar/39Ar ages. The possible effects of strain localization during exhumation on the record of the argon isotopic system in HP-LT shear zones are addressed. Our results show that strain has localized in shear zones over a 30 Ma long period and that individual shear zones evolve during 7-15 Ma. We also discuss these results at small-scale to see whether deformation and fluid circulations, channelled within shear bands, can homogenize chemical compositions and reset the 40Ar/39Ar isotopic record.
This study brings a new perspective on the process of strain localization through the dating of structures along strain gradients, especially on the possible variation of rates of localization through the entire exhumation history.
Computing the Partition Function for Kinetically Trapped RNA Secondary Structures
Lorenz, William A.; Clote, Peter
2011-01-01
An RNA secondary structure is locally optimal if there is no lower energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in polynomial time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures – indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy.
Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
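The notion of a locally optimal structure is easiest to demonstrate in the simplified base-pair-count energy model, where removing a pair always raises the energy, so local optimality reduces to saturation: no single valid pair can be added. The sketch below uses that toy model, not the Turner nearest-neighbor energies that RNAlocopt actually employs:

```python
def can_pair(x, y):
    # Watson-Crick pairs plus the GU wobble pair.
    return {x, y} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def valid_addition(seq, pairs, i, j, min_loop=3):
    """Check whether adding pair (i, j) keeps the structure a pseudoknot-free
    secondary structure with a minimum hairpin loop size."""
    if j - i <= min_loop or not can_pair(seq[i], seq[j]):
        return False
    paired = {k for p in pairs for k in p}
    if i in paired or j in paired:           # positions must be unpaired
        return False
    for a, b in pairs:                       # no crossing (pseudoknot) allowed
        if (a < i < b) != (a < j < b):
            return False
    return True

def is_locally_optimal(seq, pairs):
    """In the toy energy model E = -(number of base pairs), removing a pair
    always raises the energy, so a structure is locally optimal exactly when
    it is saturated: no single valid pair can be added."""
    n = len(seq)
    return not any(valid_addition(seq, pairs, i, j)
                   for i in range(n) for j in range(i + 1, n))
```

In the Turner model the test is more involved, since removing a pair can also lower the energy, but the "no single-pair move improves the structure" criterion is the same.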
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, Sallie E.
2015-06-30
In 2009, the Illinois State Geological Survey (ISGS), in collaboration with the Midwest Geological Sequestration Consortium (MGSC), created a regional technology training center to disseminate carbon capture and sequestration (CCS) technology gained through leadership and participation in regional carbon sequestration projects. This technology training center was titled and branded as the Sequestration Training and Education Program (STEP). Over the last six years STEP has provided local, regional, national, and international education and training opportunities for engineers, geologists, service providers, regulators, executives, K-12 students, K-12 educators, undergraduate students, graduate students, university and community college faculty members, and participants of community programs and functions, community organizations, and others. The goal for STEP educational programs has been knowledge sharing and capacity building to stimulate economic recovery and development by training personnel for commercial CCS projects. STEP has worked with local, national and international professional organizations and regional experts to leverage existing training opportunities and provide stand-alone training. This report gives detailed information on STEP activities during the grant period (2009-2015).
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
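The two objectives can be illustrated with small, self-contained implementations of Newman modularity (the snapshot-quality measure) and normalized mutual information between two partitions (the temporal-cost style comparison). This is a generic sketch of the two metrics, not the paper's decomposition-based optimizer:

```python
import math
from collections import Counter

def modularity(adj, labels):
    """Newman modularity Q for an undirected graph given as an adjacency
    dict {node: set(neighbors)} and a community label per node."""
    m2 = sum(len(nb) for nb in adj.values())          # 2 * number of edges
    deg = {u: len(nb) for u, nb in adj.items()}
    q = 0.0
    for u in adj:
        for v in adj:
            if labels[u] == labels[v]:
                a = 1.0 if v in adj[u] else 0.0
                q += a - deg[u] * deg[v] / m2
    return q / m2

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the same
    node set (dicts mapping node -> community label)."""
    n = len(labels_a)
    ca = Counter(labels_a.values())
    cb = Counter(labels_b.values())
    joint = Counter((labels_a[u], labels_b[u]) for u in labels_a)
    mi = sum(c / n * math.log(c * n / (ca[x] * cb[y]))
             for (x, y), c in joint.items())
    ha = -sum(c / n * math.log(c / n) for c in ca.values())
    hb = -sum(c / n * math.log(c / n) for c in cb.values())
    return mi / math.sqrt(ha * hb) if ha > 0 and hb > 0 else 1.0
```

A dynamic community detector of the kind described would score each candidate partition by `modularity` against the current snapshot and by `nmi` against the previous time step's partition.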
NASA Astrophysics Data System (ADS)
Kumar, Vandhna; Meyssignac, Benoit; Melet, Angélique; Ganachaud, Alexandre
2017-04-01
Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years is up to 3 times the global average. In this study, we attempt to reconstruct sea levels at selected sites in the region (Suva and Lautoka, Fiji; Noumea, New Caledonia) as a multiple linear regression of atmospheric and oceanic variables. We focus on interannual-to-decadal scale variability, and on lower-frequency signals (including the global mean sea level rise) over the 1979-2014 period. Sea levels are taken from tide gauge records and the ORAS4 reanalysis dataset, and are expressed as a sum of steric and mass changes as a preliminary step. The key development in our methodology is the use of wind stress curl as a proxy for the thermosteric component. This is based on the knowledge that wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. The analysis is primarily based on correlation between local sea level and selected predictors, the dominant one being wind stress curl. In the first step, proxy boxes for wind stress curl are determined via regions of highest correlation. The proportion of sea level explained via linear regression is then removed, leaving a residual. This residual is then correlated with other locally acting potential predictors: halosteric sea level, the zonal and meridional wind stress components, and sea surface temperature. The statistically significant predictors are used in a multiple linear regression function to simulate the observed sea level. The method is able to reproduce between 40 and 80% of the variance in observed sea level. Based on the skill of the model, it has high potential in sea level projection and downscaling studies.
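The two-step regression idea, fitting the dominant wind stress curl proxy first and then the remaining local predictors against the residual, can be sketched with ordinary least squares. The function name and the synthetic setup are illustrative assumptions, not the study's actual predictor selection:

```python
import numpy as np

def two_step_regression(sea_level, curl_proxy, other_predictors):
    """Fit sea level on the wind stress curl proxy, then fit the residual
    on the remaining locally acting predictors; return the fitted series
    and the fraction of variance explained. A sketch of the two-step
    procedure described above."""
    t = len(sea_level)
    # Step 1: leading predictor (wind stress curl proxy) alone.
    X1 = np.column_stack([np.ones(t), curl_proxy])
    beta1, *_ = np.linalg.lstsq(X1, sea_level, rcond=None)
    residual = sea_level - X1 @ beta1
    # Step 2: remaining predictors fitted to the residual.
    X2 = np.column_stack([np.ones(t)] + list(other_predictors))
    beta2, *_ = np.linalg.lstsq(X2, residual, rcond=None)
    fitted = X1 @ beta1 + X2 @ beta2
    explained = 1.0 - np.var(sea_level - fitted) / np.var(sea_level)
    return fitted, explained
```

In the study, only statistically significant residual predictors would enter step 2; here all supplied predictors are used for brevity.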
Energy Data Management Manual for the Wastewater Treatment Sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemar, Paul; De Fontaine, Andre
Energy efficiency has become a higher priority within the wastewater treatment sector, with facility operators and state and local governments ramping up efforts to reduce energy costs and improve environmental performance. Across the country, municipal wastewater treatment plants are estimated to consume more than 30 terawatt hours per year of electricity, which equates to about $2 billion in annual electric costs. Electricity alone can constitute 25% to 40% of a wastewater treatment plant’s annual operating budget and make up a significant portion of a given municipality’s total energy bill. These energy needs are expected to grow over time, driven by population growth and increasingly stringent water quality requirements. The purpose of this document is to describe the benefits of energy data management, explain how it can help drive savings when linked to a strong energy management program, and provide clear, step-by-step guidance to wastewater treatment plants on how to appropriately track energy performance. It covers the basics of energy data management and related concepts and describes different options for key steps, recognizing that a single approach may not work for all agencies. Wherever possible, the document calls out simpler, less time-intensive approaches to help smaller plants with more limited resources measure and track energy performance. Reviews of key, publicly available energy-tracking tools are provided to help organizations select a tool that makes the most sense for them. Finally, this document describes additional steps wastewater treatment plant operators can take to build on their energy data management systems and further accelerate energy savings.
Endoclip Magnetic Resonance Imaging Screening: A Local Practice Review.
Accorsi, Fabio; Lalonde, Alain; Leswick, David A
2018-05-01
Not all endoscopically placed clips (endoclips) are magnetic resonance imaging (MRI) compatible. At many institutions, endoclip screening is part of the pre-MRI screening process. Our objective is to determine the contribution of each step of this endoclip screening protocol in determining a patient's endoclip status at our institution. A retrospective review of patients' endoscopic histories on general MRI screening forms for patients scanned during a 40-day period was performed to assess the percentage of patients that require endoclip screening at our institution. Following this, a prospective evaluation of 614 patients' endoclip screening determined the percentage of these patients ultimately exposed to each step in the protocol (exposure), and the percentage of patients whose endoclip status was determined with reasonable certainty by each step (determination). Exposure and determination values for each step were calculated as follows (exposure, determination): verbal interview (100%, 86%), review of past available imaging (14%, 36%), review of endoscopy report (9%, 57%), and new abdominal radiograph (4%, 96%), or CT (0.2%, 100%) for evaluation of potential endoclips. Only 1 patient did not receive MRI because of screening (in situ gastrointestinal endoclip identified). Verbal interview is invaluable to endoclip screening, clearing 86% of patients with minimal monetary and time investment. Conversely, the limited availability of endoscopy reports and relevant past imaging somewhat restricts the determination rates of these. New imaging (radiograph or computed tomography) is required <5% of the time, and although costly and associated with patient irradiation, has excellent determination rates (above 96%) when needed. Copyright © 2017 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Adam J.; Scherrer, Joseph R.; Reiserer, Ronald S., E-mail: ron.reiserer@vanderbilt.edu
We present a simple apparatus for improved surface modification of polydimethylsiloxane (PDMS) microfluidic devices. A single treatment chamber for plasma activation and chemical/physical vapor deposition steps minimizes the time-dependent degradation of surface activation that is inherent in multi-chamber techniques. Contamination and deposition irregularities are also minimized by conducting plasma activation and treatment phases in the same vacuum environment. An inductively coupled plasma driver allows for interchangeable treatment chambers. Atomic force microscopy confirms that silane deposition on PDMS gives much better surface quality than standard deposition methods, which yield a higher local roughness and pronounced irregularities in the surface.
Application of an unstructured grid flow solver to planes, trains and automobiles
NASA Technical Reports Server (NTRS)
Spragle, Gregory S.; Smith, Wayne A.; Yadlin, Yoram
1993-01-01
Rampant, an unstructured flow solver developed at Fluent Inc., is used to compute three-dimensional, viscous, turbulent, compressible flow fields within complex solution domains. Rampant is an explicit, finite-volume flow solver capable of computing flow fields using either triangular (2d) or tetrahedral (3d) unstructured grids. Local time stepping, implicit residual smoothing, and multigrid techniques are used to accelerate the convergence of the explicit scheme. The paper describes the Rampant flow solver and presents flow field solutions about a plane, train, and automobile.
Preconditioned upwind methods to solve 3-D incompressible Navier-Stokes equations for viscous flows
NASA Technical Reports Server (NTRS)
Hsu, C.-H.; Chen, Y.-M.; Liu, C. H.
1990-01-01
A computational method for calculating low-speed viscous flowfields is developed. The method uses the implicit upwind-relaxation finite-difference algorithm with a nonsingular eigensystem to solve the preconditioned, three-dimensional, incompressible Navier-Stokes equations in curvilinear coordinates. The technique of local time stepping is incorporated to accelerate the rate of convergence to a steady-state solution. An extensive study of optimizing the preconditioned system is carried out for two viscous flow problems. Computed results are compared with analytical solutions and experimental data.
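Local time stepping for steady-state convergence can be demonstrated on a toy problem: first-order upwind pseudo-time marching of 1-D linear advection on a nonuniform grid, where each cell advances with its own CFL-limited step. This is a generic sketch of the acceleration technique, unrelated to either solver's implementation:

```python
import numpy as np

def steady_advection_local_dt(dx, a=1.0, u_in=1.0, cfl=0.9, n_iter=2000):
    """March du/dt + a*du/dx = 0 to steady state on a nonuniform grid with
    first-order upwind differencing. Each cell uses its own local time step
    dt_i = cfl * dx_i / a, which is time-inaccurate but accelerates
    convergence to the (trivial) steady state u = u_in."""
    u = np.zeros(len(dx))
    for _ in range(n_iter):
        upstream = np.concatenate(([u_in], u[:-1]))  # inflow boundary value
        residual = -a * (u - upstream) / dx          # upwind spatial residual
        u = u + (cfl * dx / a) * residual            # local pseudo-time step
    return u
```

With a single global step, the smallest cell would throttle every cell's update; the local step lets each cell advance at its own stability limit, which is exactly the motivation cited in these abstracts.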
Liu, Li; Liu, Mei-Hua; Deng, Lin-Lin; Lin, Bao-Ping; Yang, Hong
2017-08-23
In this Communication, we develop a two-step acyclic diene metathesis in situ polymerization/cross-linking method to synthesize uniaxially aligned main-chain liquid crystal elastomers with chemically bonded near-infrared absorbing four-alkenyl-tailed croconaine-core cross-linkers. Because of the extraordinary photothermal conversion property, such a soft actuator material can raise its local temperature from 18 to 260 °C in 8 s, and lift up burdens 5600 times heavier than its own weight, under 808 nm near-infrared irradiation.
Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M.
2015-01-01
Purpose: Present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). Methods: After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and Impulse-Response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes’ bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. Results: The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time, and can be adjusted to be more or less conservative than the corresponding finite difference simulation. Conclusion: With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. PMID:26096947
Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M
2016-05-01
Present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and impulse-response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes' bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time and can be adjusted to be more or less conservative than the corresponding finite difference simulation. With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. © 2015 Wiley Periodicals, Inc.
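The impulse-response idea can be demonstrated on a toy single-compartment thermal model (a stand-in for the Pennes bioheat characterization): simulate the response to a single unit pulse of heating once, then predict the temperature for any heating series by convolution. Because the discretized model is linear and time-invariant, the two approaches agree exactly here:

```python
import numpy as np

def simulate(P, k, c, dt):
    """Direct forward-Euler integration of dT/dt = -k*T + c*P(t) from T = 0.
    Toy single-compartment stand-in for the full bioheat simulation."""
    T = np.zeros(len(P))
    t = 0.0
    for i, p in enumerate(P):
        t = t + dt * (-k * t + c * p)
        T[i] = t
    return T

def unit_response(k, c, dt, n):
    """Characterize the system once: response to one unit of heating at step 0."""
    pulse = np.zeros(n)
    pulse[0] = 1.0
    return simulate(pulse, k, c, dt)

def predict_ir(P, h):
    """Impulse-response prediction: temperature is the discrete convolution
    of the unit-pulse response h with the heating (SAR) time series P."""
    return np.convolve(P, h)[:len(P)]
```

The speedup in the paper comes from this structure: the expensive full simulation is run once for the characterization, and each new pulse-sequence series is then predicted by a cheap convolution.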
Viking Afterbody Heating Computations and Comparisons to Flight Data
NASA Technical Reports Server (NTRS)
Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.
2006-01-01
Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/cm2 for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/cm2, occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8-species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.
The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.
Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou
2017-11-01
The step-up approach, consisting of multiple minimally invasive techniques, has gradually become the mainstream treatment for infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach with the conventional approach in managing IPN. According to treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time periods for a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis, whereas the novel approach consisted of four sequential steps: percutaneous drainage, negative-pressure irrigation, endoscopic necrosectomy and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis, local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, and duration of organ failure and sepsis. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). Although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group had a significantly lower incidence of new-onset organ failure and other major complications among patients with the most severe type of acute pancreatitis (AP).
Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the need for emergency operations in treating IPN, especially in patients with the most severe type of acute pancreatitis.
Incubation behavior of silicon nanowire growth investigated by laser-assisted rapid heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Sang-gil; Kim, Eunpa; Grigoropoulos, Costas P., E-mail: cgrigoro@berkeley.edu
2016-08-15
We investigate the early stage of silicon nanowire growth by the vapor-liquid-solid mechanism using laser-localized heating combined with ex-situ chemical mapping analysis by energy-filtered transmission electron microscopy. By achieving fast heating and cooling times, we can precisely determine the nucleation times for nanowire growth. We find that the silicon nanowire nucleation process occurs on a time scale of ∼10 ms, i.e., orders of magnitude faster than the times reported in investigations using furnace processes. The rate-limiting step for silicon nanowire growth at temperatures in the vicinity of the eutectic temperature is found to be the gas reaction and/or the silicon crystal growth process, whereas at higher temperatures it is the rate of silicon diffusion through the molten catalyst that dictates the nucleation kinetics.
Location detection and tracking of moving targets by a 2D IR-UWB radar system.
Nguyen, Van-Han; Pyun, Jae-Young
2015-03-19
In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments, because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization and tracking. We introduce a new combination of these proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. In the target detection step, a modification of the conventional CLEAN algorithm, which estimates the impulse response of the observation region, is applied to further eliminate false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
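The clutter-reduction step can be illustrated with a minimal sketch: a scalar Kalman filter per range bin tracks the nearly static clutter level across successive radar scans, and subtracting that estimate leaves the moving-target component. This is a generic sketch with assumed noise parameters, not the authors' exact filter:

```python
import numpy as np

def kalman_clutter_reduction(scans, q=1e-4, r=1e-1):
    """Estimate the quasi-static clutter in each range bin with a scalar
    Kalman filter and subtract it, leaving the moving-target component.
    scans: 2-D array of shape (n_scans, n_bins); q, r are assumed process
    and measurement noise variances."""
    n_scans, n_bins = scans.shape
    clutter = np.zeros(n_bins)            # clutter estimate per range bin
    p = np.ones(n_bins)                   # estimate variance per bin
    residual = np.empty(scans.shape)
    for t in range(n_scans):
        p = p + q                         # predict: clutter is near-constant
        k = p / (p + r)                   # Kalman gain
        clutter = clutter + k * (scans[t] - clutter)  # measurement update
        p = (1.0 - k) * p
        residual[t] = scans[t] - clutter  # target component + noise
    return residual
```

On bins containing only static clutter, the residual decays toward the measurement noise floor, while transient target echoes pass through largely unattenuated.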
A two-step FEM-SEM approach for wave propagation analysis in cable structures
NASA Astrophysics Data System (ADS)
Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert
2018-02-01
Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.
Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.
2015-01-01
Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depend on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences of all three methods compared to one another using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
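The dynamic-programming recursion behind DTW can be sketched in a few lines. This is a textbook O(nm) implementation with a squared-difference local cost, not the authors' code, and the traces in the test are illustrative:

```python
import numpy as np

def dtw_path(a, b):
    """Classic DTW between two 1-D traces: fill the accumulated-cost
    matrix using the squared sample difference as local cost, then
    backtrack the globally optimal warping path. Each path pair (i, j)
    gives the local time shift j - i in samples."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:               # backtrack from the end
        path.append((i - 1, j - 1))
        move = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if move == 0:
            i, j = i - 1, j - 1
        elif move == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return path
```

For two traces that differ by a local delay, the shifts j - i along the path recover that delay near the arrival, which is the warping function the abstract refers to.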
The Origins of Order: Self-Organization and Selection in Evolution
NASA Astrophysics Data System (ADS)
Kauffman, Stuart A.
The following sections are included:
* Introduction
* Fitness Landscapes in Sequence Space
* The NK Model of Rugged Fitness Landscapes
* The NK Model of Random Epistatic Interactions
* The Rank Order Statistics on K = N - 1 Random Landscapes
* The number of local optima is very large
* The expected fraction of fitter 1-mutant neighbors dwindles by 1/2 on each improvement step
* Walks to local optima are short and vary as a logarithmic function of N
* The expected time to reach an optimum is proportional to the dimensionality of the space
* The ratio of accepted to tried mutations scales as ln N/N
* Any genotype can only climb to a small fraction of the local optima
* A small fraction of the genotypes can climb to any one optimum
* Conflicting constraints cause a "complexity catastrophe": as complexity increases, accessible adaptive peaks fall toward the mean fitness
* The "Tunable" NK Family of Correlated Landscapes
* Other Combinatorial Optimization Problems and Their Landscapes
* Summary
* References
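Several of the K = N - 1 results listed above are easy to reproduce numerically. The sketch below performs a greedy adaptive walk on a fully random landscape, with each genotype's fitness realized lazily from a deterministic per-genotype seed; the conventions are assumptions for illustration, not Kauffman's original setup:

```python
import random

def adaptive_walk_length(N, seed=0):
    """Greedy adaptive walk on a K = N - 1 (fully random) NK landscape:
    every genotype's fitness is an independent uniform draw, realized
    lazily by seeding a generator from a hash of the bit string. Returns
    the number of improvement steps taken to reach a local optimum."""
    def fitness(g):
        return random.Random(hash((seed, g))).random()
    rng = random.Random(seed)
    g = tuple(rng.randint(0, 1) for _ in range(N))   # random start genotype
    steps = 0
    while True:
        neighbors = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(N)]
        fitter = [h for h in neighbors if fitness(h) > fitness(g)]
        if not fitter:
            return steps                  # g is a local optimum
        g = max(fitter, key=fitness)      # move to the fittest neighbor
        steps += 1
```

Averaged over seeds, walk lengths stay short and grow only slowly with N, consistent with the chapter's claim that walks to local optima vary as a logarithmic function of N.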
Cancerous tumor: the high frequency of a rare event.
Galam, S; Radomski, J P
2001-05-01
A simple model for cancer growth is presented using cellular automata. Cells diffuse randomly on a two-dimensional square lattice. Individual cells can turn cancerous at a very low rate. During each diffusive step, local fights may occur between healthy and cancerous cells. Associated outcomes depend on some biased local rules, which are independent of the overall cancerous cell density. The model's unique ingredients are the frequency of local fights and the bias amplitude. While each isolated cancerous cell is eventually destroyed, an initial two-cell tumor cluster is found to have a nonzero probability to spread over the whole system. The associated phase diagram for survival or death is obtained as a function of both the rate of fight and the bias distribution. Within the model, although the occurrence of a killing cluster is a very rare event, it turns out to happen almost systematically over long periods of time, e.g., on the order of an adult's life span. Thus, after some age, survival from tumorous cancer becomes random.
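A heavily simplified sketch of the paper's central ingredients: local fights with a biased outcome, starting from a two-cell cluster. Diffusion and the full rule set are omitted, so this is a toy biased contact process for illustration, not the authors' automaton:

```python
import random

def biased_invasion(p_win, n=12, seed=1, max_steps=50000):
    """Toy biased contact process on an n x n torus: each step one cancer
    cell fights across a random lattice direction; it colonizes a healthy
    neighbor with probability p_win, otherwise it dies. Returns 'extinct',
    'fixed' (tumor fills the lattice), or 'ongoing'."""
    rng = random.Random(seed)
    c = n // 2
    cancer = {(c, c), (c, c + 1)}               # initial two-cell cluster
    for _ in range(max_steps):
        if not cancer:
            return 'extinct'
        if len(cancer) == n * n:
            return 'fixed'
        x, y = rng.choice(sorted(cancer))
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        nx, ny = (x + dx) % n, (y + dy) % n
        if (nx, ny) in cancer:
            continue                            # fights only involve healthy cells
        if rng.random() < p_win:
            cancer.add((nx, ny))                # cancer wins the local fight
        else:
            cancer.discard((x, y))              # cancer cell is destroyed
    return 'ongoing'
```

Sweeping p_win over seeds traces a survival/extinction boundary analogous in spirit to the paper's phase diagram: a strongly biased cluster almost always spreads, a weakly biased one almost always dies out.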
Tourovskaia, Anna; Kosar, T Fettah; Folch, Albert
2006-03-15
During neuromuscular synaptogenesis, the exchange of spatially localized signals between nerve and muscle initiates the coordinated focal accumulation of the acetylcholine (ACh) release machinery and the ACh receptors (AChRs). One of the key first steps is the release of the proteoglycan agrin focalized at the axon tip, which induces the clustering of AChRs on the postsynaptic membrane at the neuromuscular junction. The lack of a suitable method for focal application of agrin in myotube cultures has limited the majority of in vitro studies to the application of agrin baths. We used a microfluidic device and surface microengineering to focally stimulate muscle cells with agrin at a small portion of their membrane and at a time and position chosen by the user. The device is used to verify the hypothesis that focal application of agrin to the muscle cell membrane induces local aggregation of AChRs in differentiated C2C12 myotubes.
A novel cost-effective parallel narrowband ANC system with local secondary-path estimation
NASA Astrophysics Data System (ADS)
Delegà, Riccardo; Bernasconi, Giancarlo; Piroddi, Luigi
2017-08-01
Many noise reduction applications are targeted at multi-tonal disturbances. Active noise control (ANC) solutions for such problems are generally based on the combination of multiple adaptive notch filters. Both the performance and the computational cost are negatively affected by an increase in the number of controlled frequencies. In this work we study a different modeling approach for the secondary path, based on the estimation of various small local models in adjacent frequency subbands, that greatly reduces the impact of reference-filtering operations in the ANC algorithm. Furthermore, in combination with a frequency-specific step size tuning method it provides a balanced attenuation performance over the whole controlled frequency range (and particularly in the high end of the range). Finally, the use of small local models is greatly beneficial for the reactivity of the online secondary path modeling algorithm when the characteristics of the acoustic channels are time-varying. Several simulations are provided to illustrate the positive features of the proposed method compared to other well-known techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perry, William L; Gunderson, Jake A; Dickson, Peter M
There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied, and contemporary kinetic and transport models accurately predict time and location of ignition for simple geometries. However, there has been relatively little attention given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 1960s and 1970s. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook off ignition. In the 1970s, Catalano et al. noted that single-step kinetics would not accurately predict time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 1980s, Tarver and McGuire published their well-known three-step kinetic expression that included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition time prediction for the ODTX. However, the Tarver/McGuire model could not produce the internal temperature profiles observed in the small-scale radial experiments, nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior, and better models were needed. Brill et al. noted that the enthalpy change due to the beta-delta crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model.
Henson et al. deduced the kinetics and thermodynamics of the phase transition, providing Dickson et al. with the information necessary to develop a four-step model that included a two-step nucleation and growth mechanism for the beta-delta phase transition. Initially, an irreversible scheme was proposed. That model accurately predicted the spatial and temporal cook off behavior of the small-scale radial experiment under slow heating conditions, but did not accurately capture the endothermic phase transition at a faster heating rate. The current version of the four-step model includes reversibility and accurately describes the small-scale radial experiment over a wide range of heating rates. We have observed impact-induced friction ignition of PBX 9501 with grit embedded between the explosive and the lower anvil surface. Observation was done using an infrared camera looking through the sapphire bottom anvil. Time to ignition and temperature-time behavior were recorded. The time to ignition was approximately 500 microseconds and the temperature was approximately 1000 K. The four-step reversible kinetic scheme was previously validated for slow cook off scenarios. Our intention was to test its validity for significantly faster hot-spot processes, such as the impact-induced grit friction process studied here. We found the model predicted the ignition time within experimental error. There are caveats to consider when evaluating the agreement. The primary input to the model was friction work over an area computed by a stress analysis. The work rate itself, and the relative velocity of the grit and substrate, both have a strong dependence on the initial position of the grit. Any errors in the analysis or the initial grit position would affect the model results. At this time, we do not know the sensitivity to these issues. However, the good agreement does suggest the four-step kinetic scheme may have universal applicability for HMX systems.
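The qualitative value of multi-step kinetics can be seen even in a toy sequential scheme A → B → C with Arrhenius rate constants: unlike a single-step zero-order model, it produces a finite induction time before the intermediate peaks. The parameters below are placeholders for illustration, not the Tarver/McGuire or Henson values:

```python
import numpy as np

def induction_time(T, A=(1.0e8, 1.0e10), Ea=(8.0e4, 1.2e5)):
    """Time at which the intermediate B peaks in the sequential
    first-order scheme A -> B -> C with Arrhenius rate constants
    k_i = A_i * exp(-Ea_i / (R * T)). For k1 != k2, the intermediate
    follows B(t) ~ exp(-k1*t) - exp(-k2*t), which peaks at
    ln(k2/k1) / (k2 - k1). A in 1/s, Ea in J/mol, T in K; the values
    are illustrative assumptions, not fitted HMX kinetics."""
    R = 8.314
    k1 = A[0] * np.exp(-Ea[0] / (R * T))
    k2 = A[1] * np.exp(-Ea[1] / (R * T))
    return float(np.log(k2 / k1) / (k2 - k1))
```

The induction time shortens steeply with temperature, mirroring the strong heating-rate dependence that multi-step schemes capture and single-step expressions miss.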
Rapid kinetics of endocytosis at rod photoreceptor synapses depends upon endocytic load and calcium.
Cork, Karlene M; Thoreson, Wallace B
2014-05-01
Release from rods is triggered by the opening of L-type Ca2+ channels that lie beneath synaptic ribbons. After exocytosis, vesicles are retrieved by compensatory endocytosis. Previous work showed that endocytosis is dynamin-dependent in rods but dynamin-independent in cones. We hypothesized that fast endocytosis in rods may also differ from cones in its dependence upon the amount of Ca2+ influx and/or endocytic load. We measured exocytosis and endocytosis from membrane capacitance (Cm) changes evoked by depolarizing steps in voltage-clamped rods from tiger salamander retinal slices. Similar to cones, the time constant for endocytosis in rods was quite fast, averaging <200 ms. We manipulated Ca2+ influx and the amount of vesicle release by altering the duration and voltage of depolarizing steps. Unlike cones, endocytosis kinetics in rods slowed after increasing Ca2+ channel activation with longer step durations or more strongly depolarized voltage steps. Endocytosis kinetics also slowed as Ca2+ buffering was decreased by replacing BAPTA (10 or 1 mM) with the slower Ca2+ buffer EGTA (5 or 0.5 mM) in the pipette solution. These data provide further evidence that endocytosis mechanisms differ in rods and cones and suggest that endocytosis in rods is regulated by both endocytic load and local Ca2+ levels.
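A time constant like the <200 ms value above is typically extracted by fitting a single exponential to the post-step capacitance recovery. A minimal log-linear fitting sketch (an assumed analysis for illustration, not the authors' exact procedure):

```python
import numpy as np

def fit_time_constant(t, cm, baseline):
    """Estimate an endocytosis time constant by a log-linear least-squares
    fit of a single exponential decay, Cm(t) = baseline + A * exp(-t/tau),
    to the post-step capacitance recovery. Assumes the decaying component
    cm - baseline stays positive over the fitted window."""
    y = np.asarray(cm) - baseline           # isolate the decaying component
    coeffs = np.polyfit(np.asarray(t), np.log(y), 1)  # log(y) = log(A) - t/tau
    return -1.0 / coeffs[0]                 # tau, in the units of t
```

A weighted or nonlinear fit would be preferable on noisy recordings, but the log-linear version shows the idea in two lines.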
NASA Astrophysics Data System (ADS)
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of the aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while a more uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of the aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise the surface roughness and thickness uniformity in incremental sheet forming.
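Taguchi analysis ranks parameter levels by a signal-to-noise ratio; for responses to be minimized, such as surface roughness, the standard smaller-is-better form is S/N = -10 · log10(mean(y²)). A one-line sketch of that statistic (the example values are illustrative, not the project's measurements):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better signal-to-noise ratio,
    S/N = -10 * log10(mean(y^2)), in dB. Larger S/N means a smaller,
    more consistent response, so parameter levels with the highest
    mean S/N are chosen."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))
```

Computing this per experimental run and averaging it per factor level reproduces the usual Taguchi ranking table.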
Dynamic Black-Level Correction and Artifact Flagging for Kepler Pixel Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Clarke, B. D.; Caldwell, D. A.
2011-01-01
Methods applied to the calibration stage of Kepler pipeline data processing [1] (CAL) do not currently use all of the information available to identify and correct several instrument-induced artifacts. These include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré patterns as locally correlated nonstationary noise, and rolling bands in the images, which find their way into the time series [2], [3]. As the Kepler Mission continues to improve the fidelity of its science data products, we are evaluating the benefits of adding pipeline steps to more completely model and dynamically correct the FGS crosstalk, and then use the residuals from these model fits to detect and flag spatial regions and time intervals of strong time-varying black level that may complicate later processing or lead to misinterpretation of instrument behavior as stellar activity.
Discrete transparent boundary conditions for the mixed KDV-BBM equation
NASA Astrophysics Data System (ADS)
Besse, Christophe; Noble, Pascal; Sanchez, David
2017-09-01
In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KdV) and Benjamin-Bona-Mahony (BBM) equation, which models water waves in the small-amplitude, large-wavelength regime. Continuous (respectively, discrete) artificial boundary conditions involve operators that are non-local in time, which in turn requires computing time convolutions and inverting the Laplace transform of an analytic function (respectively, the Z-transform of a holomorphic function). In this paper, we propose a new, stable and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large-time simulations, we also introduce a methodology based on the asymptotic expansion of the coefficients involved in exact direct transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave-packet initial data.
NASA Astrophysics Data System (ADS)
Igumnov, Leonid; Ipatov, Aleksandr; Belov, Aleksandr; Petrov, Andrey
2015-09-01
The report presents the development of the time-boundary-element methodology and a description of the related software, based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods, for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poroelastic bodies. The results of the numerical investigation are presented. The methodology is based on direct-approach boundary integral equations of the 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained using the stepped method of numerically inverting the Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method and local element-by-element approximation based on the matched interpolation model. Results are presented for wave problems involving the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (including one with a fictitious boundary), a layered half-space weakened by a cavity, and a half-space with a trench. Excitation of a slow wave in a poroelastic medium is studied using the stepped BEM scheme on the nodes of Runge-Kutta methods.
Position-dependent effects of polylysine on Sec protein transport.
Liang, Fu-Cheng; Bageshwar, Umesh K; Musser, Siegfried M
2012-04-13
The bacterial Sec protein translocation system catalyzes the transport of unfolded precursor proteins across the cytoplasmic membrane. Using a recently developed real time fluorescence-based transport assay, the effects of the number and distribution of positive charges on the transport time and transport efficiency of proOmpA were examined. As expected, an increase in the number of lysine residues generally increased transport time and decreased transport efficiency. However, the observed effects were highly dependent on the polylysine position in the mature domain. In addition, a string of consecutive positive charges generally had a more significant effect on transport time and efficiency than separating the charges into two or more charged segments. Thirty positive charges distributed throughout the mature domain resulted in effects similar to 10 consecutive charges near the N terminus of the mature domain. These data support a model in which the local effects of positive charge on the translocation kinetics dominate over total thermodynamic constraints. The rapid translocation kinetics of some highly charged proOmpA mutants suggest that the charge is partially shielded from the electric field gradient during transport, possibly by the co-migration of counter ions. The transport times of precursors with multiple positively charged sequences, or "pause sites," were fairly well predicted by a local effect model. However, the kinetic profile predicted by this local effect model was not observed. Instead, the transport kinetics observed for precursors with multiple polylysine segments support a model in which translocation through the SecYEG pore is not the rate-limiting step of transport.
Recognizing characters of ancient manuscripts
NASA Astrophysics Data System (ADS)
Diem, Markus; Sablatnig, Robert
2010-02-01
For printed Latin text, the main issues of Optical Character Recognition (OCR) systems are solved. However, for degraded handwritten document images, basic preprocessing steps such as binarization give poor results with state-of-the-art methods. In this paper, ancient Slavonic manuscripts from the 11th century are investigated. In order to minimize the consequences of false character segmentation, a binarization-free approach based on local descriptors is proposed. Additionally, local information allows the recognition of partially visible or washed-out characters. The proposed algorithm consists of two steps: character classification and character localization. Initially, Scale Invariant Feature Transform (SIFT) features are extracted and subsequently classified using Support Vector Machines (SVMs). Afterwards, the interest points are clustered according to their spatial information. Thereby, characters are localized and finally recognized based on a weighted voting scheme over the pre-classified local descriptors. Preliminary results show that the proposed system can handle highly degraded manuscript images with background clutter (e.g., stains, tears) and faded-out characters.
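The final recognition step, a weighted vote over the pre-classified local descriptors within one spatial cluster, can be sketched generically; the labels and confidence weights here stand in for the SVM outputs and are illustrative:

```python
from collections import defaultdict

def vote_character(descriptors):
    """Weighted voting over pre-classified local descriptors belonging to
    one spatial cluster: each descriptor contributes its classifier
    confidence to its predicted label, and the label with the largest
    total wins. descriptors: iterable of (label, confidence) pairs."""
    totals = defaultdict(float)
    for label, confidence in descriptors:
        totals[label] += confidence
    return max(totals, key=totals.get)
```

This is why a few confident descriptors can outvote many uncertain ones, which helps with partially visible characters.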
Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C
2012-01-01
The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
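The execution model described, dependency-driven runs with a restart that skips completed stages, can be sketched in a few lines. This is a minimal sequential sketch in Python rather than PSOM's Octave/Matlab engine (PSOM additionally parallelizes independent jobs), and the job names are hypothetical:

```python
def run_pipeline(jobs, done=None):
    """Minimal sketch of a PSOM-style engine: 'jobs' maps a job name to a
    (dependencies, callable) pair. Each job runs once all its dependencies
    have run; names already in 'done' are skipped, mimicking the restart
    behaviour where only stale parts of the pipeline are reprocessed.
    Returns the list of jobs executed, in order."""
    done = set(done or ())
    order = []
    def run(name):
        if name in done:
            return                       # already up to date: skip
        for dep in jobs[name][0]:
            run(dep)                     # satisfy dependencies first
        jobs[name][1]()                  # execute the job's command
        done.add(name)
        order.append(name)
    for name in jobs:
        run(name)
    return order
```

Passing the set of completed jobs on a second invocation reruns only the remaining stages, which is the essence of service (3) above.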
Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina
2017-05-01
Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into forward step training (FT), lateral plus forward step training (FLT), or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.
A data variance technique for automated despiking of magnetotelluric data with a remote reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kappler, K.
2011-02-15
The magnetotelluric method employs co-located surface measurements of electric and magnetic fields to infer the local electrical structure of the earth. The frequency-dependent 'apparent resistivity' curves can be inaccurate at long periods if input data are contaminated - even when robust remote reference techniques are employed. Data despiking prior to processing can result in significantly more reliable estimates of long period apparent resistivities. This paper outlines a two-step method of automatic identification and replacement for spike-like contamination of magnetotelluric data; based on the simultaneity of natural electric and magnetic field variations at distant sites. This simultaneity is exploited both to identify windows in time when the array data are compromised, and to generate synthetic data that replace observed transient noise spikes. In the first step, windows in data time series containing spikes are identified via intersite comparison of channel 'activity' - such as the variance of differenced data within each window. In the second step, plausible data for replacement of flagged windows is calculated by Wiener filtering coincident data in clean channels. The Wiener filters - which express the time-domain relationship between various array channels - are computed using an uncontaminated segment of array training data. Examples are shown where the algorithm is applied to artificially contaminated data, and to real field data. In both cases all spikes are successfully identified. In the case of implanted artificial noise, the synthetic replacement time series are very similar to the original recording. In all cases, apparent resistivity and phase curves obtained by processing the despiked data are much improved over curves obtained from raw data.
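The two-step scheme can be illustrated on synthetic data. This is a simplified sketch, not the paper's implementation: the sine-wave "fields", window length, variance-ratio threshold, and the least-squares FIR filter (standing in for the Wiener filter computed from training data) are all assumptions.

```python
import numpy as np

def flag_spike_windows(x, ref, win=100, thresh=5.0):
    """Step 1: flag windows whose differenced-data variance is anomalously
    large relative to the same window on the remote reference channel."""
    flags = []
    for k in range(len(x) // win):
        s = slice(k * win, (k + 1) * win)
        ratio = np.var(np.diff(x[s])) / (np.var(np.diff(ref[s])) + 1e-12)
        if ratio > thresh:
            flags.append(k)
    return flags

def fit_fir(ref, x, ntaps=5):
    """Train a short least-squares FIR filter (a stand-in for the Wiener
    filter) mapping the clean remote channel onto the local channel."""
    X = np.column_stack([np.roll(ref, i) for i in range(ntaps)])[ntaps:]
    return np.linalg.lstsq(X, x[ntaps:], rcond=None)[0]

rng = np.random.default_rng(0)
t = np.arange(1000)
ref = np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(1000)
x = 2.0 * ref                     # local channel: simultaneous natural variations
x[250] += 50.0                    # implant an artificial spike
bad = flag_spike_windows(x, ref)
print(bad)                        # → [2]
h = fit_fir(ref[:200], x[:200])   # train on an uncontaminated segment
for k in bad:                     # step 2: synthesize replacement data
    s = slice(k * 100, (k + 1) * 100)
    Xr = np.column_stack([np.roll(ref, i) for i in range(len(h))])
    x[s] = Xr[s] @ h
```

The flagged window is exactly the one containing the implanted spike, and its replacement is synthesized entirely from the clean remote channel.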
On the wing behaviour of the overtones of self-localized modes
NASA Astrophysics Data System (ADS)
Dusi, R.; Wagner, M.
1998-08-01
In this paper the solutions for self-localized modes in a nonlinear chain are investigated. We present a converging iteration procedure, which is based on analytical information about the wings and which takes into account higher overtones of the solitonic oscillations. The accuracy is controlled in a step-by-step manner by means of a Gaussian error analysis. Our numerical procedure allows for highly accurate solutions in all anharmonicity regimes and beyond the rotating-wave approximation (RWA). It is found that the overtone wings change their analytical behaviour at certain critical values of the energy of the self-localized mode: there is a turnover in the exponent of descent. The results are shown for a Fermi-Pasta-Ulam (FPU) chain with quartic anharmonicity.
Stability of spanwise-modulated flows behind backward-facing steps
NASA Astrophysics Data System (ADS)
Boiko, A. V.; Dovgal, A. V.; Sorokin, A. M.
2017-10-01
An overview and synthesis is given of research on the development of local vortical disturbances in laminar separated flows downstream of backward-facing steps, in which the velocity field depends essentially on two variables. Peculiarities of transition to turbulence in such spatially inhomogeneous separated zones are discussed. The experimental data are supplemented by the linear stability characteristics of model velocity profiles of the separated flow, computed using both the classical local formulation and the nonlocal approach based on the Floquet theory for partial differential equations with periodic coefficients. The results clarify the response of the local separated flows to their modulation with stationary geometrical and temperature inhomogeneities. The results can be useful for the development of new methods of laminar separation control.
A distributed model predictive control scheme for leader-follower multi-agent systems
NASA Astrophysics Data System (ADS)
Franzè, Giuseppe; Lucia, Walter; Tedesco, Francesco
2018-02-01
In this paper, we present a novel receding horizon control scheme for solving the formation problem of leader-follower configurations. The algorithm is based on set-theoretic ideas and is tuned for agents described by linear time-invariant (LTI) systems subject to input and state constraints. The novelty of the proposed framework lies in the capability to jointly use sequences of one-step controllable sets and polyhedral piecewise state-space partitions in order to apply online the 'better' control action in a distributed receding horizon fashion. Moreover, we prove that the design of both robust positively invariant sets and one-step-ahead controllable regions is achieved in a distributed sense. Simulations and numerical comparisons with respect to centralised and local-based strategies are finally performed on a group of mobile robots to demonstrate the effectiveness of the proposed control strategy.
NASA Astrophysics Data System (ADS)
Rowland, David J.; Biteen, Julie S.
2017-04-01
Single-molecule super-resolution imaging and tracking can measure molecular motions inside living cells on the scale of the molecules themselves. Diffusion in biological systems commonly exhibits multiple modes of motion, which can be effectively quantified by fitting the cumulative probability distribution of the squared step sizes in a two-step fitting process. Here we combine this two-step fit into a single least-squares minimization; this new method vastly reduces the total number of fitting parameters and increases the precision with which diffusion may be measured. We demonstrate this Global Fit approach on a simulated two-component system as well as on a mixture of diffusing 80 nm and 200 nm gold spheres to show improvements in fitting robustness and localization precision compared to the traditional Local Fit algorithm.
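The idea of fitting all diffusive components in a single least-squares minimization can be sketched as follows. This is an illustrative toy, not the authors' Global Fit: the two-component CPD model is standard, but the simulated diffusion coefficients, mixture fraction, and the brute-force grid search (in place of a proper optimizer) are assumptions.

```python
import numpy as np

def cpd(r2, a, d1, d2, dt):
    """Two-component cumulative probability distribution of squared step sizes."""
    return 1 - a * np.exp(-r2 / (4 * d1 * dt)) - (1 - a) * np.exp(-r2 / (4 * d2 * dt))

def global_fit(r2, dt):
    """Fit the fraction and both diffusion coefficients in one least-squares
    minimization (a crude grid search stands in for a real optimizer)."""
    r2s = np.sort(r2)
    ecdf = np.arange(1, len(r2s) + 1) / len(r2s)   # empirical CPD
    best = (np.inf, None)
    for a in np.linspace(0.1, 0.9, 17):
        for d1 in np.linspace(0.05, 0.5, 10):
            for d2 in np.linspace(0.5, 5.0, 10):
                err = np.sum((cpd(r2s, a, d1, d2, dt) - ecdf) ** 2)
                if err < best[0]:
                    best = (err, (a, d1, d2))
    return best[1]

# simulate a 50/50 mixture of slow (D = 0.1) and fast (D = 2.0) 2D diffusers
rng = np.random.default_rng(1)
dt = 0.1
def sq_steps(d, n):
    return np.sum(rng.normal(0.0, np.sqrt(2 * d * dt), (n, 2)) ** 2, axis=1)
r2 = np.concatenate([sq_steps(0.1, 5000), sq_steps(2.0, 5000)])
a, d1, d2 = global_fit(r2, dt)
print(a, d1, d2)
```

With this many steps the recovered parameters land close to the simulated values (a ≈ 0.5, D1 ≈ 0.1, D2 ≈ 2.0); fitting all parameters jointly is what avoids the error propagation of a two-step fit.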
NASA Astrophysics Data System (ADS)
Forsythe, N. D.; Fowler, H. J.
2017-12-01
The "Climate-smart agriculture implementation through community-focused pursuit of land and water productivity in South Asia" (CSAICLAWPS) project is a research initiative funded by the (UK) Royal Society through its Challenge Grants programme, part of the broader UK Global Challenges Research Fund (GCRF). CSAICLAWPS has three objectives: a) development of "added-value" (bias-assessed, statistically downscaled) climate projections for selected case study sites across South Asia; b) investigation of crop failure modes under both present (observed) and future (projected) conditions; and c) facilitation of local adaptive capacity and resilience through stakeholder engagement. At AGU we will be presenting both next steps and progress to date toward these three objectives: [A] We have carried out bias assessments of a substantial multi-model RCM ensemble (MME) from the CORDEX South Asia (CORDEX-SA) domain for case studies in three countries - Pakistan, India and Sri Lanka - and (stochastically) produced synthetic time-series for these sites from local observations using a Python-based implementation of the principles underlying the Climate Research Unit Weather Generator (CRU-WG), in order to enable probabilistic simulation of current crop yields. [B] We have characterised the present response of local crop yields to climate variability in key case study sites using AquaCrop simulations parameterised with input (agronomic practices, soil conditions, etc.) from smallholder farmers. [C] We have implemented community-based hydro-climatological monitoring in several case study "revenue villages" (panchayats) in the Nainital District of Uttarakhand. The purpose is not only to increase the availability of meteorological data, but also, over time, to enhance quantitative awareness of present climate variability and potential future conditions (as projected by RCMs).
Next steps in our work will include: 1) future crop yield simulations driven by "perturbation" of synthetic time-series using "change factors" from the CORDEX-SA MME; 2) stakeholder dialogues critically evaluating potential strategies at the grassroots (implementation) level to mitigate impacts of climate variability and change on crop yields.
A guide for local agency pavement managers
DOT National Transportation Integrated Search
1994-12-01
The purpose of this guide is to provide Washington's local agencies with a practical document that will assist pavement managers in understanding the pavement management process and the steps necessary to implement their own pavement management syste...
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
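The local move that grows the path, a small steepest-descent step, can be demonstrated on the Mueller-Brown potential used in the paper's numerical examples. This sketch shows only plain gradient descent into a minimum, not the full local-global action optimization; the starting point and step size are assumptions.

```python
import numpy as np

# Mueller-Brown potential (standard published parameters)
A  = [-200.0, -100.0, -170.0, 15.0]
aa = [-1.0, -1.0, -6.5, 0.7]
bb = [0.0, 0.0, 11.0, 0.6]
cc = [-10.0, -10.0, -6.5, 0.7]
X0 = [1.0, 0.0, -0.5, -1.0]
Y0 = [0.0, 0.5, 1.5, 1.0]

def V(p):
    x, y = p
    return sum(A[k] * np.exp(aa[k] * (x - X0[k]) ** 2
                             + bb[k] * (x - X0[k]) * (y - Y0[k])
                             + cc[k] * (y - Y0[k]) ** 2) for k in range(4))

def grad(p, h=1e-6):
    """Central finite-difference gradient, keeping the sketch self-contained."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (V(p + e) - V(p - e)) / (2 * h)
    return g

def descend(p, step=1e-4, iters=10000):
    """The local move: many small steepest-descent steps."""
    p = np.asarray(p, dtype=float)
    for _ in range(iters):
        p = p - step * grad(p)
    return p

m = descend([0.5, 0.0])
print(np.round(m, 2))  # the Mueller-Brown minimum near (0.62, 0.03)
```

Starting from (0.5, 0.0), the descent converges to the well-known Mueller-Brown minimum near (0.623, 0.028); the path algorithm described above strings such small local moves together between known reactants and products.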
Intelligent visual localization of wireless capsule endoscopes enhanced by color information.
Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2017-10-01
Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling the visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit time estimation techniques, with practically low localization accuracy. Latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent, artificial neural network, architecture which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike the conventional, geometric, VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge of its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Strategy Guideline. Proper Water Heater Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoeschele, M.; Springer, D.; German, A.
2015-04-09
This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.
Strategy Guideline: Proper Water Heater Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoeschele, M.; Springer, D.; German, A.
2015-04-01
This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.
The Tunneling Method for Global Optimization in Multidimensional Scaling.
ERIC Educational Resources Information Center
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
ERIC Educational Resources Information Center
Busch, Phyllis S.
1985-01-01
Provides directions for basic science experiments which demonstrate the rain cycle, fundamentals of cloud formation, and testing for the presence of acidity in local rainwater. Describes materials required, step-by-step instructions, and discussion topics. (NEC)
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. A high efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
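A Locally One-Dimensional timestep reduces to a sequence of independent tridiagonal solves, one sweep per coordinate direction. The sketch below shows the idea for implicit diffusion on a small 2D grid; the Thomas solver is standard, but the grid size, diffusion number, and zero Dirichlet boundaries are assumptions (the production code described above is heavily optimized for memory bandwidth, which this sketch ignores).

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with sub-diagonal a,
    main diagonal b, super-diagonal c, and right-hand side d."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def lod_step(u, alpha):
    """One Locally One-Dimensional timestep: an implicit 1D diffusion solve
    swept along each axis in turn (zero Dirichlet boundaries)."""
    for axis in range(u.ndim):
        u = np.moveaxis(u, axis, 0).copy()   # copy -> reshape below is a view
        n = u.shape[0]
        a = np.full(n, -alpha); b = np.full(n, 1 + 2 * alpha); c = np.full(n, -alpha)
        a[0] = c[-1] = 0.0
        flat = u.reshape(n, -1)
        for j in range(flat.shape[1]):       # independent 1D solves
            flat[:, j] = thomas(a, b, c, flat[:, j])
        u = np.moveaxis(u, 0, axis)
    return u

u0 = np.zeros((32, 32))
u0[16, 16] = 1.0                             # point heat source
u1 = lod_step(u0, alpha=0.2)
print(round(float(u1.sum()), 6))             # mass conserved away from the walls
```

Because every sweep touches each unknown only once through short independent 1D solves, the memory traffic per timestep is close to the cost of simply copying the data, which is the bandwidth bottleneck the abstract highlights.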
Knowledge-based control for robot self-localization
NASA Technical Reports Server (NTRS)
Bennett, Bonnie Kathleen Holte
1993-01-01
Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.
Mass spectrometry imaging: Towards a lipid microscope?
Touboul, David; Brunelle, Alain; Laprévote, Olivier
2011-01-01
Biological imaging techniques are the most efficient way to locally measure the variation of different parameters on tissue sections. These analyses have attracted increasing interest over the past 20 years and allow extremely complex biological phenomena to be observed at ever smaller time and spatial scales. Nevertheless, most of them target only a few compounds of interest, chosen a priori, due to their low resolving power and sensitivity. New chemical imaging techniques have to be introduced to overcome these limitations, leading to more informative and sensitive analyses for biologists and physicians. Two major mass spectrometry methods can be efficiently used to map the distribution of biological compounds over a tissue section. Matrix-Assisted Laser Desorption/Ionisation-Mass Spectrometry (MALDI-MS) requires co-crystallization of the sample with a matrix before irradiation by a laser, whereas in Secondary Ion Mass Spectrometry (SIMS) experiments the analyte is directly desorbed by a primary ion bombardment. In both cases, the energy used for desorption/ionization is locally deposited (over some tens of microns for the laser and some hundreds of nanometers for the ion beam), meaning that small areas of the sample surface can be analyzed separately. Step-by-step analysis allows spectra to be acquired over the tissue sections, and the data are processed by modern software to create ion density maps, i.e., the intensity plot of one specific ion versus the (x,y) position. The main advantages of SIMS and MALDI compared to other chemical imaging techniques lie in the simultaneous acquisition of a large number of biological compounds in mixture, with the excellent sensitivity provided by Time-of-Flight (ToF) mass analyzers. Moreover, data treatment is done a posteriori, since no compound is selectively labeled, giving access to the localization of different lipid classes in a single complete acquisition.
Copyright © 2010 Elsevier Masson SAS. All rights reserved.
Ultrafast optical technique for the characterization of altered materials
Maris, H.J.
1998-01-06
Disclosed herein is a method and a system for non-destructively examining a semiconductor sample having at least one localized region underlying a surface through which a selected chemical species has been implanted or diffused. A first step induces at least one transient time-varying change in optical constants of the sample at a location at or near to a surface of the sample. A second step measures a response of the sample to an optical probe beam, either pulsed or continuous wave, at least during a time that the optical constants are varying. A third step associates the measured response with at least one of chemical species concentration, chemical species type, implant energy, a presence or absence of an introduced chemical species region at the location, and a presence or absence of implant-related damage. The method and apparatus in accordance with this invention can be employed in conjunction with a measurement of one or more of the following effects arising from a time-dependent change in the optical constants of the sample due to the application of at least one pump pulse: (a) a change in reflected intensity; (b) a change in transmitted intensity; (c) a change in a polarization state of the reflected and/or transmitted light; (d) a change in the optical phase of the reflected and/or transmitted light; (e) a change in direction of the reflected and/or transmitted light; and (f) a change in optical path length between the sample's surface and a detector. 22 figs.
Solving the chemical master equation using sliding windows
2010-01-01
Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904
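The windowed analysis steps can be illustrated on a simple birth-death process (production at rate k, first-order degradation). This is a toy sketch, not the paper's method: explicit Euler integration, the window-padding rule, and all rate constants are assumptions; the actual algorithm constructs the window from deterministic bounds on the species populations.

```python
import numpy as np

k, g = 10.0, 1.0  # assumed birth rate and per-molecule degradation rate

def cme_step(p, lo, dt, nsteps):
    """Explicit-Euler integration of the CME restricted to the state window
    [lo, lo + len(p)); probability flowing past the window edges is dropped."""
    n = len(p)
    x = lo + np.arange(n)
    for _ in range(nsteps):
        dp = -(k + g * x) * p                 # total outflow from each state
        dp[1:] += k * p[:-1]                  # birth moves x-1 -> x
        dp[:-1] += g * x[1:] * p[1:]          # death moves x+1 -> x
        p = p + dt * dp
    return p

def slide(p, lo, pad=10, keep=1e-9):
    """Re-centre the window on the states that still carry probability mass."""
    idx = np.nonzero(p > keep)[0]
    start = max(lo + idx[0] - pad, 0)
    stop = lo + idx[-1] + 1 + pad
    new = np.zeros(stop - start)
    new[lo + idx[0] - start: lo + idx[-1] + 1 - start] = p[idx[0]: idx[-1] + 1]
    return new, start

p, lo = np.zeros(15), 0
p[0] = 1.0                                    # start with zero molecules
for _ in range(10):                           # ten windowed stages up to t = 5
    p = cme_step(p, lo, dt=0.005, nsteps=100)
    p, lo = slide(p, lo)
mean = float((lo + np.arange(len(p))) @ p) / float(p.sum())
print(round(mean, 2))  # transient mean k/g * (1 - e^-t) is about 9.93 at t = 5
```

Probability flowing past the window edges is simply dropped, so the retained mass (here above 0.99) measures the quality of the window choice as the window follows the moving distribution.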
Ultrafast optical technique for the characterization of altered materials
Maris, Humphrey J.
1998-01-01
Disclosed herein is a method and a system for non-destructively examining a semiconductor sample (30) having at least one localized region underlying a surface (30a) through which a selected chemical species has been implanted or diffused. A first step induces at least one transient time-varying change in optical constants of the sample at a location at or near to a surface of the sample. A second step measures a response of the sample to an optical probe beam, either pulsed or continuous wave, at least during a time that the optical constants are varying. A third step associates the measured response with at least one of chemical species concentration, chemical species type, implant energy, a presence or absence of an introduced chemical species region at the location, and a presence or absence of implant-related damage. The method and apparatus in accordance with this invention can be employed in conjunction with a measurement of one or more of the following effects arising from a time-dependent change in the optical constants of the sample due to the application of at least one pump pulse: (a) a change in reflected intensity; (b) a change in transmitted intensity; (c) a change in a polarization state of the reflected and/or transmitted light; (d) a change in the optical phase of the reflected and/or transmitted light; (e) a change in direction of the reflected and/or transmitted light; and (f) a change in optical path length between the sample's surface and a detector.
Modular GIS Framework for National Scale Hydrologic and Hydraulic Modeling Support
NASA Astrophysics Data System (ADS)
Djokic, D.; Noman, N.; Kopp, S.
2015-12-01
Geographic information systems (GIS) have been extensively used for pre- and post-processing of hydrologic and hydraulic models at multiple scales. An extensible GIS-based framework was developed for characterization of drainage systems (stream networks, catchments, floodplain characteristics) and model integration. The framework is implemented as a set of free, open source, Python tools and builds on core ArcGIS functionality and uses geoprocessing capabilities to ensure extensibility. Utilization of COTS GIS core capabilities allows immediate use of model results in a variety of existing online applications and integration with other data sources and applications. The poster presents the use of this framework to downscale global hydrologic models to local hydraulic scale and post process the hydraulic modeling results and generate floodplains at any local resolution. Flow forecasts from ECMWF or WRF-Hydro are downscaled and combined with other ancillary data for input into the RAPID flood routing model. RAPID model results (stream flow along each reach) are ingested into a GIS-based scale dependent stream network database for efficient flow utilization and visualization over space and time. Once the flows are known at localized reaches, the tools can be used to derive the floodplain depth and extent for each time step in the forecast at any available local resolution. If existing rating curves are available they can be used to relate the flow to the depth of flooding, or synthetic rating curves can be derived using the tools in the toolkit and some ancillary data/assumptions. The results can be published as time-enabled spatial services to be consumed by web applications that use floodplain information as an input. Some of the existing online presentation templates can be easily combined with available online demographic and infrastructure data to present the impact of the potential floods on the local community through simple, end user products.
This framework has been successfully used in both the data rich environments as well as in locales with minimum available spatial and hydrographic data.
Nonlinear time series analysis of normal and pathological human walking
NASA Astrophysics Data System (ADS)
Dingwell, Jonathan B.; Cusumano, Joseph P.
2000-12-01
Characterizing locomotor dynamics is essential for understanding the neuromuscular control of locomotion. In particular, quantifying dynamic stability during walking is important for assessing people who have a greater risk of falling. However, traditional biomechanical methods of defining stability have not quantified the resistance of the neuromuscular system to perturbations, suggesting that more precise definitions are required. For the present study, average maximum finite-time Lyapunov exponents were estimated to quantify the local dynamic stability of human walking kinematics. Local scaling exponents, defined as the local slopes of the correlation sum curves, were also calculated to quantify the local scaling structure of each embedded time series. Comparisons were made between overground and motorized treadmill walking in young healthy subjects and between diabetic neuropathic (NP) patients and healthy controls (CO) during overground walking. A modification of the method of surrogate data was developed to examine the stochastic nature of the fluctuations overlying the nominally periodic patterns in these data sets. Results demonstrated that having subjects walk on a motorized treadmill artificially stabilized their natural locomotor kinematics by small but statistically significant amounts. Furthermore, a paradox previously present in the biomechanical literature that resulted from mistakenly equating variability with dynamic stability was resolved. By slowing their self-selected walking speeds, NP patients adopted more locally stable gait patterns, even though they simultaneously exhibited greater kinematic variability than CO subjects. Additionally, the loss of peripheral sensation in NP patients was associated with statistically significant differences in the local scaling structure of their walking kinematics at those length scales where it was anticipated that sensory feedback would play the greatest role. 
Lastly, stride-to-stride fluctuations in the walking patterns of all three subject groups were clearly distinguishable from linearly autocorrelated Gaussian noise. As a collateral benefit of the methodological approach taken in this study, some of the first steps at characterizing the underlying structure of human locomotor dynamics have been taken. Implications for understanding the neuromuscular control of locomotion are discussed.
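The core quantity in this analysis, the average maximum finite-time Lyapunov exponent, can be estimated from a scalar time series by tracking how initially close trajectory segments diverge. The sketch below applies a Rosenstein-style estimator to logistic-map data rather than gait kinematics; the embedding parameters, neighbour-exclusion window, and fitting range are assumptions.

```python
import numpy as np

def largest_lyapunov(x, dim=2, tau=1, horizon=10, min_sep=10):
    """Rosenstein-style estimate: delay-embed the series, pair each point with
    its nearest neighbour (excluding near-in-time points), then fit the slope
    of the average log divergence of the pairs over `horizon` steps."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    usable = n - horizon
    div = np.zeros(horizon + 1)
    cnt = np.zeros(horizon + 1)
    for i in range(usable):
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - min_sep): i + min_sep + 1] = np.inf  # exclude own neighbourhood
        j = int(np.argmin(d))
        for t in range(horizon + 1):
            sep = np.linalg.norm(emb[i + t] - emb[j + t])
            if sep > 0:
                div[t] += np.log(sep)
                cnt[t] += 1
    curve = div / cnt
    steps = np.arange(horizon + 1)
    return np.polyfit(steps[:5], curve[:5], 1)[0]  # slope of the early, linear part

# fully chaotic logistic map as a stand-in chaotic time series
x, traj = 0.3, []
for _ in range(1600):
    x = 4.0 * x * (1.0 - x)
    traj.append(x)
lam = largest_lyapunov(np.array(traj[100:]))
print(round(lam, 2))  # theory for this map: ln 2 ~ 0.69
```

For the fully chaotic logistic map the estimate should fall near the known exponent ln 2 ≈ 0.693; larger positive values indicate weaker local dynamic stability, which is the property compared across subject groups above.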
Nonlinear Response of Layer Growth Dynamics in the Mixed Kinetics-Bulk-Transport Regime
NASA Technical Reports Server (NTRS)
Vekilov, Peter G.; Alexander, J. Iwan D.; Rosenberger, Franz
1996-01-01
In situ high-resolution interferometry on horizontal facets of the protein lysozyme reveals that the local growth rate R, vicinal slope p, and tangential (step) velocity v fluctuate by up to 80% of their average values. The time scale of these fluctuations, which occur under steady bulk transport conditions through the formation and decay of step bunches (macrosteps), is of the order of 10 min. The fluctuation amplitude of R increases with growth rate (supersaturation) and crystal size, while the amplitude of the v and p fluctuations changes relatively little. Based on a stability analysis for equidistant step trains in the mixed transport-interface-kinetics regime, we argue that the fluctuations originate from the coupling of bulk transport with nonlinear interface kinetics. Furthermore, step bunches moving across the interface in the direction of or opposite to the buoyancy-driven convective flow increase or decrease in height, respectively. This is in agreement with analytical treatments of the interaction of moving steps with solution flow. Major excursions in growth rate are associated with the formation of lattice defects (striations). We show that, in general, the system-dependent kinetic Peclet number, Pe_k, i.e., the relative weight of bulk transport and interface kinetics in the control of the growth process, governs the step bunching dynamics. Since Pe_k can be modified by either forced solution flow or suppression of buoyancy-driven convection under reduced gravity, this model provides a rationale for the choice of specific transport conditions to minimize the formation of compositional inhomogeneities under steady bulk nutrient crystallization conditions.
NASA Astrophysics Data System (ADS)
Weatherford, Charles; Gebremedhin, Daniel
2016-03-01
A new and efficient way of evolving a solution to an ordinary differential equation is presented. A finite element method is used in which we expand in a convenient local basis set of functions that enforces both function and first-derivative continuity across the boundaries of each element. We also implement an adaptive step-size choice for each element that is based on a Taylor series expansion. The method is applied to solve for the eigenpairs of the one-dimensional soft-Coulomb potential, and the hard-Coulomb limit is studied. The method is then used to compute a numerical solution of the Kohn-Sham differential equation within the local density approximation, and is applied to the helium atom. Supported by the National Nuclear Security Administration, the Nuclear Regulatory Commission, and the Defense Threat Reduction Agency.
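The idea of a Taylor-based adaptive step size can be sketched with a simple explicit Euler integrator: choose each step h so that the leading truncation term |y''| h^2/2 stays below a tolerance. This is a simplified stand-in for the per-element step control described above, with all tolerances and the test problem chosen for illustration:

```python
import numpy as np

def adaptive_euler(f, t0, y0, t_end, tol=1e-4, h_init=1e-2):
    """Integrate y' = f(t, y) with explicit Euler, picking each step h from
    a Taylor-series error estimate: |y''| h^2 / 2 <= tol."""
    t, y, h = t0, y0, h_init
    ts, ys = [t], [y]
    while t < t_end:
        # estimate y'' by a forward difference of f along the trajectory
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        ypp = (k2 - k1) / h
        if abs(ypp) > 1e-12:
            h = min(np.sqrt(2 * tol / abs(ypp)), t_end - t)
        else:
            h = min(2 * h, t_end - t)  # flat region: grow the step
        y = y + h * k1
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)
```

For y' = -y the curvature estimate shrinks as the solution decays, so the steps automatically grow along the trajectory.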
Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate
NASA Astrophysics Data System (ADS)
Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng
2005-01-01
In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE) which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies of a multi-component BEC. Numerical experience shows that the GSI converges much faster than JI and converges globally within 10-20 steps.
NASA Astrophysics Data System (ADS)
Poppe, Christian; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Wet compression moulding (WCM) provides large-scale production potential for continuously fiber-reinforced components as a promising alternative to resin transfer moulding (RTM). Lower cycle times are possible due to parallelization of the process steps of draping, infiltration and curing during moulding (viscous draping). Experimental and theoretical investigations indicate a strong mutual dependency between the physical mechanisms that occur during draping and mould filling (fluid-structure interaction). Thus, key process parameters, such as fiber orientation, fiber volume fraction, cavity pressure and the amount and viscosity of the resin, are physically coupled. To enable time- and cost-efficient product and process development throughout all design stages, accurate process simulation tools are desirable. Separate draping and mould-filling simulation models, as appropriate for the sequential RTM process, cannot be applied to the WCM process because of the physical couplings outlined above. Within this study, a two-dimensional Darcy-Propagation-Element (DPE-2D), based on a finite element formulation with additional control volumes (FE/CV), is presented, verified and applied to the forming simulation of a generic geometry, as a first step towards a fluid-structure-interaction model that accounts for simultaneous resin infiltration and draping. The model is implemented in the commercial FE solver Abaqus by means of several user subroutines considering simultaneous draping and 2D infiltration mechanisms. Darcy's equation is solved with respect to the local fiber orientation. Furthermore, the material model can access the local fluid domain properties to update the mechanical forming material parameters, which enables further investigations of the coupled physical mechanisms.
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented on the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level responses to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is established via a "top-down" fault-tree analysis and a "bottom-up" functional failure-mode-and-effects analysis. Via this process, the mitigation and recovery strategies for each Fault Containment Region determine the scope (width versus depth) of the FP architecture.
NASA Astrophysics Data System (ADS)
Gallego, C.; Costa, A.; Cuerva, A.
2010-09-01
Since wind energy currently can be neither scheduled nor stored at large scale, wind power forecasting is useful to minimize the impact of wind fluctuations. In particular, short-term forecasting (characterised by prediction horizons from minutes to a few days) is currently required by energy producers (in a daily electricity market context) and by TSOs (in order to keep the stability/balance of an electrical system). Within the short-term context, time-series-based models (i.e., statistical models) have shown better performance than NWP models for horizons up to a few hours. These models try to learn and replicate the dynamics shown by the time series of a certain variable. When considering the power output of wind farms, ramp events are usually observed, characterized by a large positive gradient in the time series (ramp-up) or a large negative one (ramp-down) during relatively short time periods (a few hours). Ramp events may have many different causes, generally involving several spatial scales, from the large scale (fronts, low-pressure systems) down to the local scale (wind turbine shut-down due to high wind speed, yaw misalignment due to fast changes of wind direction). Hence, the output power may show unexpected dynamics during ramp events depending on the underlying processes; consequently, traditional statistical models that consider only one dynamic for the whole power time series may be inappropriate. This work proposes a Regime-Switching (RS) model based on Artificial Neural Networks (ANNs). The RS-ANN model gathers as many ANNs as dynamics considered (called regimes); a certain ANN is selected to predict the output power depending on the current regime. The current regime is updated on-line based on a gradient criterion applied to the past two values of the output power. Three regimes are established concerning ramp events: ramp-up, ramp-down and no-ramp.
In order to assess the skill of the proposed RS-ANN model, a single-ANN model (without regime classification) is adopted as a reference. Both models are evaluated in terms of Improvement over Persistence on a Mean Square Error basis (IoP%) when predicting horizons from 1 to 5 time steps. The case of a wind farm located in the complex terrain of Alaiz (north of Spain) has been considered. Three years of available power output data with an hourly resolution have been employed: two years for training and validation of the model and the last year for assessing the accuracy. Results showed that the RS-ANN outperformed the single-ANN model for one-step-ahead forecasts: the overall IoP% was up to 8.66% for the RS-ANN model (depending on the gradient criterion selected to consider the ramp regime triggered) and 6.16% for the single-ANN. However, both models showed similar accuracy for larger horizons. A locally weighted evaluation during ramp events for one-step-ahead forecasts was also performed. It was found that the IoP% during ramp-up events increased from 17.60% (single-ANN) to 22.25% (RS-ANN), while during ramp-down events the improvement increased only from 18.55% to 19.55%. Three main conclusions are derived from this case study. First, it highlights the importance of statistical models capable of differentiating between the several regimes shown by the output power time series in order to improve forecasting during extreme events like ramps. Second, on-line regime classification based on available power output data did not seem to improve forecasts for horizons beyond one step ahead; taking into account other explanatory variables (local wind measurements, NWP outputs) could lead to a better understanding of ramp events, improving the regime assessment for further horizons as well. Third, the RS-ANN model only slightly outperformed the single-ANN during ramp-down events; if further research reinforces this finding, special attention should be devoted to understanding the underlying processes during ramp-down events.
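The on-line regime selection described above — a gradient criterion over the past two power values — can be sketched as follows. The threshold value is an assumption for illustration; the paper tunes its own criterion:

```python
import numpy as np

def classify_regime(p_prev2, p_prev1, threshold):
    """Assign the current regime from the gradient of the past two
    (normalized) power values."""
    grad = p_prev1 - p_prev2
    if grad > threshold:
        return "ramp-up"
    if grad < -threshold:
        return "ramp-down"
    return "no-ramp"

def regime_series(power, threshold=0.1):
    """Label each time step (from the third onward) with its regime;
    a regime-switching model would then dispatch to the matching ANN."""
    return [classify_regime(power[i - 2], power[i - 1], threshold)
            for i in range(2, len(power))]
```

In a full RS-ANN, the label returned here would select which network produces the next forecast.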
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
Locally adaptive methods for KDE-based random walk models of reactive transport in porous media
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2017-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has recently been proposed to simulate reactive transport in porous media. KDE provides an optimal estimate of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to efficiently split and merge when necessary, as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
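Two of the ingredients above — a cheap bandwidth in place of the costly optimal KDE, and the splitting of heavy particles — can be sketched as follows. The rule-of-thumb bandwidth and the fixed split threshold are simplified stand-ins for the adaptive scheme in the abstract:

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth: a cheap surrogate for the optimal KDE
    selector, which the authors note is too costly to run every step."""
    n = len(x)
    return 1.06 * np.std(x) * n ** (-1 / 5)

def kde_density(x_particles, x_grid, h=None):
    """Gaussian-kernel estimate of the particle density on a 1D grid."""
    if h is None:
        h = silverman_bandwidth(x_particles)
    u = (x_grid[:, None] - x_particles[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(x_particles) * h * np.sqrt(2 * np.pi))

def split_particles(x, w, w_max):
    """Branching step: split any particle whose mass exceeds w_max into
    two half-weight copies, conserving total mass (an illustrative rule)."""
    xs, ws = [], []
    for xi, wi in zip(x, w):
        if wi > w_max:
            xs += [xi, xi]
            ws += [wi / 2, wi / 2]
        else:
            xs.append(xi)
            ws.append(wi)
    return np.array(xs), np.array(ws)
```

Splitting keeps the particle density from thinning out by dilution, which is what lets the relative error stay bounded over time.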
Singh, Ajay V; Gollner, Michael J
2016-06-01
Modeling the realistic burning behavior of condensed-phase fuels has remained out of reach, in part because of an inability to resolve the complex interactions occurring at the interface between gas-phase flames and condensed-phase fuels. The current research provides a technique to explore the dynamic relationship between a combustible condensed fuel surface and gas-phase flames in laminar boundary layers. Experiments have previously been conducted in both forced and free convective environments over both solid and liquid fuels. A unique methodology, based on the Reynolds Analogy, was used to estimate local mass burning rates and flame heat fluxes for these laminar boundary layer diffusion flames utilizing local temperature gradients at the fuel surface. Local mass burning rates and convective and radiative heat feedback from the flames were measured in both the pyrolysis and plume regions by using temperature gradients mapped near the wall by a two-axis traverse system. These experiments are time-consuming and can be challenging to design as the condensed fuel surface burns steadily for only a limited period of time following ignition. The temperature profiles near the fuel surface need to be mapped during steady burning of a condensed fuel surface at a very high spatial resolution in order to capture reasonable estimates of local temperature gradients. Careful corrections for radiative heat losses from the thermocouples are also essential for accurate measurements. For these reasons, the whole experimental setup needs to be automated with a computer-controlled traverse mechanism, eliminating most errors due to positioning of a micro-thermocouple. An outline of steps to reproducibly capture near-wall temperature gradients and use them to assess local burning rates and heat fluxes is provided.
Singh, Ajay V.; Gollner, Michael J.
2016-01-01
PMID: 27285827
ERIC Educational Resources Information Center
Goldman, Charles I.
The manual is part of a series to assist in planning procedures for local and State vocational agencies. It details steps required to process a local education agency's data after the data have been coded onto keypunch forms. Program, course, and overhead data are input into a computer data base and error checks are performed. A computer model is…
Forward the Foundation: Local Education Foundations Offer an Alternative Source for School Funding
ERIC Educational Resources Information Center
Brooks-Young, Susan
2007-01-01
February's column "Going Corporate" discussed ideas for approaching private foundations for funding. Some districts take this idea several steps further by partnering with the community and local businesses to establish a not-for-profit foundation, or local education foundation (LEF). It probably comes as no surprise that the idea of forming a LEF…
Investing in Our Future: A Handbook for Teaching Local Government.
ERIC Educational Resources Information Center
Bjornland, Lydia D.
This resource book for local government officials, teachers, and civic leaders is designed to aid in the production of materials and the establishment of programs to educate young people about the role local government plays in their lives. Practical guidelines outline the steps that need to be taken to initiate a successful program, including…
The Relaxation of Vicinal (001) with ZigZag [110] Steps
NASA Astrophysics Data System (ADS)
Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.
2012-02-01
This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
Efforts to Overcome Difficulties in a Higher Education Meteorology Department Institution
NASA Astrophysics Data System (ADS)
Mota, G. V.; Souza, J. R.; Ribeiro, J. B.; Souza, E. B.; Gomes, N. V.; Oliveira, R. A.; Ameida, W. G.; Chagas, G. O.; Yoksas, T.; Spangler, T.; Cutrim, E.
2007-05-01
The development of cyberinfrastructure in higher education meteorology departments has become a key requirement to better qualify their students and develop scientific research. The authors present their efforts to overcome low budget, lack of personnel, and other difficulties in the Department of Meteorology, Universidade Federal do Pará (UFPA), to participate in international collaborations for sharing hydro-meteorological data, tools and technological systems. Some important steps towards a consolidated integration of the group with the international partnership are discussed, and three are highlighted: (a) the resources from the Unidata's Equipment Award (supported by the National Science Foundation - NSF) and equipment donated in cooperation with the COMET and Meteoforum projects; (b) the interaction of the local team making its project resources available to the community; and (c) the involvement of students with the programs and the cyberinfrastructure available locally. Some positive results can be observed, such as the ability for students of Synoptic Meteorology II class to not only see static meteorological fields on the web, but actually build themselves regional and real-time synoptic products from the data received through Unidata's Internet Data Distribution (IDD) system. Moreover, the UFPA's group intends to improve its infrastructure to expand the access of real-time data and products to other members of the local meteorological community.
Interactive Dose Shaping - efficient strategies for CPU-based real-time treatment planning
NASA Astrophysics Data System (ADS)
Ziegenhein, P.; Kamerling, C. P.; Oelfke, U.
2014-03-01
Conventional intensity-modulated radiation therapy (IMRT) treatment planning is based on the traditional concept of iterative optimization using an objective function specified by dose-volume histogram constraints for pre-segmented VOIs. This indirect approach suffers from unavoidable shortcomings: (i) the control of local dose features is limited to segmented VOIs; (ii) any objective function is a mathematical measure of plan quality, i.e., it is not able to define the clinically optimal treatment plan; (iii) adapting an existing plan to changed patient anatomy as detected by IGRT procedures is difficult. To overcome these shortcomings, we introduce the method of Interactive Dose Shaping (IDS) as a new paradigm for IMRT treatment planning. IDS allows for a direct and interactive manipulation of local dose features in real time. The key element driving the IDS process is a two-step Dose Modification and Recovery (DMR) strategy: a local dose modification is initiated by the user, which translates into modified fluence patterns. This also affects existing desired dose features elsewhere, which is compensated for by a heuristic recovery process. The IDS paradigm was implemented together with a CPU-based ultra-fast dose calculation and a 3D GUI for dose manipulation and visualization. A local dose feature can be implemented via the DMR strategy within 1-2 seconds. By imposing a series of local dose features, plan qualities equal to those of conventional planning could be achieved for prostate and head-and-neck cases within 1-2 minutes. The idea of Interactive Dose Shaping for treatment planning has been introduced and first applications of this concept have been realized.
Hager, B; Kraywinkel, K; Keck, B; Katalinic, A; Meyer, M; Zeissig, S R; Scheufele, R; Wirth, M P; Huber, J
2017-03-01
Current guidelines do not recommend a preferred treatment modality for locally advanced prostate cancer. The aim of the study was to compare treatment patterns found in the USA and Germany and to analyze possible trends over time. We compared 'Surveillance, Epidemiology, and End Results' (SEER) data (USA) with reports from four German federal epidemiological cancer registries (Eastern Germany, Bavaria, Rhineland-Palatinate, Schleswig-Holstein), both from 2004 to 2012. We defined locally advanced prostate cancer as clinical stage T3 or T4. Exclusion criteria were metastatic disease and age over 79 years. We identified 9,127 (USA) and 11,051 (Germany) patients with locally advanced prostate cancer. The share was 2.1% in the USA compared with 6.0% in Germany (P<0.001). In the United States, the utilization of radiotherapy (RT) and radical prostatectomy (RP) was comparably high, with 42.0% (RT) and 42.8% (RP). In Germany, the major treatment option was RP with 36.7%, followed by RT with 22.1%. During the study period, the use of RP increased in both countries (USA P=0.001, Germany P=0.003), whereas RT numbers declined (USA P=0.003, Germany P=0.002). The share of adjuvant RT (aRT) was similar in both countries (USA 21.7% vs Germany 20.7%). We found distinctive differences in the treatment of locally advanced prostate cancer between the USA and Germany, but similar trends over time. In the last decade, a growing number of patients underwent RP as a possible first step within a multimodal concept.
Freitag, L E; Tyack, P L
1993-04-01
A method for localization and tracking of calling marine mammals was tested under realistic field conditions that include noise, multipath, and arbitrarily located sensors. Experiments were performed in two locations using four and six hydrophones with captive Atlantic bottlenose dolphins (Tursiops truncatus). Acoustic signals from the animals were collected in the field using a digital acoustic data acquisition system. The data were then processed off-line to determine relative hydrophone positions and the animal locations. Accurate hydrophone position estimates are achieved by pinging sequentially from each hydrophone to all the others. A two-step least-squares algorithm is then used to determine sensor locations from the calibration data. Animal locations are determined by estimating the time differences of arrival of the dolphin signals at the different sensors. The peak of a matched filter output or the first cycle of the observed waveform is used to determine arrival time of an echolocation click. Cross correlation between hydrophones is used to determine inter-sensor time delays of whistles. Calculation of source location using the time difference of arrival measurements is done using a least-squares solution to minimize error. These preliminary experimental results based on a small set of data show that realistic trajectories for moving animals may be generated from consecutive location estimates.
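The second least-squares step described above — solving for a source position from time-differences of arrival — can be sketched with a small Gauss-Newton solver. The 2D sensor layout, sound speed, and numerical-Jacobian details below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def tdoa_residuals(pos, sensors, tdoa, c=1500.0):
    """Difference between predicted and measured time-differences of
    arrival (all relative to sensor 0). c: sound speed in water, m/s."""
    d = np.linalg.norm(sensors - pos, axis=1)
    predicted = (d[1:] - d[0]) / c
    return predicted - tdoa

def locate(sensors, tdoa, guess, c=1500.0, iters=50):
    """Gauss-Newton least-squares solve for the source position."""
    pos = np.asarray(guess, dtype=float)
    for _ in range(iters):
        r = tdoa_residuals(pos, sensors, tdoa, c)
        # numerical Jacobian of the residuals w.r.t. position
        J = np.zeros((len(r), len(pos)))
        eps = 1e-4
        for j in range(len(pos)):
            dp = np.zeros(len(pos))
            dp[j] = eps
            J[:, j] = (tdoa_residuals(pos + dp, sensors, tdoa, c) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pos = pos + step
        if np.linalg.norm(step) < 1e-9:
            break
    return pos
```

With four sensors and a source inside the array, the TDOA hyperbolas intersect cleanly and the iteration recovers the source position; in the field, measurement noise and sensor-position uncertainty dominate the error budget.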
Monte Carlo grain growth modeling with local temperature gradients
NASA Astrophysics Data System (ADS)
Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.
2017-09-01
This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to account for the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally handle time-varying temperature fields as well. We verify the model and compare its predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as in the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involve time-dependent local temperature gradients.
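The idea of letting a spatially varying temperature act through the site-selection step can be sketched with a minimal 2D Potts-model grain-growth loop. Weighting attempts by local temperature is a simplified stand-in for the paper's derived SSP function, and the acceptance rule here is the basic zero-temperature one:

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_energy(spins):
    """Count unlike nearest-neighbor pairs (periodic boundaries)."""
    return int((spins != np.roll(spins, 1, axis=0)).sum()
               + (spins != np.roll(spins, 1, axis=1)).sum())

def grain_growth(spins, steps, T_field):
    """2D Potts-model sketch: the temperature field enters only through
    the site-selection probability (hotter sites are attempted more often)."""
    L = spins.shape[0]
    p = (T_field / T_field.sum()).ravel()  # selection probability per site
    for s in rng.choice(L * L, size=steps, p=p):
        i, j = divmod(s, L)
        nbrs = [spins[(i - 1) % L, j], spins[(i + 1) % L, j],
                spins[i, (j - 1) % L], spins[i, (j + 1) % L]]
        new = nbrs[rng.integers(4)]  # try adopting a neighbor's orientation
        dE = sum(n != new for n in nbrs) - sum(n != spins[i, j] for n in nbrs)
        if dE <= 0:  # accept only moves that do not raise boundary energy
            spins[i, j] = new
    return spins
```

Passing a non-uniform `T_field` concentrates reorientation attempts in hot regions, so grains there coarsen faster than in cold regions, while a uniform field reduces to the standard isothermal model.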
NASA Astrophysics Data System (ADS)
Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.
2014-12-01
A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc.) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short-duration (~0.1 ms) quasi-periodic bursts of electric field in the high-time-resolution electric field waveform, have been called Time Domain Structures (TDS). They can interact quite effectively with radiation belt electrons. Due to trapping into these non-linear structures, electrons are accelerated up to ~10 keV and their pitch angles are changed, especially at low energies (~1 keV). Large-amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS, and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several-keV electrons that can then be accelerated to MeV energies by coherent, large-amplitude, upper-band whistler waves in this two-step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.
Critical Song Features for Auditory Pattern Recognition in Crickets
Meckenhäuser, Gundula; Hennig, R. Matthias; Nawrot, Martin P.
2013-01-01
Many different invertebrate and vertebrate species use acoustic communication for pair formation. In the cricket Gryllus bimaculatus, females recognize their species-specific calling song and localize singing males by positive phonotaxis. The song pattern of males has a clear structure consisting of brief and regular pulses that are grouped into repetitive chirps. Information is thus present on a short and a long time scale. Here, we ask which structural features of the song critically determine the phonotactic performance. To this end we employed artificial neural networks to analyze a large body of behavioral data that measured females’ phonotactic behavior under systematic variation of artificially generated song patterns. In a first step we used four non-redundant descriptive temporal features to predict the female response. The model prediction showed a high correlation with the experimental results. We used this behavioral model to explore the integration of the two different time scales. Our result suggested that only an attractive pulse structure in combination with an attractive chirp structure reliably induced phonotactic behavior to signals. In a further step we investigated all feature sets, each one consisting of a different combination of eight proposed temporal features. We identified feature sets of size two, three, and four that achieve highest prediction power by using the pulse period from the short time scale plus additional information from the long time scale. PMID:23437054
[Community health in primary health care teams: a management objective].
Nebot Adell, Carme; Pasarin Rua, Maribel; Canela Soler, Jaume; Sala Alvarez, Clara; Escosa Farga, Alex
2016-12-01
To describe the process of development of community health in a territory where the Primary Health Care board decided to include it in its roadmap as a strategic line. Evaluative research using qualitative techniques, including SWOT analysis on community health, carried out as a two-step study. Primary care teams (PCT) of the Catalan Health Institute in Barcelona city. The 24 PCT belonging to the Muntanya-Dreta Primary Care Service in Barcelona city, with 904 professionals serving 557,430 inhabitants. Step 1: Setting up a core group consisting of local PCT professionals; collecting the community projects across the territory; SWOT analysis. Step 2: From the needs identified in the previous phase, a plan was developed, including a set of training activities in community health: basic, advanced, and a workshop to exchange experiences between the PCTs. A total of 80 team professionals received specific training in the 4 workshops held, one of them at an advanced level. Two workshops were held to exchange experiences with 165 representatives from the local teams, with 22 PCTs presenting their practices. In 2013, 6 of the 24 PCTs had had a community diagnosis performed. Community health has achieved a good level of development in some areas, but this is not the general situation in the health care system. Its progression depends on the available management support, the local community dynamics, and the scope of Primary Health Care. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
Computer aided detection of brain micro-bleeds in traumatic brain injury
NASA Astrophysics Data System (ADS)
van den Heuvel, T. L. A.; Ghafoorian, M.; van der Eerden, A. W.; Goraj, B. M.; Andriessen, T. M. J. C.; ter Haar Romeny, B. M.; Platel, B.
2015-03-01
Brain micro-bleeds (BMBs) are used as surrogate markers for detecting diffuse axonal injury in traumatic brain injury (TBI) patients. The location and number of BMBs have been shown to influence the long-term outcome of TBI. To further study the importance of BMBs for prognosis, accurate localization and quantification are required. The task of annotating BMBs is laborious, complex and prone to error, resulting in a high inter- and intra-reader variability. In this paper we propose a computer-aided detection (CAD) system to automatically detect BMBs in MRI scans of moderate to severe neuro-trauma patients. Our method consists of four steps. Step one: preprocessing of the data. Both susceptibility (SWI) and T1 weighted MRI scans are used. The images are co-registered, a brain-mask is generated, the bias field is corrected, and the image intensities are normalized. Step two: initial candidates for BMBs are selected as local minima in the processed SWI scans. Step three: feature extraction. BMBs appear as round or ovoid signal hypo-intensities on SWI. Twelve features are computed to capture these properties of a BMB. Step four: Classification. To identify BMBs from the set of local minima using their features, different classifiers are trained on a database of 33 expert annotated scans and 18 healthy subjects with no BMBs. Our system uses a leave-one-out strategy to analyze its performance. With a sensitivity of 90% and 1.3 false positives per BMB, our CAD system shows superior results compared to state-of-the-art BMB detection algorithms (developed for non-trauma patients).
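The candidate-selection step (local minima in the processed SWI image) can be sketched on synthetic data. The image, blob model, filter size, and intensity threshold below are illustrative assumptions; the real system then computes twelve features per candidate and applies a trained classifier.

```python
import numpy as np
from scipy.ndimage import minimum_filter

rng = np.random.default_rng(1)
img = rng.normal(100.0, 1.0, (64, 64))   # synthetic "SWI" background
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx in [(20, 20), (45, 40)]:      # plant two round hypo-intensities
    img -= 30.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)

# Candidates = pixels that are the minimum of their 5x5 neighbourhood
# AND markedly darker than the background (threshold is illustrative).
is_min = img == minimum_filter(img, size=5)
cand = np.argwhere(is_min & (img < 85.0))
```

Each surviving candidate would then be described by shape and intensity features (round/ovoid hypo-intensity) before classification.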
Pumping Competition into Political Communication.
ERIC Educational Resources Information Center
Clarke, Peter; Evans, Susan H.
1986-01-01
States that the system of local political journalism has become fragile and that steps should be taken to remediate the situation, such as developing tax incentives that invite television news, distributed in nonbroadcast form, to enter the arena of local political information. (DF)
Knee implant imaging at 3 Tesla using high-bandwidth radiofrequency pulses.
Bachschmidt, Theresa J; Sutter, Reto; Jakob, Peter M; Pfirrmann, Christian W A; Nittka, Mathias
2015-06-01
To investigate the impact of high-bandwidth radiofrequency (RF) pulses used in turbo spin echo (TSE) sequences, or combined with slice encoding for metal artifact correction (SEMAC), on artifact reduction at 3 Tesla in the knee in the presence of metal. Local transmit/receive coils feature an increased maximum B1 amplitude and reduced SAR exposure, and thus enable the application of high-bandwidth RF pulses. Susceptibility-induced through-plane distortion scales inversely with the RF bandwidth, while the view angle, and hence blurring, increases for higher RF bandwidths when SEMAC is used. These effects were assessed for a phantom containing a total knee arthroplasty. TSE and SEMAC sequences with conventional and high RF bandwidths and different contrasts were tested on eight patients with different types of implants. To realize scan times of 7 to 9 min, SEMAC was always applied with eight slice-encoding steps; distortion was rated by two radiologists. A local transmit/receive knee coil enables the use of an RF bandwidth of 4 kHz compared with 850 Hz in conventional sequences. Phantom scans confirm the relation between RF bandwidth and through-plane distortion, which can be reduced by up to 79%, and demonstrate the increased blurring for high-bandwidth RF pulses. On average, artifacts in this RF mode are rated as hardly visible for patients with joint arthroplasties, when eight SEMAC slice-encoding steps are applied, and for patients with titanium fixtures, when TSE is used. The application of high-bandwidth RF pulses by local transmit coils substantially reduces through-plane distortion artifacts at 3 Tesla. © 2014 Wiley Periodicals, Inc.
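The reported ~79% reduction follows directly from the bandwidth ratio under a simplified displacement model; the off-resonance frequency and slice thickness below are illustrative numbers, not values from the study.

```python
# Simplified model: a spin at off-resonance frequency df is displaced
# through-plane by dz = (df / bw_rf) * slice_thickness.
def through_plane_shift_mm(df_hz, bw_rf_hz, slice_mm):
    return df_hz / bw_rf_hz * slice_mm

conv = through_plane_shift_mm(1000.0, 850.0, 3.0)   # conventional pulse
high = through_plane_shift_mm(1000.0, 4000.0, 3.0)  # high-bandwidth pulse

reduction = 1.0 - high / conv  # = 1 - 850/4000, i.e. about 79%
```

The maximal distortion reduction is thus fixed by the pulse-bandwidth ratio alone, independent of the (assumed) off-resonance and slice thickness.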
Quantum Drude friction for time-dependent density functional theory
NASA Astrophysics Data System (ADS)
Neuhauser, Daniel; Lopata, Kenneth
2008-10-01
Friction is a desired property in quantum dynamics as it allows for localization, prevents backscattering, and is essential in the description of multistage transfer. Practical approaches for friction generally involve memory functionals or interactions with system baths. Here, we start by requiring that a friction term will always reduce the energy of the system; we show that this is automatically true once the Hamiltonian is augmented by a term of the form ∫ a(q; n0) [∂j(q,t)/∂t] · J(q) dq, which includes the current operator times the derivative of its expectation value with respect to time, times a local coefficient; the local coefficient will be fitted to experiment, to more sophisticated theories of electron-electron interaction and interaction with nuclear vibrations and the nuclear background, or, alternately, will be artificially constructed to prevent backscattering of energy. We relate this term to previous results and to optimal control studies, and generalize it to further operators, i.e., any operator of the form ∫ a(q; n0) [∂c(q,t)/∂t] · C(q) dq (or a discrete sum) will yield friction. Simulations of a small jellium cluster, both in the linear and highly nonlinear excitation regimes, demonstrate that the friction always reduces energy. The energy damping is essentially double exponential; the long-time decay is almost an order of magnitude slower than the rapid short-time decay. The friction term stabilizes the propagation (a split-operator propagator here), thereby increasing the time step that can be used for convergence, i.e., reducing the overall computational cost. The local friction also allows the simulation of a metal cluster in a uniform jellium, as the energy loss in the excitation due to the underlying corrugation is accounted for by the friction. We also relate the friction to models of coupling to damped harmonic oscillators, which can be used for a more sophisticated description of the coupling, and to memory functionals.
Our results open the way to a very simple finite-grid description of scattering and multistage conductance using time-dependent density functional theory away from the linear regime, just as absorbing potentials and self-energies are useful for noninteracting systems and leads.
NASA Astrophysics Data System (ADS)
Bengulescu, Marc; Blanc, Philippe; Wald, Lucien
2016-04-01
An analysis of the variability of the surface solar irradiance (SSI) at different local time-scales is presented in this study. Since geophysical signals, such as long-term measurements of the SSI, are often produced by the non-linear interaction of deterministic physical processes that may also be under the influence of non-stationary external forcings, the Hilbert-Huang transform (HHT), an adaptive, noise-assisted, data-driven technique, is employed to extract locally (in time and in space) the embedded intrinsic scales at which a signal oscillates. The transform consists of two distinct steps. First, by means of the Empirical Mode Decomposition (EMD), the time-series is "de-constructed" into a finite, often small, number of zero-mean components that have distinct temporal scales of variability, termed hereinafter the Intrinsic Mode Functions (IMFs). The signal model of the components is an amplitude-modulation/frequency-modulation (AM-FM) one, and can also be thought of as an extension of a Fourier series having both time-varying amplitude and frequency. Following the decomposition, Hilbert spectral analysis is then employed on the IMFs, yielding a time-frequency-energy representation that portrays changes in the spectral contents of the original data with respect to time. As measurements of surface solar irradiance may be contaminated by the manifestation of different types of stochastic processes (i.e. noise), the identification of real, physical processes from this background of random fluctuations is of interest. To this end, an adaptive background noise null hypothesis is assumed, based on the robust statistical properties of the EMD when applied to time-series of different classes of noise (e.g. white, red or fractional Gaussian).
Since the algorithm acts as an efficient constant-Q dyadic, "wavelet-like" filter bank, the different noise inputs are decomposed into components having the same spectral shape, but translated to the next lower octave in the spectral domain. Thus, when the sampling step is increased, the spectral shape of the IMFs cannot remain at its original position, due to the new lower Nyquist frequency, and is instead pushed toward the lower scaled frequency. Based on these features, the identification of potential signals within the data should become possible without any prior knowledge of the background noises. When applying the above outlined procedure to decennial time-series of surface solar irradiance, only the component that has an annual time-scale of variability is shown to have statistical properties that diverge from those of noise. Nevertheless, the noise-like components are not completely devoid of information, as it is found that their AM components have a non-null rank correlation coefficient with the annual mode, i.e. the background noise intensity seems to be modulated by the seasonal cycle. The findings have possible implications for the modelling and forecast of the surface solar irradiance, by discriminating its deterministic from its quasi-stochastic constituents, at distinct local time-scales.
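A bare-bones sketch of the EMD sifting step on a two-tone signal: envelopes through the extrema are averaged and subtracted until the fastest oscillation is isolated as the first IMF. Stopping criteria, refined boundary handling, the noise-assisted variants, and the Hilbert spectral step are all omitted, and the test signal is invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=6):
    """Extract one IMF by iterative sifting (bare-bones EMD sketch)."""
    h = x.copy()
    n = len(h)
    for _ in range(n_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 2 or len(mn) < 2:
            break
        # anchor the envelopes at the record ends to limit extrapolation
        upper = CubicSpline(t[np.r_[0, mx, n - 1]], h[np.r_[0, mx, n - 1]])(t)
        lower = CubicSpline(t[np.r_[0, mn, n - 1]], h[np.r_[0, mn, n - 1]])(t)
        h = h - 0.5 * (upper + lower)   # remove the local mean envelope
    return h

t = np.linspace(0.0, 10.0, 2000)
fast = np.sin(2.0 * np.pi * 5.0 * t)   # short-scale oscillation
slow = np.sin(2.0 * np.pi * 0.5 * t)   # long-scale oscillation
imf1 = sift(fast + slow, t)            # first IMF ~ the fastest mode
residue = fast + slow - imf1           # carries the slow mode
```

Away from the record edges the first IMF tracks the fast component and the residue tracks the slow one, which is the dyadic "filter bank" behaviour described above.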
Simulation of stochastic diffusion via first exit times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lötstedt, Per, E-mail: perl@it.uu.se; Meinecke, Lina, E-mail: lina.meinecke@it.uu.se
2015-11-01
In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions.
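A minimal jump-process simulation in the spirit of the mesoscopic model: molecules hop between 1-D subvolumes of size h with a positive rate per neighbour. The rate λ = D/h² used below is the standard Cartesian-mesh coefficient; the paper's first-exit-time construction matters precisely on poor-quality unstructured meshes, where naive discretization can make such coefficients negative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, h, T = 1.0, 0.1, 0.5    # diffusion coefficient, subvolume size, end time
lam = D / h ** 2           # jump rate to EACH neighbour (Cartesian mesh)
n_mol = 2000

final = np.zeros(n_mol)
for i in range(n_mol):
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / (2.0 * lam))  # total exit rate = 2*lam
        if t > T:
            break
        x += h if rng.random() < 0.5 else -h     # jump left or right
    final[i] = x

msd = np.mean(final ** 2)  # should approach 2*D*T for 1-D diffusion
```

The mean-square displacement recovers the macroscopic diffusion law, which is the consistency requirement any choice of (positive) jump coefficients must satisfy.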
NASA Astrophysics Data System (ADS)
Luo, Win-Jet; Yue, Cheng-Feng
2004-12-01
This paper investigates two-dimensional, time-dependent electroosmotic flows driven by an AC electric field via patchwise surface heterogeneities distributed along the microchannel walls. The time-dependent flow fields through the microchannel are simulated for various patchwise heterogeneous surface patterns using the backwards-Euler time stepping numerical method. Different heterogeneous surface patterns are found to create significantly different electrokinetic transport phenomena. It is shown that the presence of oppositely charged surface heterogeneities on the microchannel walls results in the formation of localized flow circulations within the bulk flow. These circulation regions grow and decay periodically in accordance with the applied periodic AC electric field intensity. The circulations provide an effective means of enhancing species mixing in the microchannel. A suitable design of the patchwise heterogeneous surface pattern permits the mixing channel length and the retention time required to attain a homogeneous solution to be reduced significantly.
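The backwards-Euler time stepping used above can be illustrated on a 1-D diffusion toy problem (the paper, of course, solves the full 2-D electroosmotic flow equations): the implicit step remains stable even when dt·D/dx² far exceeds the explicit-Euler limit of 0.5.

```python
import numpy as np

# Backwards (implicit) Euler for u_t = D*u_xx on [0,1], u = 0 at the walls.
nx, D, dt, steps = 51, 1.0, 0.01, 50
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)        # initial profile

r = dt * D / dx ** 2         # = 25 here; explicit Euler would blow up (r > 0.5)
A = np.eye(nx)               # assemble (I - dt*D*L), L = 1-D Laplacian stencil
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r

for _ in range(steps):
    u = np.linalg.solve(A, u)   # one backwards-Euler step
```

The solution decays smoothly toward zero with no spurious oscillation, which is why implicit stepping is attractive for the periodically forced AC fields simulated above.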
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by a trial-and-error procedure. Several past works indicate that model parameters and structure can depend on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to the modelling time step.
The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner.
An, Jhonghyun; Choi, Baehoon; Sim, Kwee-Bo; Kim, Euntai
2016-07-20
There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation.
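The dynamic binary Bayes filter of step (1) is typically implemented with log-odds cell updates; the grid size and the inverse sensor-model probabilities below are illustrative assumptions.

```python
import numpy as np

L_OCC = np.log(0.7 / 0.3)    # log-odds increment for an "occupied" hit
L_FREE = np.log(0.3 / 0.7)   # log-odds increment for a "free" observation

grid = np.zeros((10, 10))    # log-odds grid; 0 <=> p = 0.5 (unknown)

def update(grid, cell, hit):
    """Fuse one laser observation of `cell` (hit=True: occupied)."""
    grid[cell] += L_OCC if hit else L_FREE

def prob(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-grid))

for _ in range(5):               # cell (2, 3) repeatedly observed occupied
    update(grid, (2, 3), True)
update(grid, (4, 4), False)      # cell (4, 4) observed free once
```

Repeated consistent hits drive a cell's probability toward 1 while unobserved cells stay at 0.5; the resulting static map is what feeds the intersection classifier in step (2).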
NASA Astrophysics Data System (ADS)
Harmon, Michael; Gamba, Irene M.; Ren, Kui
2016-12-01
This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
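The implicit-explicit (IMEX) idea can be sketched on a scalar reaction-diffusion toy problem: stiff diffusion is stepped implicitly while the nonlinear reaction is explicit. The 1-D setting, logistic reaction, and coefficients below are illustrative, far simpler than the paper's coupled reaction-drift-diffusion-Poisson system with DG fluxes.

```python
import numpy as np

# IMEX Euler for u_t = D*u_xx + u*(1 - u) with zero-flux boundaries.
nx, D, dt, steps = 64, 1.0, 0.01, 100
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = 0.5 + 0.4 * np.sin(np.pi * x)

f = lambda v: v * (1.0 - v)      # nonstiff reaction, treated explicitly

r = dt * D / dx ** 2             # ~40 here: far beyond the explicit limit
A = np.eye(nx)                   # implicit diffusion operator (I - dt*D*L)
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
A[0, 0], A[0, 1] = 1.0 + 2.0 * r, -2.0 * r        # mirror ghost cells
A[-1, -1], A[-1, -2] = 1.0 + 2.0 * r, -2.0 * r    # (zero-flux Neumann)

for _ in range(steps):
    u = np.linalg.solve(A, u + dt * f(u))  # reaction explicit, diffusion implicit
```

Treating only the stiff transport terms implicitly keeps the per-step solve linear while avoiding the severe explicit time-step restriction, which is the rationale for the implicit-explicit stepping above.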
Suprayitno, Nano; Narakusumo, Raden Pramesa; von Rintelen, Thomas; Hendrich, Lars; Balke, Michael
2017-01-01
Taxonomy and biogeography can benefit from citizen scientists. The use of social networking and open access cooperative publishing can easily connect naturalists even in more remote areas with in-country scientists and institutions, as well as those abroad. This enables taxonomic efforts without frontiers and at the same time adequate benefit sharing measures. We present new distribution and habitat data for diving beetles of Bali island, Indonesia, as a proof of concept. The species Hydaticus luczonicus Aubé, 1838 and Eretes griseus (Fabricius, 1781) are reported from Bali for the first time. The total number of Dytiscidae species known from Bali is now 34.
Coherence properties of nanofiber-trapped cesium atoms.
Reitz, D; Sayrin, C; Mitsch, R; Schneeweiss, P; Rauschenbeutel, A
2013-06-14
We experimentally study the ground state coherence properties of cesium atoms in a nanofiber-based two-color dipole trap, localized ~200 nm away from the fiber surface. Using microwave radiation to coherently drive the clock transition, we record Ramsey fringes as well as spin echo signals and infer a reversible dephasing time of T2* = 0.6 ms and an irreversible dephasing time of T2′ = 3.7 ms. By modeling the signals, we find that, for our experimental parameters, T2* and T2′ are limited by the finite initial temperature of the atomic ensemble and the heating rate, respectively. Our results represent a fundamental step towards establishing nanofiber-based traps for cold atoms as a building block in an optical fiber quantum network.
Low Earth Orbital Mission Aboard the Space Test Experiments Platform (STEP-3)
NASA Technical Reports Server (NTRS)
Brinza, David E.
1992-01-01
A discussion of the Space Active Modular Materials Experiments (SAMMES) is presented in vugraph form. The discussion is divided into three sections: (1) a description of SAMMES; (2) a SAMMES/STEP-3 mission overview; and (3) SAMMES follow-on efforts. The SAMMES/STEP-3 mission objectives are as follows: assess LEO space environmental effects on SDIO materials; quantify orbital and local environments; and demonstrate the modular experiment concept.
Practical Steps for Using Interdisciplinary Educational Research to Enhance Cultural Awareness
ERIC Educational Resources Information Center
CohenMiller, A. S.; Faucher, Carole; Hernández-Torrano, Daniel; Brown Hajdukova, Eva
2017-01-01
This article adds to the dialogue on multidisciplinary and interdisciplinary research, providing definitions and practical steps for using interdisciplinary educational research to enhance cultural awareness. Informed by a research study conducted by seven primary researchers situated in the U.K. and Kazakhstan, along with local partners, we…
Improving Program Performance through Management Information. A Workbook.
ERIC Educational Resources Information Center
Bienia, Nancy
Designed specifically for state and local managers and supervisors who plan, direct, and operate child support enforcement programs, this workbook provides a four-part, step-by-step process for identifying needed information and methods of using the information to operate an effective program. The process consists of: (1) determining what…
This Second Edition of the Compendium has been prepared to provide regional, state and local environmental regulatory agencies with step-by-step sampling and analysis procedures for the determination of selected toxic organic pollutants in ambient air. It is designed to assist t...
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching the maximum stress at an initial stage of shearing, and then a stress decay towards a steady state is observed as the shearing time increases in both start-up shear flow fields. The shear stress drops suddenly immediately after the imposed shear rate is stopped, and then slowly decays during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state as the shearing time increases in each step shear flow region. The time needed to reach the maximum stress value is shortened as the step-increased shear rate becomes higher. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth towards an equilibrium state as the shearing time increases in each step shear flow region. The time needed to reach the minimum stress value is lengthened as the step-decreased shear rate becomes lower.
Audiovisual integration increases the intentional step synchronization of side-by-side walkers.
Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A
2017-12-01
When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, as quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal cue integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
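The MLE (minimum-variance) combination of two independent Gaussian cue estimates has a simple closed form: weights inversely proportional to the cue variances. The step-timing numbers below are invented for illustration.

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Optimally fuse an auditory and a visual estimate of the same quantity."""
    w_a = var_v / (var_a + var_v)          # more reliable cue -> larger weight
    w_v = var_a / (var_a + var_v)
    est = w_a * est_a + w_v * est_v
    var = var_a * var_v / (var_a + var_v)  # never worse than the best cue
    return est, var

# hypothetical step-cycle estimates (ms) from the partner's sounds and motion
est, var = mle_combine(est_a=480.0, var_a=100.0, est_v=520.0, var_v=400.0)
```

The fused variance (80 here) falls below both single-cue variances; this variance reduction for audiovisual relative to unimodal cues is the signature of optimal integration the experiments test for.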
Assessing the performance of regional landslide early warning models: the EDuMaP method
NASA Astrophysics Data System (ADS)
Calvello, M.; Piciullo, L.
2015-10-01
The paper proposes the evaluation of the technical performance of a regional landslide early warning system by means of an original approach, called EDuMaP method, comprising three successive steps: identification and analysis of the Events (E), i.e. landslide events and warning events derived from available landslides and warnings databases; definition and computation of a Duration Matrix (DuMa), whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model Performance (P) by means of performance criteria and indicators applied to the duration matrix. During the first step, the analyst takes into account the features of the warning model by means of ten input parameters, which are used to identify and classify landslide and warning events according to their spatial and temporal characteristics. In the second step, the analyst computes a time-based duration matrix having a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The proposed method is based on a framework clearly distinguishing between local and regional landslide early warning systems as well as among correlation laws, warning models and warning systems. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslides and warnings data from the municipal early warning system operating in Rio de Janeiro (Brazil).
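Step two's duration matrix is, at heart, a time-in-class tally over paired warning and landslide records. The class definitions and the ten-hour record below are invented; the actual method derives the event classes from ten input parameters and sizes the matrix accordingly.

```python
import numpy as np

# rows = warning-event classes, columns = landslide-event classes
W_CLASSES = ["no warning", "moderate", "high"]   # assumed warning levels
L_CLASSES = ["none", "minor", "major"]           # assumed landslide classes

# hourly class indices over a toy 10-hour window
warning_level = [0, 0, 1, 1, 2, 2, 2, 1, 0, 0]
landslide_level = [0, 0, 0, 1, 1, 2, 1, 0, 0, 0]

duma = np.zeros((len(W_CLASSES), len(L_CLASSES)))
for w, l in zip(warning_level, landslide_level):
    duma[w, l] += 1.0    # hours spent in each (warning, landslide) pairing

# performance indicators are then functions of this matrix, e.g. the
# time with high warnings issued while major landslides occurred: duma[2, 2]
```

Off-diagonal mass in the first column (warnings with no landslides) and first row (landslides with no warning) is what the performance criteria of the third step would penalize.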
A Meta-Analysis of Local Climate Change Adaptation Actions ...
Local governments are beginning to take steps to address the consequences of climate change, such as sea level rise and heat events. However, we do not have a clear understanding of what local governments are doing -- the extent to which they expect climate change to affect their community, the types of actions they have in place to address climate change, and the resources at their disposal for implementation. Several studies have been conducted by academics, non-governmental organizations, and public agencies to assess the status of local climate change adaptation. This project collates the findings from dozens of such studies to conduct a meta-analysis of local climate change adaptation actions. The studies will be characterized along several dimensions, including (a) methods used, (b) timing and geographic scope, (c) topics covered, (d) types of adaptation actions identified, (e) implementation status, and (f) public engagement and environmental justice dimensions considered. The poster presents the project's rationale and approach and some illustrative findings from early analyses. [Note: The document being reviewed is an abstract in which a poster is being proposed. The poster will enter clearance if the abstract is accepted] The purpose of this poster is to present the research framework and approaches I am developing for my ORISE postdoctoral project, and to get feedback on early analyses.
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires a long time to reach the steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.
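Energy-based time-step adaptivity of the kind described above can be illustrated with a simple rule that shrinks the step while the energy changes rapidly and saturates at a maximum step near the steady state. The specific formula and constants below are assumptions for illustration, not necessarily those used by the authors.

```python
import math

# Illustrative adaptive rule: dt = max(dt_min, dt_max / sqrt(1 + a*|E'|^2)).
# Small |dE/dt| (near steady state) gives dt -> dt_max; large |dE/dt|
# (fast transient) gives small dt. Constants are illustrative assumptions.

def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=100.0):
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt**2))

dt_fast = adaptive_dt(dE_dt=-10.0)  # energy dropping quickly -> small step
dt_slow = adaptive_dt(dE_dt=0.0)    # energy flat -> step saturates at dt_max
```

With an unconditionally energy stable scheme, enlarging the step this way trades only accuracy, not stability, which is what makes the strategy attractive for long-time PFC runs.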
Localizing Ground Penetrating RADAR: A Step Towards Robust Autonomous Ground Vehicle Localization
2016-07-14
localization designed to complement existing approaches with a low sensitivity to failure modes of LIDAR, camera, and GPS/INS sensors due to its low...the detailed design and results from highway testing, which uses a simple heuristic for fusing LGPR estimates with a GPS/INS system. Cross-track... designed to enable a priori map-based localization. LGPR offers complementary capabilities to traditional optics-based approaches to map-based
NASA Astrophysics Data System (ADS)
Piatkowski, Marian; Müthing, Steffen; Bastian, Peter
2018-03-01
In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H(div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.
Impact of the 1997-1998 El-Nino on Regional Hydrology
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1998-01-01
The 1997-1998 El-Nino brought with it a range of severe local-regional hydrological phenomena. Record high temperatures and extremely dry soil conditions in Texas are an example of this regional effect. The El-Nino and La-Nina change the continental weather patterns considerably. However, connections between continental weather anomalies and regional or local anomalies have not been established to a high degree of confidence. There are several unique features of the recent El-Nino and La-Nina. First, because the present El-Nino was recognized well in advance, there have been several coupled model studies on global and regional scales. Secondly, there is near real-time monitoring of the situation using data from satellite sensors, namely SeaWIFS, TOVS, AVHRR and GOES. Both observations and modeling characterize the large scale features of this El-Nino fairly well. However, the connection to local and regional hydrological phenomena still needs to be made. This paper will use satellite observations and analysis data to establish a relation between local hydrology and large scale weather patterns. This will be the first step in using satellite data to perform regional hydrological simulations of surface temperature and soil moisture.
Consistent and robust determination of border ownership based on asymmetric surrounding contrast.
Sakai, Ko; Nishimura, Haruka; Shimizu, Ryohei; Kondo, Keiichi
2012-09-01
Determination of the figure region in an image is a fundamental step toward surface construction, shape coding, and object representation. Localized, asymmetric surround modulation, reported neurophysiologically in early-to-intermediate-level visual areas, has been proposed as a mechanism for figure-ground segregation. We investigated, computationally, whether such surround modulation is capable of yielding consistent and robust determination of figure side for various stimuli. Our surround modulation model showed a surprisingly high consistency among pseudorandom block stimuli, with greater consistency for stimuli that yielded higher accuracy of, and shorter reaction times in, human perception. Our analyses revealed that the localized, asymmetric organization of surrounds is crucial in the detection of the contrast imbalance that leads to the determination of the direction of figure with respect to the border. The model also exhibited robustness for gray-scaled natural images, with a mean correct rate of 67%, which was similar to that of figure-side determination in human perception through a small window and of machine-vision algorithms based on local processing. These results suggest a crucial role of surround modulation in the local processing of figure-ground segregation. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, A. D.; Jones, B. J. P.; Nygren, D. R.
A new method to tag the barium daughter in the double beta decay of $^{136}$Xe is reported. Using the technique of single molecule fluorescent imaging (SMFI), individual barium dication (Ba$^{++}$) resolution at a transparent scanning surface has been demonstrated. A single-step photo-bleach confirms the single ion interpretation. Individual ions are localized with super-resolution (~2 nm), and detected with a statistical significance of 12.9σ over backgrounds. This lays the foundation for a new and potentially background-free neutrinoless double beta decay technology, based on SMFI coupled to high pressure xenon gas time projection chambers.
Efficient computation of PDF-based characteristics from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2008-01-01
We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic, flexible and is able to compute a large set of useful characteristics of the local tissues structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame.
Surface topography and electrical properties in Sr2FeMoO6 films studied at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Angervo, I.; Saloaro, M.; Mäkelä, J.; Lehtiö, J.-P.; Huhtinen, H.; Paturi, P.
2018-03-01
Pulsed laser deposited Sr2FeMoO6 thin films were investigated for the first time with scanning tunneling microscopy and spectroscopy. The results confirm atomic scale layer growth, with step-terrace structure corresponding to a single lattice cell scale. The spectroscopy research reveals a distribution of local electrical properties linked to structural deformation in the initial thin film layers at the film substrate interface. Significant hole structure giving rise to electrically distinctive regions in thinner film also seems to set a thickness limit for the thinnest films to be used in applications.
Highly Extensible Programmed Biosensing Circuits with Fast Memory
2011-12-16
single-cell imaging in microfluidic environment. Yeast strain YTS2ab_1 has constitutive Hog1-eGFP production and thus upon a step function of sorbitol ...expect a sorbitol pulse to cause Hog1-NeGFP to localize to the nucleus, and the resulting Hog1-Hot1 interaction to drive nuclear fluorescence...YTS2ab_3 – W303-A background, hot1D::loxP, hog1D::loxP, HO::Hog1:Hog1-NeGFP_Hot1:Hot1-CeGFP Time = 5 min prior to Sorbitol Pulse (A) Brightfield, 63X Oil
2-stage stochastic Runge-Kutta for stochastic delay differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosli, Norhayati; Jusoh Awang, Rahimah; Bahar, Arifah
2015-05-15
This paper proposes a newly developed one-step derivative-free method, namely the 2-stage stochastic Runge-Kutta (SRK2) method, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta for SDDEs is introduced, and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with the computed solution. Numerical experiments are performed to confirm the validity of the method in simulating the strong solution of SDDEs.
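As a rough illustration of a two-stage, derivative-free stochastic Runge-Kutta step for an SDDE with constant lag, the following Heun-type sketch stores the path so delayed values can be read off. It is a plausible scheme of the same family, written under stated assumptions; it is not the exact SRK2 tableau of the paper.

```python
import math
import random

# Heun-type two-stage step for a scalar SDDE
#   dX = f(X(t), X(t-r)) dt + g(X(t), X(t-r)) dW,  constant lag r.
# The delayed value is read from the stored path; r must be a multiple
# of dt. Illustrative sketch, not the paper's exact SRK2 scheme.

def srk2_path(f, g, history, r, dt, n_steps, seed=0):
    """history: callable giving X(t) for t <= 0."""
    rng = random.Random(seed)
    lag = round(r / dt)
    xs = [history(-(lag - i) * dt) for i in range(lag)] + [history(0.0)]
    for _ in range(n_steps):
        x, x_lag = xs[-1], xs[-1 - lag]
        dW = rng.gauss(0.0, math.sqrt(dt))
        # stage 1: Euler predictor
        k1 = x + f(x, x_lag) * dt + g(x, x_lag) * dW
        # stage 2: trapezoidal (Heun) corrector, derivative-free
        xs.append(x + 0.5 * (f(x, x_lag) + f(k1, x_lag)) * dt
                    + 0.5 * (g(x, x_lag) + g(k1, x_lag)) * dW)
    return xs

# Deterministic check (g = 0): dX/dt = -X(t - r), X(t) = 1 for t <= 0,
# so dX/dt = -1 until t = r and X(0.1) should be 0.9.
path = srk2_path(lambda x, xl: -xl, lambda x, xl: 0.0,
                 history=lambda t: 1.0, r=0.1, dt=0.01, n_steps=10)
```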
NASA Astrophysics Data System (ADS)
Smith, Roger J.
2008-10-01
A novel diagnostic technique for the remote and nonperturbative sensing of the local magnetic field in reactor relevant plasmas is presented. Pulsed polarimetry [Patent No. 12/150,169 (pending)] combines optical scattering with the Faraday effect. The polarimetric light detection and ranging (LIDAR)-like diagnostic has the potential to be a local Bpol diagnostic on ITER and can achieve spatial resolutions of millimeters on high energy density (HED) plasmas using existing lasers. The pulsed polarimetry method is based on nonlocal measurements, and subtle effects are introduced that are not present in either cw polarimetry or Thomson scattering LIDAR. Important features include the capability of simultaneously measuring local Te, ne, and B∥ along the line of sight, a resiliency to refractive effects, a short measurement duration providing near-instantaneous data in time and location for real-time feedback and control of magnetohydrodynamic (MHD) instabilities, and the realization of a widely applicable internal magnetic field diagnostic for the magnetic fusion energy program. The technique improves for higher neB∥ product and higher ne and is well suited for diagnosing the transient plasmas in the HED program. Larger devices such as ITER and DEMO are also better suited to the technique, allowing longer pulse lengths and thereby relaxing key technology constraints, making pulsed polarimetry a valuable asset for next-step devices. The pulsed polarimetry technique is clarified by way of illustration on the ITER tokamak and on plasmas within the magnetized target fusion program, within present technological means.
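The diagnostic rests on the Faraday effect; for orientation, the standard rotation-angle formula θ [rad] ≈ 2.62e-13 λ² ∫ nₑ B∥ dl (SI units) gives the scale of the signal. The plasma parameters below are illustrative, roughly reactor-like, and are not taken from the paper.

```python
# Faraday rotation accumulated along the beam path, for a uniform slab.
# Uses the commonly quoted SI-unit coefficient 2.62e-13 (lambda in m,
# n_e in m^-3, B in T, path in m). Parameters are illustrative.

def faraday_rotation(wavelength_m, n_e_per_m3, b_par_T, path_m):
    """Rotation angle in radians for a uniform plasma slab."""
    return 2.62e-13 * wavelength_m**2 * n_e_per_m3 * b_par_T * path_m

# 10.6 um CO2 laser through 1 m of n_e = 1e20 m^-3 with B_par = 1 T
theta = faraday_rotation(10.6e-6, 1e20, 1.0, 1.0)  # ~2.9 mrad
```

The quadratic wavelength dependence and the nₑB∥ product scaling are what the abstract refers to when it says the technique improves for higher nₑB∥.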
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
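The difference between the SI and FI discretizations is where the temperature-dependent coefficient is evaluated. The toy scalar model below, dT/dt = -c(T)·T with c(T) = T², is an assumption for illustration only: it shows the structural difference between the two updates but does not reproduce the coupled Fokker-Planck dynamics or the oscillations analyzed in the paper.

```python
# SI vs FI backward Euler on the toy problem dT/dt = -c(T)*T, c(T) = T**2.
# Toy model chosen for illustration; not the paper's Fokker-Planck system.

def si_step(T, dt):
    """Backward Euler with c(T) frozen at its beginning-of-step value."""
    return T / (1.0 + dt * T**2)

def fi_step(T, dt, iters=50):
    """Backward Euler with c(T) at the end-of-step value, solved by
    fixed-point iteration on T_new = T / (1 + dt * T_new**2)."""
    T_new = T
    for _ in range(iters):
        T_new = T / (1.0 + dt * T_new**2)
    return T_new

T_si = T_fi = 1.0
dt = 0.1
for _ in range(100):
    T_si = si_step(T_si, dt)
    T_fi = fi_step(T_fi, dt)
# exact solution of dT/dt = -T**3 from T(0) = 1 is T(t) = (1 + 2t)**-0.5
```

On this smooth problem both updates track the exact decay; the paper's point is that on the coupled Compton-scattering system the lagged coefficient of the SI update admits instabilities that the FI and LI variants do not.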
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
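A minimal sketch of the standard multiple time-stepping idea referred to above: slow forces are applied as half-kicks on the outer step, fast forces are sub-cycled with a smaller inner step. This is plain RESPA-style velocity Verlet on an invented two-force toy problem; it is not the resonance-free isokinetic integrators of the paper.

```python
# RESPA-style multiple time-step velocity Verlet: slow force on the outer
# step dt, fast force sub-cycled n times with dt/n. Toy forces below are
# illustrative (stiff spring plus a weak slow spring).

def respa_step(x, v, dt, n, f_fast, f_slow, m=1.0):
    v += 0.5 * dt * f_slow(x) / m          # slow half kick
    h = dt / n
    for _ in range(n):                     # inner velocity Verlet, fast force
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # slow half kick
    return x, v

f_fast = lambda x: -100.0 * x   # stiff spring, omega_fast = 10
f_slow = lambda x: -1.0 * x     # weak slow force

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, 10, f_fast, f_slow)
energy = 0.5 * v * v + 0.5 * 101.0 * x * x  # conserved quantity, 50.5 at t=0
```

The resonance limitation mentioned above appears when the outer step approaches half the fast period (here ≈ 0.31); the chosen dt = 0.05 stays well below it, which is exactly the restriction the isokinetic schemes remove.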
NMR polarization echoes in a nematic liquid crystal
NASA Astrophysics Data System (ADS)
Levstein, Patricia R.; Chattah, Ana K.; Pastawski, Horacio M.; Raya, Jésus; Hirschinger, Jérôme
2004-10-01
We have modified the polarization echo (PE) sequence through the incorporation of Lee-Goldburg cross polarization steps to quench the 1H-1H dipolar dynamics. In this way, the 13C becomes an ideal local probe to inject and detect polarization in the proton system. This improvement made possible the observation of the local polarization P00(t) and polarization echoes in the interphenyl proton of the liquid crystal N-(4-methoxybenzylidene)-4-butylaniline. The decay of P00(t) was well fitted to an exponential law with a characteristic time τC≈310 μs. The hierarchy of the intramolecular dipolar couplings determines a dynamical bottleneck that justifies the use of the Fermi Golden Rule to obtain a spectral density consistent with the structural parameters. The time evolution of P00(t) was reversed by the PE sequence generating echoes at the time expected by the scaling of the dipolar Hamiltonian. This indicates that the reversible 1H-1H dipolar interaction is the main contribution to the local polarization decrease and that the exponential decay for P00(t) does not imply irreversibility. The attenuation of the echoes follows a Gaussian law with a characteristic time τφ≈527 μs. The shape and magnitude of the characteristic time of the PE decay suggest that it is dominated by the unperturbed homonuclear dipolar Hamiltonian. This means that τφ is an intrinsic property of the dipolar coupled network and not of other degrees of freedom. In this case, one cannot unambiguously identify the mechanism that produces the decoherence of the dipolar order. This is because even weak interactions are able to break the fragile multiple coherences originated on the dipolar evolution, hindering its reversal. Other schemes to investigate these underlying mechanisms are proposed.
NASA Astrophysics Data System (ADS)
Guthrey, Pierson Tyler
The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons could accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent to numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities, thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin method (DG). 
One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires. The maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work, we overcome this difficulty by introducing a novel time-stepping strategy: the regionally-implicit discontinuous Galerkin (RIDG) method. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which had previously been shown to be equivalent (for linear constant-coefficient problems) to a predictor-corrector approach, where the prediction is computed by a space-time DG (STDG) method. The corrector is an explicit method that uses the space-time reconstructed solution from the predictor step. In this work, we modify the predictor to include not just local information but also neighboring information. With this modification, stability is greatly enhanced: the polynomial-degree dependence of the maximum time-step is removed, and vastly improved time-steps are obtained in multiple spatial dimensions. Upon the development of the general RIDG method, we apply it to the non-relativistic 1D1V Vlasov-Poisson equations and the relativistic 1D2V Vlasov-Maxwell equations, validating the high-order method on several test cases for each. In the final test case, we demonstrate the ability of the method to simulate the acceleration of electrons to relativistic speeds in a simplified setting.
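The degree dependence discussed above can be made concrete with the commonly quoted explicit RKDG estimate dt ≤ CFL·h/((2p+1)·|λmax|); removing the 1/(2p+1) factor is precisely the gain claimed for RIDG. The estimate and the numbers below are illustrative assumptions, not results from this work.

```python
# Back-of-envelope comparison of a degree-dependent explicit RKDG
# time-step estimate against a degree-independent CFL-type bound.
# Mesh spacing, wave speed and CFL number are illustrative.

def explicit_dg_dt(h, lam_max, p, cfl=1.0):
    """Commonly quoted explicit RKDG estimate: dt <= CFL*h/((2p+1)*|lam|)."""
    return cfl * h / ((2 * p + 1) * lam_max)

def degree_independent_dt(h, lam_max, cfl=1.0):
    """Degree-independent CFL-type bound: dt <= CFL*h/|lam|."""
    return cfl * h / lam_max

h, lam = 0.01, 1.0
dts = [explicit_dg_dt(h, lam, p) for p in range(5)]  # shrinks with p
dt_ridg = degree_independent_dt(h, lam)              # independent of p
```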
Ghanim, Murad; Brumin, Marina; Popovski, Smadar
2009-08-01
A simple, rapid, inexpensive method for the localization of virus transcripts in plant and insect vector tissues is reported here. The method is based on fluorescent in situ hybridization using short DNA oligonucleotides complementary to an RNA segment representing a virus transcript in the infected plant or insect vector. The DNA probe harbors a fluorescent molecule at its 5' or 3' end. The protocol (simple fixation, hybridization, minimal washing and confocal microscopy) provides a highly specific signal. The reliability of the protocol was tested by localizing two phloem-limited plant virus transcripts in infected plants and insect tissues: Tomato yellow leaf curl virus (TYLCV) (Begomovirus: Geminiviridae), exclusively transmitted by the whitefly Bemisia tabaci (Gennadius) in a circulative non-propagative manner, and Potato leafroll virus (Polerovirus: Luteoviridae), similarly transmitted by the aphid Myzus persicae (Sulzer). Transcripts of both viruses were localized specifically to the phloem sieve elements of infected plants, while negative controls showed no signal. TYLCV transcripts were also localized to the digestive tract of B. tabaci, confirming the TYLCV route of transmission. Compared to previous methods for localizing virus transcripts in plant and insect tissues, which include complex steps for in-vitro probe preparation or antibody raising, tissue fixation, block preparation, sectioning and hybridization, the method described here provides very reliable, convincing, background-free results with much less time, effort and cost.
Roder, D; Davy, M; Selva-Nayagam, S; Gowda, R; Paramasivam, S; Adams, J; Keefe, D; Eckert, M; Powell, K; Fusco, K; Buranyi-Trevarton, D; Oehler, M K
2018-01-01
Registry data on invasive cervical cancers (n = 1,274) from four major hospitals (1984-2012) were analysed to determine their value for informing local service delivery in Australia. The methodology comprised disease-specific survival analyses using Kaplan-Meier product-limit estimates and Cox proportional hazards models, and treatment analyses using logistic regression. Five- and 10-year survivals were 72% and 68%, respectively, equating with relative survival estimates for Australia and the USA. The most common treatments were surgery and radiotherapy. Systemic therapies increased in recent years, generally with radiotherapy, but were less common for residents of less accessible areas. Surgery was more common for younger women and early-stage disease, and radiotherapy for older women and regional and more advanced disease. The proportion of glandular cancers increased in step with national trends. Little evidence of variation in risk-adjusted survival was found over time or by Local Health District. The study illustrates the value of local registry data for describing local treatment and outcomes. They show the lower use of systemic therapies among residents of less accessible areas, which warrants further investigation. Risk-adjusted treatment and outcomes did not vary by socio-economic status, suggesting equity in service delivery. These data are important for local evaluation and were not available from other sources. © 2017 John Wiley & Sons Ltd.
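The Kaplan-Meier product-limit estimator named above is simple enough to sketch in pure Python; the event data here are invented for illustration and are not the registry data.

```python
# Kaplan-Meier product-limit estimator: at each distinct event time t,
# S(t) is multiplied by (1 - deaths_at_t / n_at_risk). Censored subjects
# leave the risk set without reducing S. Data are illustrative.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = death, 0 = censored.
    Returns [(t, S(t))] at each distinct death time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            at_t += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= at_t
    return curve

times  = [2, 3, 3, 5, 8, 10, 12]   # years of follow-up, illustrative
events = [1, 1, 0, 1, 0, 1, 0]     # 1 = died, 0 = censored
curve = kaplan_meier(times, events)
```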
Automated selection of brain regions for real-time fMRI brain-computer interfaces
NASA Astrophysics Data System (ADS)
Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio
2017-02-01
Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.
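The similarity-based decoding step can be caricatured as picking the candidate predictor time-course that best correlates with the observed ROI signal. The toy time-courses below are invented for illustration; real usage would build predictors from the letter-spelling protocol convolved with a hemodynamic response.

```python
# Similarity-based decoding sketch: correlate the observed time-course
# with one predicted time-course per candidate and return the best match.
# All vectors below are synthetic toys, not fMRI data.

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def decode(observed, predictors):
    """Return the label whose predicted time-course correlates best."""
    return max(predictors,
               key=lambda label: pearson_r(observed, predictors[label]))

# Toy predictors: boxcar responses with different onsets
predictors = {
    "A": [0, 0, 1, 1, 1, 0, 0, 0],
    "B": [0, 0, 0, 0, 1, 1, 1, 0],
}
observed = [0.1, -0.2, 0.9, 1.1, 0.8, 0.2, -0.1, 0.0]
letter = decode(observed, predictors)
```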
Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit
2010-12-01
If balance is lost, quick step execution can prevent falls. Research has shown that the speed of voluntary stepping can predict future falls in older adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time relation parameters of the involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time relation parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task, and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to step time were measured, including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. For dual- compared to single-task conditions, stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force-time parameters were significantly different in both legs of stroke survivors as compared to healthy controls, with no significant effect of dual- compared with single-task conditions in either group. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions.
This may be the mechanism of delayed execution of a fast step when balance is lost, thus increasing the likelihood of falls in stroke survivors. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Desrochers, Johanne; Vermette, Patrick; Fontaine, Réjean; Bérubé-Lauzière, Yves
2008-06-01
Fluorescence optical diffuse tomography (fDOT) is of much interest in molecular imaging to retrieve information from fluorescence signals emitted from specifically targeted bioprocesses deep within living tissues. An exciting application of fDOT is in the growing field of tissue engineering, where 3D non-invasive imaging techniques are required to ultimately grow 3D engineered tissues. Via appropriate labelling strategies and fluorescent probes, fDOT has the potential to monitor the culture environment and cell viability non-destructively, directly within the bioreactor environment where tissues are to be grown. Our ultimate objective is to image the formation of blood vessels in bioreactor conditions. Herein, we use a non-contact setup for small animal fDOT imaging designed for 3D light collection around the sample. We previously presented a time-of-flight approach using a numerical constant fraction discrimination technique to assign an early-photon arrival time to every fluorescence time point-spread function collected around the sample. Towards bioreactor in-situ imaging, we have shown the capability of our approach to localize a fluorophore-filled 500 μm capillary immersed coaxially in a cylindrically shaped bioreactor phantom containing an absorbing/scattering medium representative of experiments on real tissue cultures. Here, we go one step further and present results for the 3D localization of thinner indocyanine green labelled capillaries (250 μm and 360 μm inner diameter) immersed in the same phantom conditions and geometry, but with different spatial configurations (10° and 30° capillary inclination).
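A numerical constant-fraction-discrimination step like the one mentioned above can be sketched as finding where the rising edge of the measured time point-spread function first crosses a fixed fraction of its peak, with linear interpolation between samples. The waveform and fraction below are illustrative, not measured data.

```python
# Numerical constant-fraction discriminator sketch: arrival time is where
# the rising edge crosses fraction * peak, linearly interpolated.
# Waveform and threshold fraction are illustrative.

def cfd_time(times, signal, fraction=0.5):
    peak = max(signal)
    thresh = fraction * peak
    for i in range(1, len(signal)):
        if signal[i - 1] < thresh <= signal[i]:
            # linear interpolation between the bracketing samples
            t0, t1 = times[i - 1], times[i]
            s0, s1 = signal[i - 1], signal[i]
            return t0 + (thresh - s0) * (t1 - t0) / (s1 - s0)
    return None

times = [0, 1, 2, 3, 4, 5]                 # sample times, illustrative
signal = [0.0, 0.1, 0.4, 1.0, 0.7, 0.2]    # toy time point-spread function
t_arrival = cfd_time(times, signal, fraction=0.5)
```

Anchoring on a fixed fraction of the peak, rather than a fixed amplitude, makes the assigned arrival time insensitive to overall pulse intensity, which is the usual motivation for constant-fraction discrimination.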
Thermographic and clinical correlation of myofascial trigger points in the masticatory muscles
Haddad, D S; Brioschi, M L; Arita, E S
2012-01-01
Objectives The aim of the study was to identify and correlate myofascial trigger points (MTPs) in the masticatory muscles, using thermography and algometry. Methods 26 female volunteers were recruited. The surface facial area over the masseter and anterior temporalis muscles was divided into 15 subareas on each side (n = 780). This investigation consisted of three steps. The first step involved thermographic facial examination, using lateral views. The second step involved the pressure pain threshold (PPT), marking the MTP pattern areas for referred pain (n = 131) and local pain (n = 282) with a coloured pencil, and a photograph of the lateral face with the head in the same position as the infrared imaging. The last step was the fusion of these two images, using dedicated software (Reporter® 8.5 SP3 Professional Edition and QuickReport® 1.2, FLIR Systems, Wilsonville, OR), and the calculation of the temperature of each point. Results PPT levels measured at the points of referred pain in MTPs (1.28 ± 0.45 kgf) were significantly lower than the points of local pain in MTPs (1.73 ± 0.59 kgf; p < 0.05). Infrared imaging indicated differences between referred and local pain in MTPs of 0.5 °C (p < 0.05). Analysis of the correlation between the PPT and infrared imaging was done using the Spearman non-parametric method, in which the correlations were positive and moderate (0.4 ≤ r < 0.7). The sensitivity and specificity in MTPs were 62.5% and 71.3%, respectively, for referred pain, and 43.6% and 60.6%, respectively, for local pain. Conclusion Infrared imaging measurements can provide a useful, non-invasive and non-ionizing examination for diagnosis of MTPs in masticatory muscles. PMID:23166359
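The two quantitative summaries used above, Spearman rank correlation and sensitivity/specificity, are easy to sketch; the readings and counts below are illustrative and are not the study data.

```python
# Spearman rank correlation (no-ties formula) and sensitivity/specificity
# from confusion-matrix counts. All numbers are illustrative.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

ppt  = [1.1, 1.3, 1.6, 1.8, 2.0]        # kgf, illustrative readings
temp = [33.0, 33.4, 33.2, 34.0, 34.5]   # deg C, illustrative readings
rho = spearman(ppt, temp)
se, sp = sens_spec(tp=25, fn=15, tn=57, fp=23)  # illustrative counts
```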