Sample records for particle localization accuracy

  1. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks

    PubMed Central

    Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-01-01

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noise, make target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-squares support vector regression (LSSVR)-based observation function, in which an iterative regression strategy is used to deal with the distorted data caused by sensing noise, to improve the observation accuracy. At the same time, we integrate the observation into the likelihood function, which effectively updates the weights of the particles. Thus, particle effectiveness is enhanced, avoiding the “particle degeneracy” problem and improving localization accuracy. To validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed algorithm efficiently improves localization accuracy. In addition, the node-selection strategy effectively selects the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252

  2. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noise, make target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-squares support vector regression (LSSVR)-based observation function, in which an iterative regression strategy is used to deal with the distorted data caused by sensing noise, to improve the observation accuracy. At the same time, we integrate the observation into the likelihood function, which effectively updates the weights of the particles. Thus, particle effectiveness is enhanced, avoiding the "particle degeneracy" problem and improving localization accuracy. To validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed algorithm efficiently improves localization accuracy. In addition, the node-selection strategy effectively selects the subset of sensor nodes to improve the communication efficiency of the sensor network.
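The weight update and degeneracy handling this abstract describes are generic to particle filtering. A minimal 1-D sketch (the Gaussian likelihood here stands in for the paper's LSSVR-based observation function, and all names and parameter values are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_weights(weights, particles, observation, sigma=1.0):
    """Reweight particles by a Gaussian likelihood of the observation."""
    likelihood = np.exp(-0.5 * ((particles - observation) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

def effective_sample_size(weights):
    """N_eff = 1 / sum(w^2); a small value signals particle degeneracy."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights):
    """Resample when N_eff drops, restoring uniform weights."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    indices = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(indices, n - 1)], np.full(n, 1.0 / n)

particles = rng.normal(0.0, 5.0, size=500)   # 1-D position hypotheses
weights = np.full(500, 1.0 / 500)
weights = update_weights(weights, particles, observation=1.2)
if effective_sample_size(weights) < 250:     # common threshold: n / 2
    particles, weights = systematic_resample(particles, weights)
```

Resampling concentrates particles where the likelihood is high, which is exactly how the updated weights "enhance particle effectiveness" in the abstract's terms.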

  3. Wind Tunnel Seeding Systems for Laser Velocimeters

    NASA Technical Reports Server (NTRS)

    Hunter, W. W., Jr. (Compiler); Nichols, C. E., Jr. (Compiler)

    1985-01-01

    The principal motivating factor for convening the Workshop on the Development and Application of Wind Tunnel Seeding Systems for Laser Velocimeters is the necessity to achieve efficient operation and, most importantly, to ensure accurate measurements with velocimeter techniques. The ultimate accuracy of particle-scattering-based laser velocimeter measurements of wind tunnel flow fields depends on the ability of the scattering particle to faithfully track the local flow field in which it is embedded. A complex relationship exists between the particle motion and the local flow field; it depends on particle size, size distribution, shape, and density. To quantify the accuracy of the velocimeter measurements of the flow field, the researcher has to know the scattering particle characteristics. To obtain optimum velocimeter measurements, the researcher strives to control the particle characteristics and to verify those characteristics at the measurement point. Additionally, the researcher attempts to achieve maximum measurement efficiency through control of particle concentration and location in the flow field.

  4. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is also suitable for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
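At a single lag time, the MSD-vs-time analysis reduces to MSD = 2·D·τ per axis. A minimal one-axis sketch on synthetic data (parameters are illustrative; binning such estimates by position is what yields the spatially resolved diffusion maps the authors describe):

```python
import numpy as np

def msd_diffusivity(track, dt, lag=1):
    """Estimate a 1-D diffusion coefficient from the mean squared
    displacement at one lag time: MSD = 2 * D * lag * dt."""
    disp = track[lag:] - track[:-lag]
    return np.mean(disp ** 2) / (2.0 * lag * dt)

rng = np.random.default_rng(1)
D_true, dt, n = 0.5, 0.01, 200_000            # um^2/s, s, number of steps
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=n)
track = np.cumsum(steps)                      # free 1-D Brownian trajectory
D_est = msd_diffusivity(track, dt)            # recovers D_true closely
```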

  5. Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH

    NASA Astrophysics Data System (ADS)

    Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.

    2018-01-01

    The smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution, which can markedly improve local accuracy and overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum and energy is ensured. Based on an error analysis, the splitting technique is designed to give optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with a particle splitting technique is of high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
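The four-daughter split with inherited physical parameters can be sketched as follows; the daughter spacing factor eps and smoothing-length ratio alpha are illustrative stand-ins, not the optimized values the paper derives from its error analysis:

```python
import numpy as np

def split_particle(pos, mass, h, eps=0.5, alpha=0.6):
    """Split a coarse (mother) SPH particle into four daughters on a
    square stencil. Total mass is conserved exactly; momentum is
    conserved if the daughters inherit the mother's velocity."""
    offsets = 0.5 * eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
    d_pos = pos + offsets                 # daughter positions
    d_mass = np.full(4, mass / 4.0)       # mass split equally
    d_h = np.full(4, alpha * h)           # refined smoothing length
    return d_pos, d_mass, d_h

d_pos, d_mass, d_h = split_particle(np.zeros(2), mass=1.0, h=0.1)
```

The symmetric stencil also keeps the daughters' center of mass at the mother's position, which matters for the hydrostatic cases the abstract highlights.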

  6. Improved localization accuracy in stochastic super-resolution fluorescence microscopy by K-factor image deshadowing

    PubMed Central

    Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev

    2013-01-01

    Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key capability in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies for single emitters that can reach an order of magnitude below the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in longer times needed to construct the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when applied to raw data prior to localization, can improve the localization accuracy of standard existing methods and also enables the localization of overlapping particles, allowing the use of increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in localization precision compared to single fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data with the same resolution. PMID:24466491

  7. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.

  8. Moving Object Localization Based on UHF RFID Phase and Laser Clustering

    PubMed Central

    Fu, Yulu; Wang, Changlong; Liang, Gaoli; Zhang, Hua; Ur Rehman, Shafiq

    2018-01-01

    RFID (Radio Frequency Identification) offers a way to identify objects without any contact. However, positioning accuracy is limited since RFID provides neither distance nor bearing information about the tag. This paper proposes a new and innovative approach for the localization of a moving object using a particle filter, by incorporating RFID phase and laser-based clustering from 2D laser range data. First, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate the laser range data into clusters and compute the distance-based velocity and moving direction of these clusters. We then compute and analyze the similarity between the two velocities and select the K clusters with the best similarity scores. We predict the particles according to the velocity and moving direction of the laser clusters. Finally, we update the weights of the particles based on the K clusters and achieve the localization of moving objects. The feasibility of this approach is validated on a Scitos G5 service robot, and the results prove that we have achieved a localization accuracy of up to 0.25 m. PMID:29522458

  9. Research of converter transformer fault diagnosis based on improved PSO-BP algorithm

    NASA Astrophysics Data System (ADS)

    Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping

    2017-09-01

    The BP (Back Propagation) neural network and conventional Particle Swarm Optimization (PSO), when applied to converter transformer fault diagnosis, converge repeatedly to the global best particle in the early stage, are easily trapped in local optima, and yield low diagnosis accuracy. To overcome these disadvantages, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm improves the inertia weight equation using an attenuation strategy based on a concave function to avoid the premature convergence of the PSO algorithm, and adopts a Time-Varying Acceleration Coefficient (TVAC) strategy to balance local and global search ability. Finally, simulation results show that the proposed approach better optimizes the BP neural network in terms of network output error, global search performance and diagnosis accuracy.
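The two ingredients named here can be sketched directly; the quadratic form of the concave attenuation and all endpoint values are assumptions for illustration, not necessarily the paper's exact choices:

```python
def inertia_weight(t, T, w_max=0.9, w_min=0.4):
    """Concave attenuation of the PSO inertia weight: w stays high
    early (favoring global search) and falls off faster late in the
    run (favoring local search)."""
    return w_min + (w_max - w_min) * (1.0 - (t / T) ** 2)

def tvac(t, T, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-Varying Acceleration Coefficients: the cognitive term c1
    shrinks while the social term c2 grows linearly over the run."""
    c1 = c1_i + (c1_f - c1_i) * t / T
    c2 = c2_i + (c2_f - c2_i) * t / T
    return c1, c2
```

Both schedules plug into the standard PSO velocity update v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x) in place of the usual fixed coefficients.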

  10. Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.

    PubMed

    Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam

    2018-05-26

    Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer drift error and large bias due to low-cost inertial sensors and the random motion of human beings, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches have recently been proposed that integrate the user's motion estimated by the dead reckoning (DR) method with the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), or particle filter (PF), to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integration approach and can provide the best localization performance. However, since PF uses a large number of particles to reach that performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF called a sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm achieves better positioning accuracy than the KF and UKF and performance comparable to the PF, with higher computational efficiency than the PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that realizes high positioning accuracy, computational efficiency, and energy efficiency through the SKPF and iBeacon indoors. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.

  11. A Statistical Examination of Magnetic Field Model Accuracy for Mapping Geosynchronous Solar Energetic Particle Observations to Lower Earth Orbits

    NASA Astrophysics Data System (ADS)

    Young, S. L.; Kress, B. T.; Rodriguez, J. V.; McCollough, J. P.

    2013-12-01

    Operational specifications of space environmental hazards can be an important input to decision makers. Ideally the specification would come from on-board sensors, but for satellites without that capability another option is to map data from remote observations to the location of the satellite. This requires a model of the physical environment and an understanding of its accuracy for mapping applications. We present a statistical comparison of magnetic field model mappings of solar energetic particle observations made by NOAA's Geostationary Operational Environmental Satellites (GOES) to the location of the Combined Release and Radiation Effects Satellite (CRRES). Because CRRES followed a geosynchronous transfer orbit that precessed in local time, we can examine the model accuracy between LEO and GEO orbits across a range of local times. We examine the accuracy of multiple magnetic field models using a variety of statistics and assess their utility for operational purposes.

  12. Stochastic localization of microswimmers by photon nudging.

    PubMed

    Bregulla, Andreas P; Yang, Haw; Cichos, Frank

    2014-07-22

    Force-free trapping and steering of single photophoretically self-propelled Janus-type particles using a feedback mechanism is experimentally demonstrated. Real-time information on particle position and orientation is used to switch the self-propulsion mechanism of the particle optically. The orientational Brownian motion of the particle thereby provides the reorientation mechanism for the microswimmer. The particle size dependence of the photophoretic propulsion velocity reveals that photon nudging provides increased positioning accuracy for decreasing particle radius. The steering mechanism explored here is suitable for navigation in complex biological environments and for in-depth studies of collective swimming effects.

  13. A hybridized discontinuous Galerkin framework for high-order particle-mesh operator splitting of the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias

    2018-04-01

    A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method, which allows the projections between the Lagrangian particle space and the Eulerian finite element space to be formulated efficiently in terms of local (i.e. cellwise) ℓ2-projections. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields that satisfy the incompressibility constraint very accurately in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated with results for the flow over a backward-facing step and for the flow around a cylinder.
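The cellwise ℓ2-projection at the heart of this particle-mesh coupling is, in each cell, an ordinary least-squares fit of scattered particle values onto the cell's polynomial basis. A minimal 1-D, single-cell sketch (the monomial basis and all names are illustrative, not the paper's HDG spaces):

```python
import numpy as np

def local_l2_projection(xp, up, degree=1):
    """Least-squares (l2) projection of particle values up, located at
    positions xp, onto a monomial basis {1, x, ...} on one cell."""
    V = np.vander(xp, degree + 1, increasing=True)   # basis evaluated at particles
    coeffs, *_ = np.linalg.lstsq(V, up, rcond=None)
    return coeffs

rng = np.random.default_rng(4)
xp = rng.uniform(0.0, 1.0, 200)   # particle positions inside the cell
up = 2.0 + 3.0 * xp               # a linear particle field
coeffs = local_l2_projection(xp, up)
```

A linear particle field is reproduced exactly by a degree-1 projection, which is the sense in which these cellwise projections preserve the spatial accuracy of the scheme.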

  14. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node positions, to move from the randomly set positions toward the original positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
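The spring relaxation can be sketched for a single blind node with range measurements to three anchors; the spring constant k, the iteration count, and all coordinates are illustrative, not the paper's parameters:

```python
import numpy as np

def lasm_step(blind, anchors, dists, k=0.1):
    """One spring-relaxation step: each virtual spring pushes or pulls
    the blind node until its distance to the anchor matches the
    measured range."""
    force = np.zeros(2)
    for a, d in zip(anchors, dists):
        vec = blind - a
        cur = np.linalg.norm(vec)
        force += -k * (cur - d) * vec / cur   # Hooke-like restoring force
    return blind + force

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # measured ranges
est = np.array([5.0, 5.0])                          # arbitrary initial guess
for _ in range(300):                                # relax until settled
    est = lasm_step(est, anchors, dists)
```

Per-node cost depends only on the number of neighbors, which is the property behind the O(1) complexity claim.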

  15. Novel joint TOA/RSSI-based WCE location tracking method without prior knowledge of biological human body tissues.

    PubMed

    Ito, Takahiro; Anzai, Daisuke; Jianqing Wang

    2014-01-01

    This paper proposes a novel joint time of arrival (TOA)/received signal strength indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method that requires no prior knowledge of biological human tissues. In general, TOA-based localization can achieve much higher accuracy than other radio frequency-based localization techniques; however, wireless signals transmitted from a WCE pass through various kinds of human body tissue, so the propagation velocity inside a human body differs from that in free space. Because the variation of propagation velocity is mainly determined by the relative permittivity of the human body tissues, instead of measuring the relative permittivity in advance, we simultaneously estimate both the WCE location and the relative permittivity. For this purpose, this paper first derives a relative permittivity estimation model from measured RSSI information. We then apply a particle filter algorithm combining TOA-based localization with RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter achieves an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of the human body tissues.

  16. SPH with dynamical smoothing length adjustment based on the local flow kinematics

    NASA Astrophysics Data System (ADS)

    Olejnik, Michał; Szewc, Kamil; Pozorski, Jacek

    2017-11-01

    Due to the Lagrangian nature of Smoothed Particle Hydrodynamics (SPH), the adaptive resolution remains a challenging task. In this work, we first analyse the influence of the simulation parameters and the smoothing length on solution accuracy, in particular in high strain regions. Based on this analysis we develop a novel approach to dynamically adjust the kernel range for each SPH particle separately, accounting for the local flow kinematics. We use the Okubo-Weiss parameter that distinguishes the strain and vorticity dominated regions in the flow domain. The proposed development is relatively simple and implies only a moderate computational overhead. We validate the modified SPH algorithm for a selection of two-dimensional test cases: the Taylor-Green flow, the vortex spin-down, the lid-driven cavity and the dam-break flow against a sharp-edged obstacle. The simulation results show good agreement with the reference data and improvement of the long-term accuracy for unsteady flows. For the lid-driven cavity case, the proposed dynamical adjustment remedies the problem of tensile instability (particle clustering).
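The Okubo-Weiss parameter used above has a standard closed form, W = s_n² + s_s² − ω², positive in strain-dominated regions and negative where vorticity dominates. A grid-based sketch (the finite-difference evaluation is illustrative; SPH evaluates these gradients at particles):

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter on a 2-D velocity grid (u, v indexed
    [y, x]): W > 0 strain-dominated, W < 0 vorticity-dominated."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy        # normal strain
    s_s = dv_dx + du_dy        # shear strain
    omega = dv_dx - du_dy      # vorticity
    return s_n**2 + s_s**2 - omega**2

# Solid-body rotation (u, v) = (-y, x): pure vorticity, so W = -4 everywhere.
x = np.linspace(-1.0, 1.0, 50)
y = np.linspace(-1.0, 1.0, 50)
X, Y = np.meshgrid(x, y)
W = okubo_weiss(-Y, X, x[1] - x[0], y[1] - y[0])
```

Thresholding W per particle is then enough to decide whether it sits in a strain- or vorticity-dominated region when adjusting its smoothing length.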

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    LAGASSE,ROBERT R.; THOMPSON,KYLE R.

    The goal of this work is to develop techniques for measuring gradients in particle concentration within filled polymers, such as encapsulant. A high concentration of filler particles is added to such materials to tailor physical properties such as the thermal expansion coefficient. Sedimentation and flow-induced migration of particles can produce concentration gradients that are most severe near material boundaries. Therefore, techniques for measuring local particle concentration should be accurate near boundaries. Particle gradients in an alumina-filled epoxy resin are measured with a spatial resolution of 0.2 mm using an x-ray beam attenuation technique, but an artifact related to the finite diameter of the beam reduces accuracy near the specimen's edge. Local particle concentration near an edge can be measured more reliably using microscopy coupled with image analysis. This is illustrated by measuring concentration profiles of glass particles having 40 µm median diameter using images acquired by a confocal laser fluorescence microscope. The mean of the measured profiles of volume fraction agrees to better than 3% with the expected value, and the shape of the profiles agrees qualitatively with simple theory for sedimentation of monodisperse particles. Extending this microscopy technique to the smaller, micron-scale filler particles used in encapsulant for microelectronic devices is illustrated by measuring the local concentration of an epoxy resin containing 0.41 volume fraction of silica.

  18. Quasi-particle energy spectra in local reduced density matrix functional theory.

    PubMed

    Lathiotakis, Nektarios N; Helbig, Nicole; Rubio, Angel; Gidopoulos, Nikitas I

    2014-10-28

    Recently, we introduced [N. N. Lathiotakis, N. Helbig, A. Rubio, and N. I. Gidopoulos, Phys. Rev. A 90, 032511 (2014)] local reduced density matrix functional theory (local RDMFT), a theoretical scheme capable of incorporating static correlation effects in Kohn-Sham equations. Here, we apply local RDMFT to molecular systems of relatively large size, as a demonstration of its computational efficiency and its accuracy in predicting single-electron properties from the eigenvalue spectrum of the single-particle Hamiltonian with a local effective potential. We present encouraging results on the photoelectron spectra of molecular systems and the relative stability of C20 isomers. In addition, we propose modelling the fractional occupancies as functions of the orbital energies, which further improves the efficiency of the method and is useful in applications to large systems and solids.

  19. Rapid, topology-based particle tracking for high-resolution measurements of large complex 3D motion fields.

    PubMed

    Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian

    2018-04-03

    Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
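The regime that limits classical trackers is visible in the nearest-neighbor baseline itself, which only links reliably when displacements stay below the interparticle spacing. A greedy sketch of that baseline (this is the approach the paper improves upon, not the authors' feature-vector algorithm; all names are illustrative):

```python
import numpy as np

def nearest_neighbor_link(frame_a, frame_b, max_disp):
    """Greedily link each particle in frame_a to its nearest unclaimed
    neighbor in frame_b, rejecting links longer than max_disp."""
    links, used = [], set()
    for i, p in enumerate(frame_a):
        d = np.linalg.norm(frame_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            links.append((i, j))
            used.add(j)
    return links

rng = np.random.default_rng(3)
a = rng.uniform(0.0, 100.0, size=(50, 2))      # sparse particle field
b = a + rng.normal(0.0, 0.1, size=a.shape)     # displacements << spacing
links = nearest_neighbor_link(a, b, max_disp=1.0)
```

Once displacements approach the interparticle spacing this linking breaks down, which is what motivates the topology-encoding feature vectors and iterative warping described in the abstract.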

  20. Ultrahigh-order Maxwell solver with extreme scalability for electromagnetic PIC simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri; Vay, Jean-Luc

    2018-07-01

    The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly-scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasmas simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiations, we have shown that the inaccuracies of standard FD-based PIC methods prevent the modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.

  1. Drift correction of the dissolved signal in single particle ICPMS.

    PubMed

    Cornelis, Geert; Rauch, Sebastien

    2016-07-01

    A method is presented where drift, the random fluctuation of the signal intensity, is compensated for based on the estimation of the drift function by a moving average. It was shown using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs that drift reduces accuracy of spICPMS analysis at the calibration stage and during calculations of the particle size distribution (PSD), but that the present method can again correct the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models signal distributions of dissolved signals, fails in some cases when using standards and samples affected by drift, but the present method was shown to improve accuracy again. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method, and the signals are treated separately and added again. The method can also correct for flicker noise that increases when signal intensity is increased because of drift. The accuracy was improved in many cases when flicker correction was used, but when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful to extract results from experimental runs that would otherwise have to be run again. Graphical Abstract: A method is presented where a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removing particle events (blue) in the case of calibration standards, this method is shown to obtain particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
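The moving-average part of the correction can be sketched on a synthetic drifting baseline; the window length and the choice of the first sample as the reference time are illustrative, and the particle-event removal and standard-deviation adjustment from the abstract are omitted:

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average, padding the ends by reflection."""
    pad = window // 2
    xp = np.pad(x, pad, mode='reflect')
    return np.convolve(xp, np.ones(window) / window, mode='valid')

def correct_drift(signal, window=201):
    """Shift every sample so the local (moving) mean matches the
    local mean at a reference time (here, the start of the run)."""
    trend = moving_average(signal, window)
    return signal - trend + trend[0]

rng = np.random.default_rng(2)
t = np.arange(5000)
raw = 100.0 + 0.002 * t + rng.normal(0.0, 1.0, t.size)  # drifting baseline
corrected = correct_drift(raw)                          # drift removed
```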

  2. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The search is formulated as an optimization problem that can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to obtain an efficient search mechanism and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.

  3. Quantum effects of nuclear motion in three-particle diatomic ions

    NASA Astrophysics Data System (ADS)

    Baskerville, Adam L.; King, Andrew W.; Cox, Hazel

    2016-10-01

    A high-accuracy, nonrelativistic wave function is used to study nuclear motion in the ground state of three-particle {a1+ a2+ a3-} electronic and muonic molecular systems without assuming the Born-Oppenheimer approximation. Intracule densities and center-of-mass particle densities show that as the mass ratio m_ai/m_a3 (i = 1, 2) becomes smaller, the localization of the like-charged particles (nuclei) a1 and a2 decreases. A coordinate system is presented to calculate center-of-mass particle densities for systems where a1 ≠ a2. It is shown that the nuclear motion is strongly correlated and depends on the relative masses of the nuclei a1 and a2 rather than just their absolute mass. The heavier particle is always more localized, and the lighter the partner mass, the greater the localization. It is shown, for systems with m_a1

  4. A solution algorithm for fluid-particle flows across all flow regimes

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Fox, Rodney O.

    2017-09-01

    Many fluid-particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle-particle collisions are rare. Thus, in order to simulate such fluid-particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas-particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid-particle flows.

  5. A solution algorithm for fluid–particle flows across all flow regimes

    DOE PAGES

    Kong, Bo; Fox, Rodney O.

    2017-05-12

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can seamlessly handle all flow regimes present in fluid–particle flows.

  6. A solution algorithm for fluid–particle flows across all flow regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Bo; Fox, Rodney O.

    Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can seamlessly handle all flow regimes present in fluid–particle flows.

  7. The constant displacement scheme for tracking particles in heterogeneous aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, X.H.; Gomez-Hernandez, J.J.

    1996-01-01

    Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster than the constant time step method for the same degree of accuracy. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural log-transmissivity variance of 4 can be 8.6 times faster than with the constant time step scheme.
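
    The core of the constant displacement scheme is choosing each particle's time step as dt = ds / |v(x)|, so every step advances the particle the same distance ds regardless of the local velocity. A minimal 1-D sketch (the velocity field, step size, and stopping tolerance here are illustrative, not from the record):

```python
def track_constant_displacement(x0, velocity, ds, x_end, tol=1e-12):
    """Advect a particle through a 1-D velocity field until it reaches x_end.

    Each step uses the local time step dt = ds / |v|, so the particle always
    moves a constant distance ds.  Returns (position, elapsed_time, n_steps).
    The small tolerance guards against float round-off at the endpoint."""
    x, t, steps = x0, 0.0, 0
    while x < x_end - tol:
        v = velocity(x)
        dt = ds / abs(v)       # small dt where the flow is fast, large where slow
        x += v * dt            # advances exactly ds when v > 0
        t += dt
        steps += 1
    return x, t, steps
```

    With an accelerating field v(x) = 1 + x, the particle crosses [0, 1] in exactly 1/ds steps, while the accumulated travel time approximates the integral of dx/(1+x) = ln 2.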

  8. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared to using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.

  9. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. To address this defect of the traditional particle swarm optimization algorithm, an indicator measuring the degree of premature convergence was designed. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into local optima. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and the bias field was finally estimated and corrected. Compared with the improved entropy minimization algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that achieved with the improved entropy minimization algorithm. This algorithm can be applied to the correction of MR image bias fields.
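
    The idea of raising the inertia weight when a premature-convergence indicator fires (to restore exploration) and keeping it low otherwise (to exploit) can be sketched in a toy PSO. Everything concrete here is an assumption for illustration: the indicator (mean absolute spread of particle fitness), the two inertia values, the coefficients, and the sphere objective standing in for the bias-field fitting problem.

```python
import random

def adaptive_pso(f, dim=2, n=20, iters=200, seed=1):
    """Toy PSO whose inertia weight rises when the swarm looks premature
    (small spread of particle fitness) and stays low when it is diverse."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        fits = [f(p) for p in pos]
        mean_f = sum(fits) / n
        spread = sum(abs(x - mean_f) for x in fits) / n
        # Premature-convergence indicator: tiny fitness spread -> raise the
        # inertia weight to push particles back into exploration.
        w = 0.9 if spread < 1e-3 else 0.729
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 1.494 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.494 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

    On the 2-D sphere function the swarm reliably closes in on the global minimum at the origin.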

  10. Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.

    PubMed

    Du, Xinxin; Tan, Kok Kiong

    2016-05-01

    Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use LIDAR technology. However, it is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system, which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experimental results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.

  11. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    PubMed

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Adaptive Neural Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
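
    As context for the range-based approach: the classical (non-learned) way to turn RSSI into a range is to invert the log-distance path-loss model, and three anchor ranges then fix a 2-D position by trilateration. This sketch shows that baseline, not the paper's ANFIS/ANN models; the reference power and path-loss exponent are assumed values.

```python
def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(d0) - 10*n*log10(d/d0)  =>  d = d0 * 10**((RSSI(d0)-RSSI)/(10n)).
    rssi_d0 and the exponent n are illustrative, environment-dependent values."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10 * n))

def trilaterate(anchors, dists):
    """Closed-form 2-D trilateration from three anchors and their ranges.

    Subtracting the first circle equation from the other two yields a 2x2
    linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

    In practice RSSI ranging is noisy, which is precisely why the paper trains soft computing models instead of relying on a fixed path-loss law.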

  12. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    PubMed Central

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Adaptive Neural Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495

  13. Modeling particle number concentrations along Interstate 10 in El Paso, Texas

    PubMed Central

    Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias

    2014-01-01

    Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 103 particles/cc and 1.3 × 105 particles/cc, and averaged 2.5 × 104 particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I-10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias among urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 1014 particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economic aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways is feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
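
    To make the arithmetic concrete: an emission factor (particles per vehicle-kilometre) times a traffic rate gives a line-source strength, which wind then dilutes, and a background term is added. The sketch below is a deliberately crude box model, not the dispersion/regression models of the record; only the emission factor value comes from the abstract, while the mixing height and background are hypothetical placeholders.

```python
def roadside_pnc(aadt, wind_speed, mixing_height=10.0,
                 ef=1.63e14, background=5.0e3):
    """Crude line-source box estimate of roadside particle number
    concentration (particles/cm^3).

    aadt          : annual average daily traffic (vehicles/day)
    wind_speed    : annual average wind speed (m/s)
    mixing_height : assumed vertical mixing depth (m), illustrative
    ef            : emission factor, particles/veh-km (value from the record)
    background    : assumed local background (particles/cm^3), illustrative

    Line-source strength q (particles per metre of road per second) diluted
    as C = q / (u * H), converted from m^-3 to cm^-3."""
    veh_per_s = aadt / 86400.0
    q = ef / 1000.0 * veh_per_s            # particles per metre of road per second
    conc_per_m3 = q / (wind_speed * mixing_height)
    return conc_per_m3 / 1e6 + background  # particles/cm^3
```

    The model behaves as expected qualitatively: stronger wind lowers the roadside concentration, and zero traffic leaves only the background term.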

  14. Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft

    NASA Astrophysics Data System (ADS)

    Nichols, T. W.

    Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available on such systems, and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach to make a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes, sophisticated hardware, and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems, and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) (to differentiate it from particle streak velocimetry, which makes a field measurement rather than a single bulk flow measurement), utilizes a modified Canny edge detection algorithm with connected component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) solution approach. A single-DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests, yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the techniques of implementation. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.
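
    One small, well-defined piece of the pipeline described above is recovering a streak's orientation from its pixels via principal component analysis: the streak direction is the leading eigenvector of the 2x2 covariance matrix of the pixel coordinates. This sketch covers only that step (edge detection, connected components, end detection, and RANSAC are omitted).

```python
import math

def streak_direction(pixels):
    """Orientation (radians, in (-pi/2, pi/2]) of a particle streak from its
    pixel coordinates, via the principal axis of the coordinate covariance."""
    n = len(pixels)
    mx = sum(p[0] for p in pixels) / n
    my = sum(p[1] for p in pixels) / n
    sxx = sum((p[0] - mx) ** 2 for p in pixels) / n
    syy = sum((p[1] - my) ** 2 for p in pixels) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pixels) / n
    # Closed form for the leading eigenvector angle of a 2x2 covariance matrix.
    return 0.5 * math.atan2(2 * sxy, sxx - syy)
```

    A streak lying along the image diagonal comes out at 45 degrees, and a horizontal streak at 0.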

  15. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to search 3 quasi-linear expressions to replace the original linear expression in the MSC method, namely quadratic, cubic, and growth curve expressions. The local weighted function is then constructed by introducing 4 kernel functions: the Gaussian, Epanechnikov, Biweight, and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and a PCA-BP neural network, which can be used for estimating the accuracy of corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of 3 spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and can also enhance the information in spectral peaks. This paper provides a more efficient way to significantly enhance the correlation between corrected spectra and coal qualities, and to substantially improve the accuracy and stability of the analytical model.
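
    The core idea, fitting the scatter correction locally at each wavelength with kernel weights instead of one global MSC fit, can be sketched as follows. This is a simplified illustration: it keeps the linear expression and the Gaussian kernel only (the paper also evaluates quadratic/cubic/growth expressions and the Epanechnikov, Biweight, and Triweight kernels), and the bandwidth is an assumed value.

```python
import math

def local_weighted_msc(spectrum, reference, bandwidth=5.0):
    """MSC-style correction where the slope/offset are refit at every
    wavelength point with Gaussian weights over neighbouring points."""
    n = len(spectrum)
    corrected = []
    for i in range(n):
        w = [math.exp(-0.5 * ((j - i) / bandwidth) ** 2) for j in range(n)]
        sw = sum(w)
        mr = sum(wj * r for wj, r in zip(w, reference)) / sw
        ms = sum(wj * s for wj, s in zip(w, spectrum)) / sw
        cov = sum(wj * (r - mr) * (s - ms)
                  for wj, r, s in zip(w, reference, spectrum)) / sw
        var = sum(wj * (r - mr) ** 2 for wj, r in zip(w, reference)) / sw
        b = cov / var if var > 0 else 1.0     # local multiplicative scatter
        a = ms - b * mr                       # local additive scatter
        corrected.append((spectrum[i] - a) / b)
    return corrected
```

    A spectrum distorted by pure multiplicative and additive scatter (s = 2r + 3) is recovered back to the reference almost exactly.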

  16. A multi-dimensional, energy- and charge-conserving, nonlinearly implicit, electromagnetic Vlasov–Darwin particle-in-cell algorithm

    DOE PAGES

    Chen, G.; Chacón, L.

    2015-08-11

    For decades, the Vlasov–Darwin model has been recognized to be attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for the Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. This algorithm conserves total energy, local charge, and canonical momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.

  17. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
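
    The inverse problem described above, finding the (location, severity) pair whose predicted natural frequencies best match the measured ones, can be illustrated with a toy surrogate model and an exhaustive search standing in for PSO. Everything here is hypothetical for illustration: the surrogate `toy_model` (frequencies drop in proportion to severity and a mode-shape factor at the crack) is not the FEM of the paper, and grid search replaces the PSO optimizer.

```python
import math

def crack_objective(measured, predicted):
    """Sum of squared relative frequency residuals, the quantity minimized."""
    return sum(((m - p) / m) ** 2 for m, p in zip(measured, predicted))

def toy_model(loc, sev, base=(50.0, 140.0, 270.0)):
    """Hypothetical surrogate: mode k drops with severity times a
    mode-shape-squared factor at the crack location loc in [0, 1]."""
    return [f * (1 - 0.2 * sev * math.sin((k + 1) * math.pi * loc / 2) ** 2)
            for k, f in enumerate(base)]

def locate_crack(measured, model, locations, severities):
    """Exhaustive search over (location, severity); a PSO would search the
    same objective landscape more efficiently."""
    return min(((loc, sev) for loc in locations for sev in severities),
               key=lambda ls: crack_objective(measured, model(*ls)))
```

    Generating "measured" frequencies from a known crack and searching the grid recovers the true location and severity, since the objective is zero exactly there.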

  18. Intensity-enhanced MART for tomographic PIV

    NASA Astrophysics Data System (ADS)

    Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun

    2016-05-01

    A novel technique to shrink elongated particles and suppress ghost particles in the particle reconstruction step of tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity suppressing factor to improve the quality of particle reconstruction and consequently the precision of the velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction, particle elongation, and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate ghosts from actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the applicability of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
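
    IntE-MART builds on the standard multiplicative algebraic reconstruction technique (MART). As a hedged illustration of that baseline (without the inverse-diffusion and intensity-suppression modifications of the record), the multiplicative update E_j <- E_j * (I_i / proj_i)^(mu * w_ij) can be sketched on a tiny system:

```python
def mart(weights, measurements, n_vox, iters=50, mu=1.0):
    """Plain MART reconstruction of voxel intensities E from line-of-sight
    measurements I.  weights[i][j] is the contribution of voxel j to ray i."""
    E = [1.0] * n_vox                       # uniform positive initial guess
    for _ in range(iters):
        for wi, Ii in zip(weights, measurements):
            proj = sum(w * e for w, e in zip(wi, E))   # forward projection
            if proj <= 0:
                continue
            ratio = Ii / proj
            # Multiplicative update: voxels on the ray are scaled toward
            # agreement with the measurement, weighted by w_ij.
            E = [e * ratio ** (mu * w) for w, e in zip(wi, E)]
    return E
```

    For a consistent 3-ray, 2-voxel system the iteration reproduces the true intensities; in real tomo-PIV the system is underdetermined, which is where ghost particles arise and why IntE-MART adds its extra steps.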

  19. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
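
    For reference, the fixed-bandwidth KDE baseline that the record's adaptive method improves on looks like this in 1-D: a Gaussian kernel of width h around each particle, with h set once by a rule of thumb (Silverman's rule is used here as an illustrative stand-in for the "optimal KDE" the abstract mentions).

```python
import math

def silverman_bandwidth(xs):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian kernel."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return 1.06 * sd * n ** (-0.2)

def kde(xs, x, h):
    """Particle number density at x: a Gaussian kernel of width h around
    each particle position, normalized by the particle count."""
    c = 1.0 / (len(xs) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)
```

    The record's point is that recomputing such a bandwidth at every step is expensive, and that a fixed rule ignores the flow physics and the plume history, motivating kernels that adapt locally and particles that split and merge.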

  20. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region.
    Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  2. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    NASA Astrophysics Data System (ADS)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. 
The relative errors between MPC and full DSMC results are greatly reduced as a direct result of these improvements. Next, a new parameter for detecting rotational nonequilibrium effects is proposed and shown to offer advantages over other continuum breakdown parameters, achieving further accuracy gains. Lastly, the capabilities of the MPC method are extended to accommodate multiple chemical species in rotational nonequilibrium, each of which is allowed to equilibrate independently, enabling application of the MPC method to more realistic atmospheric flows.

  3. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment.

    PubMed

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-05-13

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated using real-world data collected by low-cost on-board vehicle sensors. A comparison study based on the real-world experiments and statistical analysis demonstrates that the proposed nGDPS achieves a significant improvement in vehicle state estimation accuracy and outperforms the existing filtering and smoothing methods.
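
A minimal sketch of the filtering core behind this record: a bootstrap particle filter whose Gaussian likelihood is replaced by a heavy-tailed Student's t likelihood, here on a hypothetical 1-D constant-velocity vehicle. The motion model, noise levels and degrees-of-freedom value are illustrative assumptions, not the paper's calibrated setup.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)
n_particles, dt, nu = 500, 1.0, 3.0  # nu: assumed t degrees of freedom

# Hypothetical 1-D vehicle moving at 2 m/s, observed with heavy-tailed noise
true_pos = np.cumsum(np.full(20, 2.0) * dt)
meas = true_pos + student_t.rvs(nu, size=20, random_state=1)

particles = rng.normal(0.0, 5.0, n_particles)
est = []
for z in meas:
    # propagate with the motion model plus process noise
    particles = particles + 2.0 * dt + rng.normal(0.0, 0.5, n_particles)
    # heavy-tailed Student's t likelihood instead of a Gaussian
    w = student_t.pdf(z - particles, nu)
    w = w / w.sum()
    est.append(np.sum(w * particles))
    # systematic resampling to counter sample degeneracy
    u = (rng.random() + np.arange(n_particles)) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), n_particles - 1)
    particles = particles[idx]

rmse = float(np.sqrt(np.mean((np.array(est) - true_pos) ** 2)))
```

The t likelihood discounts outlier fixes that would dominate a Gaussian weight update, which is the point of modeling non-Gaussian noise explicitly.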

  4. Nonlocal screening in metal surfaces

    NASA Technical Reports Server (NTRS)

    Krotscheck, E.; Kohn, W.

    1986-01-01

    Due to the effect of the nonuniform environment on the static screening of the Coulomb potential, the local-density approximation for the particle-hole interaction is found to be inadequate to determine the surface energy of simple metals. Use of the same set of single-particle states, and thus the same one-body density and the same work function, has eliminated the single-electron states in favor of the structure of the short-ranged correlations as the basis of this effect. A posteriori simplifications of the Fermi hypernetted-chain theory may be found to allow the same calculational accuracy with simpler computational tools.

  5. Decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio.

    PubMed

    Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan

    2016-10-01

    A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function, which includes the rotational velocity of the particle, is decoupled into two parts: the local equilibrium distribution function of the translational velocity of the particle and that of the rotational velocity. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, one associated with the translational velocity and the other with the rotational velocity. Accordingly, the distribution function is also decoupled, and the evolution equation splits into an evolution equation for the translational velocity and one for the rotational velocity; the two evolution equations evolve separately. Because the lattice Boltzmann models used in the proposed scheme are constructed via the Hermite expansion, it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.
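
A small numerical illustration of why the Hermite route makes discrete-velocity model construction systematic: probabilists' Gauss-Hermite quadrature reproduces the low-order moments of a Maxwellian exactly with only a handful of nodes. This is a generic property the expansion relies on, not the paper's specific lattice.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Probabilists' Gauss-Hermite nodes/weights integrate f(v) exp(-v^2/2)
# exactly for polynomial f up to degree 2n - 1, so a handful of discrete
# velocities reproduces the Maxwellian's low-order moments exactly
nodes, weights = hermegauss(5)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize against the Gaussian

m0 = float(np.sum(weights))               # density (= 1)
m2 = float(np.sum(weights * nodes ** 2))  # second moment (= 1)
m4 = float(np.sum(weights * nodes ** 4))  # fourth moment (= 3)
```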

  6. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach.

    PubMed

    Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin

    2017-12-08

    After decades of research, there is still no indoor localization solution comparable to the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons are the complex spatial topology and RF transmission environment of indoor spaces. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. In contrast to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning, which has proven highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of the particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system: offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open-source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for indoor scene model training and communication with the Android client. To evaluate the performance, comparison experiments are conducted and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without the scene constraint, including commercial products such as IndoorAtlas.
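
The WiFi-plus-magnetic weight update described above can be sketched with a synthetic fingerprint map. All map values, noise levels and likelihood widths below are made-up assumptions; real systems build these maps from survey data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up 10 m x 10 m fingerprint maps on a 1 m grid: WiFi RSS (dBm)
# and magnetic field magnitude (uT)
xs = np.arange(10)
rss_map = -40.0 - 0.8 * (xs[:, None] + xs[None, :])
mag_map = 45.0 + 0.3 * xs[:, None] - 0.2 * xs[None, :]

true_xy = np.array([6.0, 3.0])
z_rss = rss_map[6, 3] + rng.normal(0, 1.0)   # noisy observations taken
z_mag = mag_map[6, 3] + rng.normal(0, 0.2)   # at the user's true cell

# Particles spread over the floor; each modality contributes a likelihood
px = rng.integers(0, 10, 2000)
py = rng.integers(0, 10, 2000)
w = (np.exp(-(z_rss - rss_map[px, py]) ** 2 / (2 * 2.0 ** 2))
     * np.exp(-(z_mag - mag_map[px, py]) ** 2 / (2 * 0.5 ** 2)))
w = w / w.sum()
est = np.array([np.sum(w * px), np.sum(w * py)])
err = float(np.linalg.norm(est - true_xy))
```

Multiplying the two likelihoods is what lets a coarse WiFi fix and a fine-grained magnetic signature jointly pin down the position.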

  7. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    PubMed Central

    Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng

    2017-01-01

    After decades of research, there is still no indoor localization solution comparable to the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons are the complex spatial topology and RF transmission environment of indoor spaces. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. In contrast to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning, which has proven highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of the particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system: offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open-source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for indoor scene model training and communication with the Android client. To evaluate the performance, comparison experiments are conducted and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without the scene constraint, including commercial products such as IndoorAtlas. PMID:29292761

  8. Noiseless Vlasov-Poisson simulations with linearly transformed particles

    DOE PAGES

    Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...

    2014-06-25

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first-order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, the benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.

  9. Super-resolution and super-localization microscopy: A novel tool for imaging chemical and biological processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Bin

    2015-01-01

    Optical microscopy imaging of single molecules and single particles is an essential method for studying fundamental biological and chemical processes at the molecular and nanometer scale. The best spatial resolution (~ λ/2) achievable in traditional optical microscopy is governed by the diffraction of light. However, single molecule-based super-localization and super-resolution microscopy imaging techniques have emerged in the past decade, with which individual molecules can be localized with nanometer-scale accuracy and precision for the study of biological and chemical processes. In this dissertation, the coupling of molecular transport and catalytic reaction at the single-molecule and single-particle level in multilayer mesoporous nanocatalysts was elucidated; most previous studies dealt with these two important phenomena separately. A fluorogenic oxidation reaction of non-fluorescent amplex red to highly fluorescent resorufin was tested. The diffusion behavior of single resorufin molecules in aligned nanopores was studied using total internal reflection fluorescence microscopy (TIRFM). This work uncovered the heterogeneous properties of the pore structures.

  10. An electrostatic Particle-In-Cell code on multi-block structured meshes

    NASA Astrophysics Data System (ADS)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David

    2017-12-01

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
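
Point (1), the logical-to-physical mapping, can be illustrated on a single hypothetical block: a uniform unit-square logical mesh mapped to a quarter-annulus physical block, with an analytic inverse for carrying particle positions back to logical space. CPIC's actual blocks and mover are more general; this is only a sketch of the mapping idea.

```python
import numpy as np

# One hypothetical block: logical (xi, eta) on the unit square maps to a
# quarter annulus with radius in [1, 2] in physical space
def to_physical(xi, eta):
    r, th = 1.0 + xi, 0.5 * np.pi * eta
    return r * np.cos(th), r * np.sin(th)

def to_logical(x, y):
    return np.hypot(x, y) - 1.0, np.arctan2(y, x) / (0.5 * np.pi)

# Round-trip a uniform logical mesh through the mapping and back
xi, eta = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
x, y = to_physical(xi, eta)
xi2, eta2 = to_logical(x, y)
err = float(max(np.abs(xi - xi2).max(), np.abs(eta - eta2).max()))
```

Working in logical space keeps the mesh uniform and Cartesian, so particle-cell lookups stay trivial even when the physical block is arbitrarily distorted.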

  11. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE PAGES

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...

    2017-09-14

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  12. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  13. Direct numerical simulation of particulate flows with an overset grid method

    NASA Astrophysics Data System (ADS)

    Koblitz, A. R.; Lovett, S.; Nikiforakis, N.; Henshaw, W. D.

    2017-08-01

    We evaluate an efficient overset grid method for two-dimensional and three-dimensional particulate flows for small numbers of particles at finite Reynolds number. The rigid particles are discretised using moving overset grids overlaid on a Cartesian background grid. This allows for strongly-enforced boundary conditions and local grid refinement at particle surfaces, thereby accurately capturing the viscous boundary layer at modest computational cost. The incompressible Navier-Stokes equations are solved with a fractional-step scheme which is second-order-accurate in space and time, while the fluid-solid coupling is achieved with a partitioned approach including multiple sub-iterations to increase stability for light, rigid bodies. Through a series of benchmark studies we demonstrate the accuracy and efficiency of this approach compared to other boundary conformal and static grid methods in the literature. In particular, we find that fully resolving boundary layers at particle surfaces is crucial to obtain accurate solutions to many common test cases. With our approach we are able to compute accurate solutions using as little as one third the number of grid points as uniform grid computations in the literature. A detailed convergence study shows a 13-fold decrease in CPU time over a uniform grid test case whilst maintaining comparable solution accuracy.

  14. Fusion of WiFi, smartphone sensors and landmarks using the Kalman filter for indoor localization.

    PubMed

    Chen, Zhenghua; Zou, Han; Jiang, Hao; Zhu, Qingchang; Soh, Yeng Chai; Xie, Lihua

    2015-01-05

    Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this is exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system runs on a smartphone, which is resource limited, we formulate the sensor fusion problem from a linear perspective, so that a Kalman filter can be applied instead of the particle filter that is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of the individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and the individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m.
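
A minimal 1-D sketch of the linear fusion idea: PDR step lengths drive the Kalman prediction and noisy WiFi fixes drive the update. All noise parameters below are illustrative, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D corridor walk: PDR reports noisy step lengths,
# WiFi reports noisy absolute positions
true_steps = np.full(50, 0.7)
pdr_steps = true_steps + rng.normal(0, 0.05, 50)
true_pos = np.cumsum(true_steps)
wifi = true_pos + rng.normal(0, 2.0, 50)

x, P = 0.0, 1.0
Q, R = 0.05 ** 2, 2.0 ** 2  # process (PDR) and measurement (WiFi) variances
est = []
for u, z in zip(pdr_steps, wifi):
    x, P = x + u, P + Q                  # predict: dead-reckon one step
    K = P / (P + R)                      # Kalman gain
    x, P = x + K * (z - x), (1 - K) * P  # update with the WiFi fix
    est.append(x)

rmse = float(np.sqrt(np.mean((np.array(est) - true_pos) ** 2)))
wifi_rmse = float(np.sqrt(np.mean((wifi - true_pos) ** 2)))
```

The fused estimate bounds the PDR drift with WiFi while averaging away the WiFi noise, and the scalar recursion is far cheaper on a phone than propagating a particle cloud.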

  15. Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization

    PubMed Central

    Chen, Zhenghua; Zou, Han; Jiang, Hao; Zhu, Qingchang; Soh, Yeng Chai; Xie, Lihua

    2015-01-01

    Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this will be exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system is running on a smartphone, which is resource limited, we formulate the sensor fusion problem in a linear perspective, then a Kalman filter is applied instead of a particle filter, which is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m. PMID:25569750

  16. Hybrid-DFT+Vw method for band structure calculation of semiconducting transition metal compounds: the case of cerium dioxide.

    PubMed

    Ivády, Viktor; Gali, Adam; Abrikosov, Igor A

    2017-11-15

    Hybrid functionals' non-local exchange-correlation potential contains a derivative discontinuity that improves on standard semi-local density functional theory (DFT) band gaps. Moreover, by careful parameterization, hybrid functionals can provide a self-interaction-reduced description of selected states. On the other hand, the uniform description of all the electronic states of a given system is a known drawback of these functionals that causes varying accuracy in the description of states with different degrees of localization. This limitation can be remedied by the orbital-dependent exact exchange extension of hybrid functionals, the hybrid-DFT+Vw method (Ivády et al 2014 Phys. Rev. B 90 035146). Based on the analogy of quasi-particle equations and hybrid-DFT single-particle equations, here we demonstrate that the parameters of the hybrid-DFT+Vw functional can be determined from approximate theoretical quasi-particle spectra without any fitting to experiment. The proposed method is illustrated on the charge self-consistent electronic structure calculation for cerium dioxide, where itinerant valence states interact with well-localized 4f atomic-like states, making this system challenging for conventional methods, either hybrid-DFT or LDA+U, and therefore allowing for a demonstration of the advantages of the proposed scheme.

  17. Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) by particle swarm optimization

    PubMed Central

    Tehrani, Kayvan F.; Zhang, Yiwen; Shen, Ping; Kner, Peter

    2017-01-01

    Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and the number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm. PMID:29188105

  18. Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) by particle swarm optimization.

    PubMed

    Tehrani, Kayvan F; Zhang, Yiwen; Shen, Ping; Kner, Peter

    2017-11-01

    Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and the number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm.

  19. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms

    PubMed Central

    Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei

    2016-01-01

    Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently, most localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; however, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field. PMID:26861348
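
The range-based PSO step can be sketched as a global-best PSO minimizing the sum of squared range residuals to known anchors. The beacon layout, range noise level and PSO constants below are hypothetical, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scenario: locate one node from noisy ranges to 4 beacons
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
node = np.array([37.0, 64.0])
ranges = np.linalg.norm(beacons - node, axis=1) + rng.normal(0, 0.5, 4)

def cost(p):  # sum of squared range residuals
    return float(np.sum((np.linalg.norm(beacons - p, axis=1) - ranges) ** 2))

# Plain global-best PSO (inertia/acceleration constants are illustrative)
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 100, (n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pcost = np.array([cost(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    gbest = pbest[pcost.argmin()].copy()

err = float(np.linalg.norm(gbest - node))
```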

  20. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    PubMed Central

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-01-01

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated using real-world data collected by low-cost on-board vehicle sensors. A comparison study based on the real-world experiments and statistical analysis demonstrates that the proposed nGDPS achieves a significant improvement in vehicle state estimation accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405

  1. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms compared with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC's local search process and its bee movement (solution improvement) equation still have some weaknesses. The ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
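
For reference, the baseline ABC neighbour-search move that HPABC modifies looks roughly like this: a simplified sketch with only the employed-bee and scout phases on the sphere benchmark. The onlooker phase and the HPABC particle-movement variant are omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):  # standard benchmark objective
    return float(np.sum(x ** 2))

dim, n_food, limit, cycles = 5, 20, 30, 200
foods = rng.uniform(-5, 5, (n_food, dim))
fit = np.array([sphere(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

for _ in range(cycles):
    # employed-bee phase: perturb one coordinate relative to a neighbour
    for i in range(n_food):
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        f = sphere(cand)
        if f < fit[i]:  # greedy selection
            foods[i], fit[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    # scout phase: abandon sources that stagnated past the trial limit
    stale = trials > limit
    if stale.any():
        foods[stale] = rng.uniform(-5, 5, (int(stale.sum()), dim))
        fit[stale] = [sphere(f) for f in foods[stale]]
        trials[stale] = 0

best = float(fit.min())
```

The single-coordinate, neighbour-relative move is exactly the part the abstract calls wasteful around unpromising random solutions, and the part HPABC replaces with a PSO-style movement.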

  2. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    PubMed

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

    An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
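The intensity-based DOA estimate that the record describes can be sketched for a noiseless, anechoic plane wave, where particle velocity is in phase with pressure and aligned with the propagation direction (a simplified scalar-amplitude setup assumed purely for illustration):

```python
import math

def doa_from_intensity(p, vx, vy):
    # Time-averaged acoustic intensity components I = <p * v>;
    # the azimuth estimate is the direction of the intensity vector.
    n = len(p)
    ix = sum(pi * vi for pi, vi in zip(p, vx)) / n
    iy = sum(pi * vi for pi, vi in zip(p, vy)) / n
    return math.atan2(iy, ix)

# Synthetic 440 Hz plane wave arriving from 30 degrees azimuth.
theta = math.radians(30.0)
t = [i / 8000.0 for i in range(1000)]
p = [math.cos(2 * math.pi * 440.0 * ti) for ti in t]
vx = [math.cos(theta) * pi for pi in p]
vy = [math.sin(theta) * pi for pi in p]
est_deg = math.degrees(doa_from_intensity(p, vx, vy))
```

Under reverberation, the time-averaged intensity picks up contributions from reflected paths, which is exactly the bias the paper analyzes.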

  3. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    PubMed Central

    2017-01-01

    The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of “premature convergence,” that is, the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems that have a large number of local extreme values. To overcome this shortcoming of CSO, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, a Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO, which tends to fall into local optima in its later stages. CQCSO is then obtained by introducing a tent map into QCSO so that the swarm can jump out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and lower time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, whose effectiveness and efficiency have been verified by both simulation and experiment. PMID:29181020
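The tent map used to inject chaos can be sketched in a few lines; the initial value and sequence length below are arbitrary choices for illustration.

```python
def tent_map(x, mu=2.0):
    # One step of the tent map, a simple chaotic map on [0, 1].
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_sequence(x0, n):
    # Iterate the map to obtain a chaotic sequence that can perturb
    # particles stuck at a local optimum.
    seq, x = [], x0
    for _ in range(n):
        x = tent_map(x)
        seq.append(x)
    return seq

seq = chaotic_sequence(0.37, 40)
```

One practical caveat: with mu exactly 2, long runs in finite-precision arithmetic eventually degenerate (each step shifts the mantissa), so implementations often use mu slightly below 2 or reseed the sequence periodically.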

  4. APFiLoc: An Infrastructure-Free Indoor Localization Method Fusing Smartphone Inertial Sensors, Landmarks and Map Information

    PubMed Central

    Shang, Jianga; Gu, Fuqiang; Hu, Xuke; Kealy, Allison

    2015-01-01

    The utility and adoption of indoor localization applications have been limited due to the complex nature of the physical environment combined with an increasing requirement for more robust localization performance. Existing solutions to this problem are either too expensive or too dependent on infrastructure such as Wi-Fi access points. To address this problem, we propose APFiLoc—a low-cost, smartphone-based framework for indoor localization. The key idea behind this framework is to obtain landmarks within the environment and to use an augmented particle filter to fuse them with measurements from smartphone sensors and map information. A clustering method based on distance constraints is developed to detect organic landmarks in an unsupervised way, and the least-squares support vector machine is used to classify seed landmarks. A series of real-world experiments were conducted in complex environments including multiple floors, and the results show APFiLoc can achieve 80% (phone in the hand) and around 70% (phone in the pocket) of localization errors below 2 m without the assistance of infrastructure like Wi-Fi access points. PMID:26516858
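A core building block of any augmented particle filter is the resampling step; a minimal systematic resampler (one common variant, not necessarily the one APFiLoc uses) looks like this:

```python
import random

def systematic_resample(particles, weights):
    # Systematic resampling: one random offset, then n evenly spaced
    # pointers swept through the cumulative weights (O(n), low variance).
    n = len(particles)
    u0 = random.random() / n
    cumulative, c = [], 0.0
    for w in weights:
        c += w
        cumulative.append(c)
    out, j = [], 0
    for i in range(n):
        pos = u0 + i / n
        while j < n - 1 and cumulative[j] < pos:
            j += 1
        out.append(particles[j])
    return out

random.seed(2)
resampled = systematic_resample(["a", "b", "c", "d"], [0.7, 0.1, 0.1, 0.1])
```

Particles with large weight (here "a" at 0.7) are duplicated roughly in proportion to their weight, which is what keeps the filter from degenerating to a few dominant particles.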

  5. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.

    PubMed

    Nie, Xiaohua; Wang, Wei; Nie, Haoyao

    2017-01-01

    The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence," that is, the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems that have a large number of local extreme values. To overcome this shortcoming of CSO, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, a Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO, which tends to fall into local optima in its later stages. CQCSO is then obtained by introducing a tent map into QCSO so that the swarm can jump out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and lower time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, whose effectiveness and efficiency have been verified by both simulation and experiment.

  6. Lagrangian particle method for compressible fluid dynamics

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang

    2018-06-01

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
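The first contribution, approximating differential operators by a weighted least-squares polynomial fit over neighboring particles, can be sketched in 1D with a linear fit. The inverse-distance weighting and the polynomial order here are simplified assumptions, not the paper's actual kernel.

```python
def wls_derivative(x0, xs, fs):
    # Fit f(x) ~ a + b * (x - x0) by weighted least squares over the
    # neighbours; the slope b approximates f'(x0). The fit is exact
    # for linear data, giving first-order consistency.
    ws = [1.0 / (abs(x - x0) + 1e-12) for x in xs]  # inverse-distance weights
    s0 = sum(ws)
    s1 = sum(w * (x - x0) for w, x in zip(ws, xs))
    s2 = sum(w * (x - x0) ** 2 for w, x in zip(ws, xs))
    t0 = sum(w * f for w, f in zip(ws, fs))
    t1 = sum(w * (x - x0) * f for w, x, f in zip(ws, xs, fs))
    det = s0 * s2 - s1 * s1
    return (s0 * t1 - s1 * t0) / det  # slope b of the fit

xs = [0.7, 0.9, 1.1, 1.3]
fs = [3.0 * x + 2.0 for x in xs]  # f(x) = 3x + 2, so f'(x) = 3 everywhere
slope = wls_derivative(1.0, xs, fs)
```

Raising the polynomial order of the fit is what yields the higher-order convergence the abstract refers to; standard SPH kernel sums do not reproduce even linear fields exactly on irregular particle distributions.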

  7. Assessment of Different Discrete Particle Methods Ability To Predict Gas-Particle Flow in a Small-Scale Fluidized Bed

    DOE PAGES

    Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane

    2017-06-21

    Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These different approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of results accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a big loss in accuracy with a little increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed that rivals that of MP-PIC while maintaining a much better accuracy.
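The collision-force treatment that distinguishes DEM and CGPM from the other approaches can be sketched as a linear spring-dashpot normal contact model; the stiffness and damping values below are illustrative, not taken from the study.

```python
def dem_normal_force(overlap, approach_speed, k=1000.0, eta=5.0):
    # Soft-sphere DEM: the normal force is a spring term proportional
    # to the particle overlap plus a dashpot term that dissipates
    # energy while the particles approach each other.
    if overlap <= 0.0:
        return 0.0  # no contact, no force
    return k * overlap + eta * approach_speed

no_contact = dem_normal_force(-0.01, 0.5)   # separated particles
in_contact = dem_normal_force(0.01, 0.0)    # 0.01 overlap, no relative motion
```

Resolving this force for every contact is what makes DEM accurate but expensive; MP-PIC replaces it with a continuum solids-stress gradient, which is why it permits the unphysical overlaps noted above.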

  8. Assessment of Different Discrete Particle Methods Ability To Predict Gas-Particle Flow in a Small-Scale Fluidized Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane

    Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These different approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of results accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a big loss in accuracy with a little increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed that rivals that of MP-PIC while maintaining a much better accuracy.

  9. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  10. Particle Swarm Optimization With Interswarm Interactive Learning Strategy.

    PubMed

    Qin, Quande; Cheng, Shi; Zhang, Qingyu; Li, Li; Shi, Yuhui

    2016-10-01

    The learning strategy in the canonical particle swarm optimization (PSO) algorithm is often blamed as the primary reason for loss of diversity. Maintaining population diversity is crucial for preventing particles from getting stuck in local optima. In this paper, we present an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) that overcomes the drawbacks of the canonical PSO algorithm's learning strategy. IILPSO is inspired by the phenomenon in human society that interactive learning behavior takes place among different groups. Particles in IILPSO are divided into two swarms. The interswarm interactive learning (IIL) behavior is triggered when the best particle's fitness value of both swarms does not improve for a certain number of iterations. According to the best particle's fitness value of each swarm, the softmax method and the roulette method are used to determine the roles of the two swarms as the learning swarm and the learned swarm. In addition, a velocity mutation operator and a global best vibration strategy are used to improve the algorithm's global search capability. The IIL strategy is applied to PSO with global star and local ring structures, termed the IILPSO-G and IILPSO-L algorithms, respectively. Numerical experiments are conducted to compare the proposed algorithms with eight popular PSO variants. From the experimental results, IILPSO demonstrates good performance in terms of solution accuracy, convergence speed, and reliability. Finally, the variations of population diversity over the entire search process explain why IILPSO performs effectively.
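The softmax-based role assignment can be sketched as follows; this is a generic numerically stable softmax, and the mapping from swarm fitness to score (and any temperature scaling) is an assumption for illustration.

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before
    # exponentiating so large scores cannot overflow.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Two swarms; the one with the better score is more likely to be
# selected as the "learned" (teacher) swarm in an IIL-style scheme.
probs = softmax([1.0, 2.5])
```

Sampling the role from these probabilities (rather than always picking the best swarm) keeps the assignment stochastic, which helps preserve diversity between the two swarms.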

  11. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.

  12. Chaotic particle swarm optimization with mutation for classification.

    PubMed

    Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza

    2015-01-01

    In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. To demonstrate the effectiveness of the proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms.

  13. Free-energy-based lattice Boltzmann model for the simulation of multiphase flows with density contrast.

    PubMed

    Shao, J Y; Shu, C; Huang, H B; Chew, Y T

    2014-03-01

    A free-energy-based phase-field lattice Boltzmann method is proposed in this work to simulate multiphase flows with density contrast. The present method improves the Zheng-Shu-Chew (ZSC) model [Zheng, Shu, and Chew, J. Comput. Phys. 218, 353 (2006)] for correct consideration of density contrast in the momentum equation. The original ZSC model uses the particle distribution function in the lattice Boltzmann equation (LBE) for the mean density and momentum, which cannot properly consider the effect of local density variation in the momentum equation. To consider it correctly, the particle distribution function in the LBE must be for the local density and momentum. However, when the LBE of such a distribution function is solved, it encounters severe numerical instability. To overcome this difficulty, a transformation similar to the one used in the Lee-Lin (LL) model [Lee and Lin, J. Comput. Phys. 206, 16 (2005)] is introduced in this work to change the particle distribution function for the local density and momentum into one for the mean density and momentum. As a result, the present model still uses the particle distribution function for the mean density and momentum and, at the same time, considers the effect of local density variation in the LBE as a forcing term. Numerical examples demonstrate that both the present model and the LL model can correctly simulate multiphase flows with density contrast, and the present model has an obvious improvement over the ZSC model in terms of solution accuracy. In terms of computational time, the present model is less efficient than the ZSC model, but is much more efficient than the LL model.
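Independently of the density-contrast correction discussed above, the basic lattice Boltzmann ingredient is the discrete equilibrium distribution; a standard D2Q9 version (not the paper's modified model) is sketched below, and it reproduces the mean density and momentum exactly by construction.

```python
# D2Q9 lattice: weights and discrete velocities.
W = [4/9] + [1/9] * 4 + [1/36] * 4
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def equilibrium(rho, ux, uy):
    # Second-order equilibrium: f_i^eq = w_i * rho * (1 + 3 e.u
    # + 4.5 (e.u)^2 - 1.5 u^2). Its zeroth and first moments recover
    # the density and momentum exactly.
    u2 = ux * ux + uy * uy
    out = []
    for w, (ex, ey) in zip(W, E):
        eu = ex * ux + ey * uy
        out.append(w * rho * (1.0 + 3.0 * eu + 4.5 * eu * eu - 1.5 * u2))
    return out

feq = equilibrium(1.0, 0.1, 0.05)
```

The exact moment identities (mass and momentum) are what any forcing-term modification, such as the one introduced here for local density variation, must preserve.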

  14. Study of comparison between Ultra-high Frequency (UHF) method and ultrasonic method on PD detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia

    2017-11-01

    GIS (gas-insulated switchgear) is an important type of equipment in power systems. Partial discharge (PD) detection plays an important role in assessing the insulation performance of GIS, and the UHF method and the ultrasonic method are frequently used for PD detection in GIS. However, few studies have compared these two methods. From the viewpoint of safety, it is necessary to investigate both the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the effectiveness of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The obtained results show the anti-interference ability of signal detection and the accuracy of fault localization for the UHF method and the ultrasonic method. A new PD detection method for GIS, combining the UHF and ultrasonic methods, is proposed in order to greatly enhance the anti-interference ability of signal detection and the accuracy of fault localization.

  15. A deformable particle-in-cell method for advective transport in geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Samuel, Henri

    2018-06-01

    This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution and may require a prohibitive number of particles in order to maintain the solution accuracy at an acceptable level. The new proposed approach relies on deformable kernels that account for the strain history in the vicinity of particles. It results in a significant improvement of the spatial sampling by the particles, leading to much higher accuracy of the numerical solution for a reasonable extra computational cost. Various 2D tests were conducted to compare the performance of the deformable particle-in-cell method with the standard particle-in-cell approach. These consistently show that, at comparable accuracy, the deformable particle-in-cell method is four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.
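The particle-to-grid transfer that the deformable kernels generalize is, in its standard form, a fixed linear (cloud-in-cell) kernel; a 1D sketch of that baseline transfer:

```python
def deposit_1d(grid, xp, qp, dx=1.0):
    # Cloud-in-cell (linear kernel) deposit: the particle quantity qp
    # is split between the two nearest grid nodes in proportion to
    # proximity, so the total deposited amount is conserved.
    i = int(xp // dx)
    frac = xp / dx - i
    grid[i] += qp * (1.0 - frac)
    grid[i + 1] += qp * frac
    return grid

grid = deposit_1d([0.0] * 5, xp=1.25, qp=2.0)
```

The deformable-kernel idea replaces this fixed, isotropic footprint with one that stretches along the accumulated local strain, so strongly sheared regions stay well sampled.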

  16. Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods

    NASA Astrophysics Data System (ADS)

    Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David

    2015-11-01

    Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be easily accounted for. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles so as to reduce numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  17. Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine

    PubMed Central

    Yuan, Hua; Huang, Jianping; Cao, Chenzhong

    2009-01-01

    Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136

  18. Gyroaveraging operations using adaptive matrix operators

    NASA Astrophysics Data System (ADS)

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    2018-05-01

    A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices make it possible to improve the accuracy of gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
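The gyroaveraging operation itself can be sketched by sampling a field on a ring of Larmor radius around the guiding center. This uses simple uniform ring sampling; the adaptive velocity grid and the matrix formulation of the paper are not reproduced.

```python
import math

def gyroaverage(field, x0, y0, rho, n=16):
    # Average the field over n points on the Larmor ring of radius rho
    # centred on the guiding centre (x0, y0).
    total = 0.0
    for k in range(n):
        a = 2.0 * math.pi * k / n
        total += field(x0 + rho * math.cos(a), y0 + rho * math.sin(a))
    return total / n

# For a linear field the gyroaverage equals the value at the centre;
# for x^2 + y^2 it picks up exactly rho^2 on top of the centre value.
g_lin = gyroaverage(lambda x, y: 2.0 * x + y, 1.0, 2.0, rho=0.5)
g_quad = gyroaverage(lambda x, y: x * x + y * y, 1.0, 2.0, rho=0.5)
```

Adapting the sampling (or, in matrix form, the quadrature weights) to the local thermal Larmor radius is precisely what keeps this average accurate where temperature and magnetic field vary strongly.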

  19. Emission characteristics and chemical components of size-segregated particulate matter in iron and steel industry

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Cheng, Shuiyuan; Yao, Sen; Xu, Tiebing; Zhang, Tingting; Ma, Yuetao; Wang, Hongliang; Duan, Wenjiao

    2018-06-01

    As one of the industries with the highest energy consumption and pollution, the iron and steel industry is regarded as one of the most important sources of particulate matter emissions. In this study, the chemical components of size-segregated particulate matter (PM) emitted from different manufacturing units in the iron and steel industry were sampled by a comprehensive sampling system. Results showed that the average particle mass concentration was highest in the sintering process, followed by the puddling, steelmaking, and rolling processes. PM samples were divided into eight size fractions for testing the chemical components: SO42- and NH4+ were distributed more into fine particles, while most of the Ca2+ was concentrated in coarse particles; the size distribution of mineral elements depended on the raw materials applied. Moreover, a local database with PM chemical source profiles of the iron and steel industry was built and applied in CMAQ modeling for simulating SO42- and NO3- concentrations; results showed that the accuracy of model simulation improved with local chemical source profiles compared to the SPECIATE database. The results gained from this study are expected to be helpful in understanding the components of PM in the iron and steel industry and to contribute to source apportionment research.

  20. Experimental Evaluation of UWB Indoor Positioning for Sport Postures

    PubMed Central

    Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli

    2018-01-01

    Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of the above-mentioned forms of radio frequency (RF) technology, UWB is considered one of the most accurate approaches because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (particle filter and Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement patterns, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267
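As a rough sketch of the Kalman-filter variant mentioned above, here is a scalar constant-velocity filter for one coordinate of a tag position. The process and measurement noise values are illustrative assumptions, and the paper's optimizations (bias removal, NLoS detection, path determination) are omitted.

```python
def kf_track(zs, dt=0.1, q=1e-3, r=0.04):
    # Constant-velocity Kalman filter on one axis. State: position x
    # and velocity v; measurements zs are noisy positions.
    x, v = zs[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
    estimates = []
    for z in zs[1:]:
        # Predict with transition F = [[1, dt], [0, 1]] and simple
        # diagonal process noise q (a common simplification).
        x += v * dt
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update with a position measurement, H = [1, 0].
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innovation = z - x
        x += k0 * innovation
        v += k1 * innovation
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        estimates.append(x)
    return estimates

# A stationary tag measured without noise is tracked exactly.
est = kf_track([5.0] * 50)
```

For the fast direction changes described above, the constant-velocity model lags during turns; that is one reason the study also evaluates a particle filter and movement-specific optimizations.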

  1. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with an establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass the prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  2. The development of laser speckle or particle image displacement velocimetry. Part 1: The role of photographic parameters

    NASA Technical Reports Server (NTRS)

    Lourenco, L. M. M.; Krothapalli, A.

    1987-01-01

    One of the difficult problems in experimental fluid dynamics remains the determination of the vorticity field in fluid flows. Recently, a novel velocity measurement technique, commonly known as Laser Speckle or Particle Image Displacement Velocimetry, became available. This technique permits the simultaneous visualization of the two-dimensional streamline pattern in unsteady flows and the quantification of the velocity field. The main advantage of this new technique is that the whole two-dimensional velocity field can be recorded with great accuracy and spatial resolution, from which the instantaneous vorticity field can be easily obtained. An apparatus used for taking particle displacement images is described. Local coherent illumination by the probe laser beam yielded Young's fringes of good quality at almost every location of the flow field. These fringes were analyzed, and the velocity and vorticity fields were derived. Several conclusions drawn from this work are discussed.

  3. Lagrangian particle method for compressible fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin -Chiang

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.

  4. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a nonlinear 34-dimensional function optimization problem in which the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches, the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims.
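
    For reference, the classical global-best PSO that hierarchical multi-swarm variants build on can be sketched as follows (standard constriction-style coefficients assumed; this is not the H-MCPSO algorithm itself):

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()           # global best
    w, c1, c2 = 0.72, 1.49, 1.49              # inertia / cognitive / social
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Minimize a simple sphere function as a stand-in for the fitness function.
best, fbest = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
```

    In tracking, `f` would instead evaluate the silhouette/edge mismatch of a 34-dimensional pose hypothesis against the observed images.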

  5. Lagrangian particle method for compressible fluid dynamics

    DOE PAGES

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang

    2018-02-09

    A new Lagrangian particle method for solving the Euler equations for compressible inviscid fluid or gas flows is proposed. Like smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free-surface/multiphase flows. The main contributions of the method, which differs from SPH in all other respects, are (a) a significant improvement in the approximation of differential operators, based on a polynomial fit via weighted least squares, with convergence of prescribed order; (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long-term stability; and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented, as well as examples of complex multiphase flows.

  6. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

    The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas; explicit vs. implicit PIC; 1D electrostatic (ES) implicit PIC (charge and energy conservation, moment-based acceleration); and generalization to multi-D electromagnetic (EM) PIC with the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties of energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle-mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle-orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer degrees of freedom than explicit PIC for comparable accuracy in challenging problems, and significant CPU gains over explicit PIC have been demonstrated. The method has much potential for efficiency gains over explicit methods in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.

  7. Measurement of charged-particle stopping in warm-dense plasma

    DOE PAGES

    Zylstra, A. B.; Frenje, J. A.; Grabowski, P. E.; ...

    2015-05-27

    We measured the stopping of energetic protons in an isochorically heated solid-density Be plasma with an electron temperature of ~32 eV, corresponding to moderately coupled [(e²/a)/(k_B T_e + E_F) ~ 0.3] and moderately degenerate [k_B T_e/E_F ~ 2] 'warm dense matter' (WDM) conditions. We present the first high-accuracy measurements of charged-particle energy loss through dense plasma, which show an increased loss relative to cold matter, consistent with a reduced mean ionization potential. The data agree with stopping models based on an ad hoc treatment of free and bound electrons, as well as with the average-atom local-density approximation; this work is the first test of these theories in WDM plasma.

  8. Localization for robotic capsule looped by axially magnetized permanent-magnet ring based on hybrid strategy.

    PubMed

    Yang, Wanan; Li, Yan; Qin, Fengqing

    2015-01-01

    To actively maneuver a robotic capsule for interactive diagnosis in the gastrointestinal tract, visualizing the accurate position and orientation of the capsule as it moves through the tract is essential. A method is proposed in which the circuits, batteries, imaging device, etc., are enclosed in a capsule looped by an axially magnetized permanent-magnet ring. Based on the expression for the axially magnetized permanent-magnet ring's magnetic field, a localization and orientation model was established. An improved hybrid strategy that combines the advantages of particle swarm optimization, the clone algorithm, and the Levenberg-Marquardt algorithm was developed to solve the model. Experiments showed that the hybrid strategy has good accuracy, convergence, and real-time performance.
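
    The Levenberg-Marquardt refinement stage of such a hybrid strategy can be sketched as below, using a simple point-dipole field-magnitude model as a stand-in for the ring magnet's field expression. The model, sensor layout, and damping schedule are all illustrative assumptions:

```python
import numpy as np

def dipole_mag(p, sensors):
    """Field magnitude of a point dipole with axial (z) moment, evaluated
    at each sensor; a stand-in for the ring-magnet field model."""
    r = sensors - p
    d = np.linalg.norm(r, axis=1)
    return np.sqrt(1.0 + 3.0 * (r[:, 2] / d) ** 2) / d**3

def levenberg_marquardt(residual, p0, iters=50, lam=1e-3):
    """Bare-bones Levenberg-Marquardt with a finite-difference Jacobian,
    playing the refinement role of the hybrid strategy's final stage."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residual(p)
        J = np.array([(residual(p + h) - r) / 1e-6
                      for h in np.eye(len(p)) * 1e-6]).T
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        if np.linalg.norm(residual(p + step)) < np.linalg.norm(r):
            p, lam = p + step, lam * 0.5      # accept step, relax damping
        else:
            lam *= 10.0                       # reject step, raise damping
    return p

sensors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                    [1, 1, 0], [0.5, 0, 1], [0, 0.5, 1]], float)
true_p = np.array([0.3, 0.4, 0.6])
meas = dipole_mag(true_p, sensors)  # noiseless synthetic measurements

# A coarse initial guess (the role played by the PSO/clone stage) is refined.
est = levenberg_marquardt(lambda p: dipole_mag(p, sensors) - meas,
                          p0=[0.5, 0.5, 0.5])
```

    The damping parameter interpolates between gradient descent (large `lam`, robust far from the solution) and Gauss-Newton (small `lam`, fast near it), which is why LM is a natural finisher after a global stochastic search.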

  9. Real-time localization of mobile device by filtering method for sensor fusion

    NASA Astrophysics Data System (ADS)

    Fuse, Takashi; Nagara, Keita

    2017-06-01

    Most applications on mobile devices require self-localization of the device. Since GPS cannot be used in indoor environments, the positions of mobile devices are instead estimated autonomously using an IMU. Because IMUs have low accuracy, self-localization in indoor environments remains challenging. Image-based self-localization methods have also been developed, and their accuracy is increasing. This paper develops a self-localization method that works without GPS in indoor environments by simultaneously integrating sensors on the mobile device, such as the IMU and cameras. The proposed method consists of observation, forecasting, and filtering steps. The position and velocity of the mobile device are defined as a state vector. Observations correspond to the data from the IMU and camera (observation vector), forecasting to the device motion model (system model), and filtering to tracking via inertial surveying, the coplanarity condition, and an inverse depth model (observation model). Positions of the tracked device are first estimated by the system model (forecasting step), which assumes linear motion. The estimated positions are then optimized against new observation data based on likelihood (filtering step); this optimization corresponds to maximum a posteriori estimation. A particle filter is used for the calculations in the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment, and the experiments confirm its high performance.
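
    The forecasting and filtering steps described above amount to a bootstrap particle filter. A minimal one-dimensional sketch (noise levels, motion model, and observation model are illustrative assumptions, not the paper's IMU/camera fusion):

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 2000, 50, 0.1
sigma_q, sigma_r = 0.05, 0.2   # assumed process / observation noise

# State vector per particle: [position, velocity].
particles = rng.normal([0.0, 1.0], [0.5, 0.1], (n, 2))
truth = np.array([0.0, 1.0])
errors = []
for _ in range(steps):
    truth = truth + np.array([truth[1] * dt, 0.0])
    z = truth[0] + rng.normal(0.0, sigma_r)            # noisy observation
    # Forecasting: propagate particles through the linear motion model.
    particles[:, 0] += particles[:, 1] * dt
    particles += rng.normal(0.0, sigma_q, (n, 2))
    # Filtering: weight by the observation likelihood, then resample.
    w = np.exp(-0.5 * ((particles[:, 0] - z) / sigma_r) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n, n, p=w)]
    errors.append(abs(particles[:, 0].mean() - truth[0]))
```

    Resampling every step counters particle degeneracy; practical implementations often resample only when the effective sample size drops below a threshold.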

  10. Chaotic Particle Swarm Optimization with Mutation for Classification

    PubMed Central

    Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza

    2015-01-01

    In this paper, a chaotic particle swarm optimization with a mutation-based classifier (mutation-based classifier particle swarm optimization) is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into local minima associated with particle swarm optimization algorithms; that is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. To demonstrate the effectiveness of the proposed classifier, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with several classifier algorithms, including k-nearest neighbor as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937

  11. Microstructural evolution during sintering of copper particles studied by laboratory diffraction contrast tomography (LabDCT).

    PubMed

    McDonald, S A; Holzner, C; Lauridsen, E M; Reischig, P; Merkle, A P; Withers, P J

    2017-07-12

    Pressureless sintering of loose or compacted granular bodies at elevated temperature occurs by a combination of particle rearrangement, rotation, local deformation and diffusion, and grain growth. Understanding of how each of these processes contributes to the densification of a powder body is still immature. Here we report a fundamental study coupling the crystallographic imaging capability of laboratory diffraction contrast tomography (LabDCT) with conventional computed tomography (CT) in a time-lapse study. We are able to follow and differentiate these processes non-destructively and in three dimensions during the sintering of a simple copper powder sample at 1050 °C. LabDCT quantifies particle rotation (to <0.05° accuracy) and grain growth, while absorption CT simultaneously records the diffusion- and deformation-related morphological changes of the sintering particles. We find that the rate of particle rotation is lowest for the more highly coordinated particles and decreases during sintering. Consequently, rotations are greater for surface-breaking particles than for more highly coordinated interior ones. Both rolling (cooperative) and sliding particle rotations are observed. By tracking individual grains, the grain growth/shrinkage kinetics during sintering are quantified grain by grain for the first time. Rapid, abnormal grain growth is observed for one grain while others either grow or are consumed more gradually.

  12. MRI quantification of pancreas motion as a function of patient setup for particle therapy—a preliminary study

    PubMed Central

    Riboldi, Marco; Gianoli, Chiara; Chirvase, Cezarina I.; Villa, Gaetano; Paganelli, Chiara; Summers, Paul E.; Tagaste, Barbara; Pella, Andrea; Fossati, Piero; Ciocca, Mario; Baroni, Guido; Valvo, Francesca; Orecchia, Roberto

    2016-01-01

    Particle therapy (PT) has shown positive therapeutic results in local control of locally advanced pancreatic lesions. PT effectiveness is highly influenced by target localization accuracy both in space, since the pancreas is located in proximity to radiosensitive vital organs, and in time, as it is subject to substantial breathing-related motion. The purpose of this preliminary study was to quantify the pancreas range of motion under typical PT treatment conditions. Three common immobilization devices (vacuum cushion, thermoplastic mask, and compressor belt) were evaluated on five male patients in prone and supine positions. Retrospective four-dimensional magnetic resonance imaging data were reconstructed for each condition and the pancreas was manually segmented on each of six breathing phases. A k-means algorithm was then applied to the manually segmented map in order to obtain clusters representative of the three pancreas segments: head, body, and tail. Centers of mass (COM) for the pancreas and its segments were computed, as well as their displacements with respect to a reference breathing phase (beginning exhalation). The median three-dimensional COM displacements were in the range of 3 mm. The latero-lateral and superior-inferior directions had a higher range of motion than the anterior-posterior direction. Motion analysis of the pancreas segments showed slightly lower COM displacements for the head cluster compared to the tail cluster, especially in the prone position. Statistically significant differences were found within patients among the investigated setups. Hence a patient-specific approach, rather than a general strategy, is suggested to define the optimal treatment setup within millimeter-level positioning accuracy. PACS number(s): 87.55.-x, 87.57.nm, 87.61 PMID:27685119
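
    The k-means step, clustering the segmented voxels into head/body/tail and taking centroids as per-segment centers of mass, can be sketched as follows (synthetic well-separated point clouds and coordinates in mm are assumed data, not the study's):

```python
import numpy as np

def kmeans(points, centers, iters=50):
    """Plain Lloyd's algorithm; the converged centroids play the role of
    the per-segment centers of mass (head, body, tail)."""
    centers = np.array(centers, float)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        labels = np.argmin(np.linalg.norm(points[:, None] - centers, axis=2),
                           axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

# Three synthetic, well-separated clouds standing in for pancreas segments.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(c, 1.0, (100, 3))
                 for c in ([0, 0, 0], [30, 0, 0], [60, 0, 0])])
centers, labels = kmeans(pts, pts[[0, 100, 200]])  # one seed per cloud
```

    With one seed point per cloud the iteration is deterministic; production code would use k-means++ initialization and an empty-cluster guard.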

  13. Tracking Debris Shed by a Space-Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Stuart, Phillip C.; Rogers, Stuart E.

    2009-01-01

    The DEBRIS software predicts the trajectories of debris particles shed by a space-shuttle launch vehicle during ascent, to aid in assessing potential harm to the space-shuttle orbiter and crew. The user specifies the location of release and other initial conditions for a debris particle. DEBRIS tracks the particle within an overset grid system by means of a computational fluid dynamics (CFD) simulation of the local flow field and a ballistic simulation that takes account of the mass of the particle and its aerodynamic properties in the flow field. The computed particle trajectory is stored in a file to be post-processed by other software for viewing and analyzing the trajectory. DEBRIS supplants a prior debris-tracking code that took approximately 15 minutes to calculate a single particle trajectory; DEBRIS can calculate 1,000 trajectories in approximately 20 seconds on a desktop computer. Other improvements over the prior code include adaptive time-stepping to ensure accuracy, forcing at least one step per grid cell to ensure resolution of all CFD-resolved flow features, the ability to simulate rebound of debris from surfaces, extensive error checking, a built-in suite of test cases, and dynamic allocation of memory.
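
    A ballistic step of the kind DEBRIS performs, advancing a particle under aerodynamic drag computed from the locally interpolated flow velocity, can be sketched as below. This uses a fixed-step Euler integrator and an assumed quadratic drag law; the actual code uses adaptive time-stepping and CFD-interpolated flow:

```python
import numpy as np

def debris_trajectory(x0, v0, flow, beta, dt=1e-3, t_end=2.0):
    """Integrate dv/dt = beta*|u - v|*(u - v) + g, where u = flow(x) is
    the local flow velocity and beta lumps drag coefficient, area, and
    mass (illustrative 2-D model)."""
    g = np.array([0.0, -9.81])
    x, v, t = np.array(x0, float), np.array(v0, float), 0.0
    path = [x.copy()]
    while t < t_end:
        rel = flow(x) - v                       # velocity of flow past particle
        v = v + dt * (beta * np.linalg.norm(rel) * rel + g)
        x = x + dt * v
        t += dt
        path.append(x.copy())
    return np.array(path)

# Uniform 50 m/s crossflow; a heavy particle (small beta) lags the flow
# while gravity pulls it downward.
path = debris_trajectory([0, 0], [0, 0],
                         lambda x: np.array([50.0, 0.0]), beta=0.05)
```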

  14. Determining the refractive index of particles using glare-point imaging technique

    NASA Astrophysics Data System (ADS)

    Meng, Rui; Ge, Baozhen; Lu, Qieni; Yu, Xiaoxue

    2018-04-01

    A method of measuring the refractive index of a particle from a glare-point image is presented. The spacing of the doublet image of a particle can be determined with high accuracy using auto-correlation and Gaussian interpolation, and the refractive index is then obtained from the glare-point separation; a factor that may influence the accuracy of the glare-point separation is also explored. Experiments are carried out for three different kinds of particles, including polystyrene latex particles, glass beads, and water droplets, and the measurement accuracy is improved by a data-fitting method. The results show that the method presented in this paper is feasible and beneficial to applications such as spray and atmospheric-composition measurements.
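
    Sub-pixel peak location of a correlation curve, the ingredient behind the high-accuracy spacing measurement, is commonly done with three-point Gaussian interpolation (the paper's exact estimator may differ):

```python
import numpy as np

def gaussian_subpixel_peak(c):
    """Sub-pixel peak location of a 1-D correlation curve via three-point
    Gaussian interpolation around the integer maximum:
    delta = (ln c[i-1] - ln c[i+1]) / (2 ln c[i-1] - 4 ln c[i] + 2 ln c[i+1])."""
    i = int(np.argmax(c))
    lm, l0, lp = np.log(c[i - 1]), np.log(c[i]), np.log(c[i + 1])
    return i + (lm - lp) / (2 * lm - 4 * l0 + 2 * lp)

# A Gaussian sampled on an integer grid with its true peak at 10.3 pixels.
x = np.arange(21)
peak = gaussian_subpixel_peak(np.exp(-(x - 10.3) ** 2 / 4.0))
```

    For a correlation peak that is exactly Gaussian, this estimator is exact; in practice its bias is a small fraction of a pixel.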

  15. Single particle tracking through highly scattering media with multiplexed two-photon excitation

    NASA Astrophysics Data System (ADS)

    Perillo, Evan; Liu, Yen-Liang; Liu, Cong; Yeh, Hsin-Chih; Dunn, Andrew K.

    2015-03-01

    3D single-particle tracking (SPT) has been a pivotal tool in furthering our understanding of dynamic cellular processes in complex biological systems, with a molecular localization accuracy (10-100 nm) often better than the diffraction limit of light. However, current SPT techniques utilize either CCDs or a confocal detection scheme, which not only suffer from poor temporal resolution but also limit tracking to a depth of less than one scattering mean free path in the sample (typically <15 μm). In this report we highlight our novel design for a spatiotemporally multiplexed two-photon microscope which is able to reach sub-diffraction-limit tracking accuracy and sub-millisecond temporal resolution, with a dramatically extended SPT range of up to 200 μm through dense cell samples. We have validated our microscope by tracking (1) fluorescent nanoparticles undergoing a prescribed motion inside gelatin gel (with 1% intralipid) and (2) labeled single EGFR complexes inside skin cancer spheroids (at least 8 layers of cells thick) for ~10 minutes. Furthermore, we discuss future capabilities of our multiplexed two-photon microscope design, specifically the extension to (1) simultaneous multicolor tracking (i.e., spatiotemporal co-localization analysis) and (2) FRET studies (i.e., lifetime analysis). The high resolution, high depth penetration, and multicolor features of this microscope make it well poised to study a variety of molecular-scale dynamics in the cell, especially cellular trafficking in in vitro tumor models and in vivo.

  16. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing

    PubMed Central

    Wen, Tailai; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-01

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detection applications. During the E-nose's learning process for predicting the types of different odors, the prediction accuracy was not satisfying because the raw features extracted from the sensors' responses were fed to the classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose's classification accuracy, in this paper a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented to reprocess the original feature matrix. In addition, we compared the proposed method with several existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the comparison methods. PMID:29382146

  17. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing.

    PubMed

    Wen, Tailai; Yan, Jia; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-29

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detection applications. During the E-nose's learning process for predicting the types of different odors, the prediction accuracy was not satisfying because the raw features extracted from the sensors' responses were fed to the classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose's classification accuracy, in this paper a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented to reprocess the original feature matrix. In addition, we compared the proposed method with several existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results proved that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the comparison methods.

  18. Gyroaveraging operations using adaptive matrix operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    A new adaptive scheme for carrying out gyroaveraging operations with matrices in particle-in-cell codes is presented. The scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of matrix-based gyroaveraging operations even in the presence of strong spatial variation of temperature and magnetic field. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC, in the delta-f limit, is presented.
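
    A single gyroaveraging operation, averaging a field over points on the local Larmor ring, can be sketched as follows. The matrix formulation applies this stencil at every grid point, with the ring radius set by the local thermal Larmor radius; the ring discretization here is an illustrative choice:

```python
import numpy as np

def gyroaverage(f, x0, y0, rho, n=8):
    """Average a 2-D field f over n equally spaced points on the
    gyro-ring of radius rho centered at (x0, y0)."""
    th = 2.0 * np.pi * np.arange(n) / n
    return np.mean(f(x0 + rho * np.cos(th), y0 + rho * np.sin(th)))

# For f(x, y) = x^2 + y^2 the exact gyroaverage at the origin is rho^2.
avg = gyroaverage(lambda x, y: x**2 + y**2, 0.0, 0.0, rho=0.5)
```

    In a matrix implementation, each ring point is distributed onto the surrounding grid nodes by interpolation, so the whole operation becomes a sparse matrix acting on the gridded field.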

  19. Gyroaveraging operations using adaptive matrix operators

    DOE PAGES

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    2018-05-17

    A new adaptive scheme for carrying out gyroaveraging operations with matrices in particle-in-cell codes is presented. The scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of matrix-based gyroaveraging operations even in the presence of strong spatial variation of temperature and magnetic field. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC, in the delta-f limit, is presented.

  20. Extended Salecker-Wigner formula for optimal accuracy in reading a clock via a massive signal particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudaka, Shoju; Matsumoto, Shuichi

    2007-07-15

    In order to acquire an extended Salecker-Wigner formula from which to derive the optimal accuracy in reading a clock with a massive particle as the signal, von Neumann's classical measurement is employed, by which both the position and momentum of the signal particle can be measured approximately and simultaneously. By an appropriate selection of the wave function for the initial state of the composite system (a clock and a signal particle), the formula is derived accurately. Valid ranges of the running time of a clock with a given optimal accuracy are also given. The extended formula implies that, contrary to the original Salecker-Wigner formula, there exists the possibility of a higher accuracy of time measurement even if the mass of the clock is very small.

  1. Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique

    NASA Astrophysics Data System (ADS)

    Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen

    2015-01-01

    Over the past decade, location-based services (LBS) have found wide application in indoor environments such as large shopping malls, hospitals, warehouses, and airports. Current technologies provide a wide choice of available solutions, including radio-frequency identification (RFID), ultra-wideband (UWB), wireless local area networks (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning, realizing relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photodiodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from the photodiodes and on the trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to give the algorithm tracking capability, and higher accuracy is reached compared to the raw estimates. A Gaussian mixture sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian mixture model (GMM); the number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
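
    The trilateration step on RSS-derived ranges can be sketched with the standard linearization, where subtracting the first range equation from the others yields a linear system. The LED layout is an assumption; note one anchor is lowered, since coplanar anchors cannot resolve the vertical coordinate:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized trilateration: |p - a_i|^2 = d_i^2 minus the i = 0
    equation gives 2*(a_i - a0)·p = d0^2 - d_i^2 + |a_i|^2 - |a0|^2."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Assumed LED positions (m); the fourth is lowered to break coplanarity.
leds = np.array([[0, 0, 3], [4, 0, 3], [0, 4, 3], [2, 2, 2.5]], float)
p_true = np.array([1.0, 2.0, 1.0])
est = trilaterate(leds, np.linalg.norm(leds - p_true, axis=1))
```

    With noisy RSS-derived ranges, this least-squares solution is what the particle filter would then refine over time.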

  2. The magnetic particle in a box: Analytic and micromagnetic analysis of probe-localized spin wave modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adur, Rohan, E-mail: adur@physics.osu.edu; Du, Chunhui; Manuilov, Sergei A.

    2015-05-07

    The dipole field from a probe magnet can be used to localize a discrete spectrum of standing spin wave modes in a continuous ferromagnetic thin film without lithographic modification of the film. Obtaining the resonance field for a localized mode is not trivial due to the effect of the confined and inhomogeneous magnetization precession. We compare the results of micromagnetic and analytic methods for finding the resonance field of localized modes in a ferromagnetic thin film, and investigate the accuracy of these methods by comparing with a numerical minimization technique that assumes Bessel-function modes with pinned boundary conditions. We find that the micromagnetic technique, while computationally more intensive, reveals that the true magnetization profiles of localized modes are similar to Bessel functions with gradually decaying dynamic magnetization at the mode edges. We also find that an analytic solution, which is simple to implement and computationally much faster than the other methods, accurately describes the resonance field of localized modes when exchange fields are negligible, demonstrating the accessibility of localized-mode analysis.

  3. Spectral modeling of radiation in combustion systems

    NASA Astrophysics Data System (ADS)

    Pal, Gopalendu

    Radiation calculations are important in combustion due to the high temperatures encountered, but they have not been studied in sufficient detail in the case of turbulent flames. Radiation calculations for such problems require accurate, robust, and computationally efficient models for the solution of the radiative transfer equation (RTE) and for the spectral properties of radiation. One more layer of complexity is added in predicting the overall heat transfer in turbulent combustion systems by the nonlinear interactions between turbulent fluctuations and radiation. The present work is aimed at the development of finite-volume-based, high-accuracy thermal radiation modeling, including spectral radiation properties, in order to accurately capture turbulence-radiation interactions (TRI) and predict heat transfer in turbulent combustion systems correctly and efficiently. The turbulent fluctuations of temperature and chemical species concentrations have strong effects on spectral radiative intensities, and TRI create a closure problem when the governing partial differential equations are averaged. Recently, several approaches have been proposed to take TRI into account. Among these, the most promising are the probability density function (PDF) methods, which can treat the nonlinear coupling between turbulence and radiative emission exactly, i.e., "emission TRI". The basic idea of the PDF method is to treat physical variables as random variables and to solve the PDF transport equation stochastically. The actual reacting flow field is represented by a large number of discrete stochastic particles, each carrying its own random variable values and evolving with time. The mean value of any function of those random variables, such as the chemical source term, can be evaluated exactly by taking the ensemble average over the particles. The local emission term belongs to this class and thus can be evaluated directly and exactly from particle ensembles.
However, the local absorption term involves interactions between the local particle and the energy emitted by all other particles and, hence, cannot be obtained from particle ensembles directly. To close the nonlinear coupling between turbulence and absorption, i.e., "absorption TRI", an optically thin fluctuation approximation can be applied to virtually all combustion problems with acceptable accuracy. In the present study a composition-PDF method is applied, in which only the temperature and the species concentrations are treated as random variables. A closely coupled hybrid finite-volume/Monte Carlo scheme is adopted, in which the Monte Carlo method is used to solve the composition PDF for chemical reactions and the finite volume method is used to solve for the flow field and radiation. Spherical-harmonics-based finite volume solvers (P-1 and P-3) are developed using the data structures of the high-fidelity open-source flow software OpenFOAM. Spectral radiative properties of the participating medium are modeled using full-spectrum k-distribution methods. Extensions of the basic k-distribution methods are developed for nongray, nonhomogeneous gas- and particulate-phase (soot, fuel droplets, ash, etc.) participating media using multi-scale and multi-group approaches. These methods achieve close-to-benchmark line-by-line (LBL) accuracy in strongly inhomogeneous media at a tiny fraction of LBL's computational cost. A portable spectral module is developed, which includes all of the basic and advanced k-distribution methods along with precompiled, accurate, and compact k-distribution databases. The P-1/P-3 RTE solver coupled with the spectral module is used in conjunction with the combined Reynolds-averaged Navier-Stokes (RANS) and composition-PDF-based turbulence-chemistry solver to investigate TRI in multiphase turbulent combustion systems.
The combustion solvers developed in this study are employed to simulate several turbulent jet flames, such as Sandia Flame D and artificial nonsooting and sooting flames derived from Flame D. The effects of combustion chemistry, radiation, and TRI on total heat transfer and pollutant (such as NO_x) generation are studied for these flames. The accuracy of the overall combustion solver is assessed by comparison with the experimental data for Flame D. The accuracy and computational cost of the various spectral models and RTE solvers are compared extensively on the artificial flames derived from Flame D to demonstrate the necessity of accurate radiation modeling in combustion problems.
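
    The emission-TRI closure described above, evaluating the nonlinear emission term by ensemble averaging over stochastic particles rather than at the mean temperature, can be illustrated with a gray-gas toy. The temperature statistics and absorption coefficient are assumed values:

```python
import numpy as np

sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4
kappa = 0.1              # assumed gray absorption coefficient, 1/m
rng = np.random.default_rng(0)
T = rng.normal(1500.0, 300.0, 100_000)   # particle temperatures, K

# PDF-method closure: average the nonlinear emission term 4*kappa*sigma*T^4
# over the particle ensemble (exact "emission TRI").
mean_emission = np.mean(4.0 * kappa * sigma * T**4)

# Naive closure: evaluate the term at the mean temperature, ignoring TRI.
naive_emission = 4.0 * kappa * sigma * np.mean(T) ** 4
```

    By convexity of T^4 (Jensen's inequality), the ensemble-averaged emission exceeds the naive value, which is the basic reason turbulent fluctuations enhance mean radiative emission.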

  4. Effects of mesh style and grid convergence on particle deposition in bifurcating airway models with comparisons to experimental data.

    PubMed

    Longest, P Worth; Vinchurkar, Samir

    2007-04-01

    A number of research studies have employed a wide variety of mesh styles and levels of grid convergence to assess velocity fields and particle deposition patterns in models of branching biological systems. Generating structured meshes based on hexahedral elements requires significant time and effort; however, these meshes are often associated with high quality solutions. Unstructured meshes that employ tetrahedral elements can be constructed much faster but may increase levels of numerical diffusion, especially in tubular flow systems with a primary flow direction. The objective of this study is to better establish the effects of mesh generation techniques and grid convergence on velocity fields and particle deposition patterns in bifurcating respiratory models. In order to achieve this objective, four widely used mesh styles including structured hexahedral, unstructured tetrahedral, flow adaptive tetrahedral, and hybrid grids have been considered for two respiratory airway configurations. Initial particle conditions tested are based on the inlet velocity profile or the local inlet mass flow rate. Accuracy of the simulations has been assessed by comparisons to experimental in vitro data available in the literature for the steady-state velocity field in a single bifurcation model as well as the local particle deposition fraction in a double bifurcation model. Quantitative grid convergence was assessed based on a grid convergence index (GCI), which accounts for the degree of grid refinement. The hexahedral mesh was observed to have GCI values that were an order of magnitude below the unstructured tetrahedral mesh values for all resolutions considered. Moreover, the hexahedral mesh style provided GCI values of approximately 1% and reduced run times by a factor of 3. Based on comparisons to empirical data, it was shown that inlet particle seedings should be consistent with the local inlet mass flow rate. 
Furthermore, the mesh style was found to have an observable effect on cumulative particle depositions with the hexahedral solution most closely matching empirical results. Future studies are needed to assess other mesh generation options including various forms of the hybrid configuration and unstructured hexahedral meshes.
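The grid convergence index used above has a standard closed form (Roache's formulation). As a rough illustration, with refinement ratio r, observed order of accuracy p, and safety factor Fs, it can be computed as in this sketch; the deposition-fraction numbers below are invented for illustration and are not from the study:

```python
def gci(f_fine, f_coarse, r, p, fs=1.25):
    """Grid convergence index (Roache): the relative change between fine- and
    coarse-grid solutions, scaled by a safety factor fs and by the expected
    Richardson-extrapolation error reduction for refinement ratio r, order p."""
    return fs * abs((f_coarse - f_fine) / f_fine) / (r**p - 1)

# Hypothetical deposition fractions: 0.105 on the coarse grid, 0.100 on a
# grid refined by r = 2, with a nominally second-order (p = 2) scheme:
value = gci(0.100, 0.105, r=2, p=2)   # about 0.021, i.e. a GCI near 2%
```

A GCI of about 1%, as reported for the hexahedral meshes, indicates that further refinement would change the solution by roughly that relative amount.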

  5. Time-Domain Modeling of RF Antennas and Plasma-Surface Interactions

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, David N.

    2017-10-01

    Recent advances in finite-difference time-domain (FDTD) modeling techniques allow plasma-surface interactions such as sheath formation and sputtering to be modeled concurrently with the physics of antenna near- and far-field behavior and ICRF power flow. Although typical sheath length scales (micrometers) are much smaller than the wavelengths of fast (tens of cm) and slow (millimeter) waves excited by the antenna, sheath behavior near plasma-facing antenna components can be represented by a sub-grid kinetic sheath boundary condition, from which RF-rectified sheath potential variation over the surface is computed as a function of current flow and local plasma parameters near the wall. These local time-varying sheath potentials can then be used, in tandem with particle-in-cell (PIC) models of the edge plasma, to study sputtering effects. Particle strike energies at the wall can be computed more accurately, consistent with their passage through the known potential of the sheath, such that correspondingly increased accuracy of sputtering yields and heat/particle fluxes to antenna surfaces is obtained. The new simulation capabilities enable time-domain modeling of plasma-surface interactions and ICRF physics in realistic experimental configurations at unprecedented spatial resolution. We will present results/animations from high-performance (10k-100k core) FDTD/PIC simulations of Alcator C-Mod antenna operation.

  6. Fast-match on particle swarm optimization with variant system mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuang; Fang, Xin; Chen, Jie

    2018-03-01

Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match a target with maximum similarity without knowing the target's pose. It relies on the minimum Sum-of-Absolute-Differences (SAD) error to obtain the best affine transformation, and it is widely used in image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation using the Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random velocity and moves through the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant-system mechanism on this basis. The benefit is that we avoid evaluating a huge number of potential transformations and falling into local optima, so that a few transformations suffice to approximate the optimal solution. The experimental results show that our method achieves faster speed and higher accuracy with a smaller affine transformation space.
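As a rough sketch of the optimizer underlying this approach (not the authors' implementation), a minimal global-best PSO can be written as follows; the objective here is a toy stand-in for the SAD matching error over two translation parameters, and all constants are illustrative:

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iters=100, seed=0):
    """Minimal particle swarm optimizer: each particle remembers its personal
    best, and the swarm shares a single global best (gbest topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy stand-in for the SAD matching error: minimum at translation (3, -2).
sad = lambda p: abs(p[0] - 3.0) + abs(p[1] + 2.0)
best, err = pso_minimize(sad, np.array([[-10.0, 10.0], [-10.0, 10.0]]))
```

In the paper's setting the particle would encode the full affine parameter vector and the objective would be the SAD error of the transformed template against the image.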

  7. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has recently been proposed to simulate reactive transport in porous media. KDE provides an optimal estimate of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history, and maintains accuracy over time. The method allows particles to efficiently split and merge when necessary, and to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
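A minimal illustration of the two ingredients named above, a random-walk particle-tracking step and a Gaussian-kernel density estimate of concentration, might look like this 1-D sketch (illustrative parameters; this is not the authors' adaptive branching method):

```python
import numpy as np

def rwpt_step(x, v, D, dt, rng):
    """One random-walk particle-tracking step: deterministic advection plus a
    Brownian increment whose variance 2*D*dt reproduces dispersion."""
    return x + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)

def kde_concentration(xp, xgrid, h):
    """Gaussian-kernel density estimate of concentration; the bandwidth h
    plays the role of the particles' area of influence."""
    u = (xgrid[:, None] - xp[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(xp) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
x = np.zeros(5000)                 # unit mass released at the origin
v, D, dt = 1.0, 0.1, 0.01
for _ in range(100):               # advance the plume to t = 1
    x = rwpt_step(x, v, D, dt, rng)

grid = np.linspace(-2.0, 4.0, 601)
c = kde_concentration(x, grid, h=0.1)   # estimated concentration profile
```

At t = 1 the plume mean should sit near v*t = 1 with variance near 2*D*t = 0.2, and the KDE profile integrates to the released unit mass. The drawback the paper targets is visible here: a fixed bandwidth h degrades as dilution thins the particle cloud.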

  8. Development of a custom-designed echo particle image velocimetry system for multi-component hemodynamic measurements: system characterization and initial experimental results

    NASA Astrophysics Data System (ADS)

    Liu, Lingli; Zheng, Hairong; Williams, Logan; Zhang, Fuxing; Wang, Rui; Hertzberg, Jean; Shandas, Robin

    2008-03-01

    We have recently developed an ultrasound-based velocimetry technique, termed echo particle image velocimetry (Echo PIV), to measure multi-component velocity vectors and local shear rates in arteries and opaque fluid flows by identifying and tracking flow tracers (ultrasound contrast microbubbles) within these flow fields. The original system was implemented on images obtained from a commercial echocardiography scanner. Although promising, this system was limited in spatial resolution and measurable velocity range. In this work, we propose standard rules for characterizing Echo PIV performance and report on a custom-designed Echo PIV system with increased spatial resolution and measurable velocity range. Then we employed this system for initial measurements on tube flows, rotating flows and in vitro carotid artery and abdominal aortic aneurysm (AAA) models to acquire the local velocity and shear rate distributions in these flow fields. The experimental results verified the accuracy of this technique and indicated the promise of the custom Echo PIV system in capturing complex flow fields non-invasively.
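The core operation of any PIV system, ultrasound-based or optical, is estimating the displacement of the tracer pattern between two interrogation windows from the peak of their cross correlation. A minimal FFT-based sketch (integer-pixel only; real PIV systems add sub-pixel peak fitting, and this is not the authors' Echo PIV code):

```python
import numpy as np

def piv_shift(a, b):
    """Integer-pixel displacement from window a to window b via the peak of
    the FFT-based circular cross correlation."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    idx = np.array(np.unravel_index(np.argmax(c), c.shape))
    half = np.array(c.shape) // 2
    idx[idx > half] -= np.array(c.shape)[idx > half]   # unwrap circular peak
    return -idx                                        # peak sits at minus the shift

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                   # synthetic tracer pattern
frame2 = np.roll(frame1, (3, -5), axis=(0, 1))  # tracers moved by (3, -5) pixels
shift = piv_shift(frame1, frame2)
```

Dividing the recovered shift by the inter-frame time gives the local velocity vector; shear rates then follow from spatial derivatives of the velocity field.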

  9. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer-type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good method for selecting genes relevant for sample classification is needed to improve predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose combining tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and a support vector machine with one-versus-rest serve as evaluators for the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature and compared with other feature selection methods. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features than those methods.
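A minimal sketch of the BPSO component (not the paper's TS+BPSO pipeline): velocities are real-valued, and a sigmoid maps each velocity component to the probability that the corresponding feature bit is set. The fitness below is a toy stand-in for classifier accuracy, with a small penalty per selected feature; all constants are illustrative:

```python
import numpy as np

def bpso(fitness, n_bits, n_particles=20, n_iters=60, seed=0):
    """Minimal binary PSO: continuous velocity update, sigmoid transfer,
    stochastic bit sampling, personal/global best bookkeeping."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()
    w, c1, c2, vmax = 0.7, 1.5, 1.5, 4.0
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, n_bits))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x), -vmax, vmax)
        x = (rng.random((n_particles, n_bits)) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, pbest_f.max()

# Toy stand-in for classifier accuracy: only features 0, 3, 7 are informative,
# and each selected feature costs 0.1 (so fewer features are preferred).
informative = np.zeros(10); informative[[0, 3, 7]] = 1.0
score = lambda mask: float((mask * informative).sum() - 0.1 * mask.sum())
best_mask, best_score = bpso(score, n_bits=10)
```

In the paper's setting, `score` would be the leave-one-out K-NN or SVM accuracy on the gene subset encoded by the bit mask.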

  10. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    PubMed Central

    Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija

    2018-01-01

The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and the implementation of integration algorithms are key to providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of these assumptions is correct for tactical applications, especially for dismounted soldiers or rescue personnel, so advanced fusion algorithms are essential. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements to include the correct error probability density functions (pdfs) in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all of this information, where the weight of each particle is computed based on the specific models derived.
The performance of the developed method is tested via two experiments, one on a university's premises and another under realistic tactical conditions. The results show a significant improvement in horizontal localization when the measurement errors are carefully modelled and their inclusion in the particle filtering implementation is correctly realized. PMID:29443918
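The weighting step described above can be sketched with a bootstrap particle filter for a 1-D random-walk state, where a Gaussian-plus-outlier mixture stands in for the paper's fitted non-Gaussian error pdfs (all parameters here are illustrative, not the deduced models):

```python
import numpy as np

def heavy_tail_pdf(e, sigma=0.5, outlier_sigma=5.0, eps=0.1):
    """Non-Gaussian measurement-error pdf: a Gaussian core plus a small
    fraction of broad outliers (a common heavy-tailed error model)."""
    gauss = lambda x, s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return (1.0 - eps) * gauss(e, sigma) + eps * gauss(e, outlier_sigma)

def particle_filter(zs, n=2000, q=0.2, seed=0):
    """Bootstrap particle filter: propagate the particles through the state
    model, weight them by the measurement-error pdf, then resample."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)            # initial particle cloud
    estimates = []
    for z in zs:
        x = x + rng.normal(0.0, q, n)      # state propagation (random walk)
        w = heavy_tail_pdf(z - x)          # likelihood weighting
        w /= w.sum()
        estimates.append(np.dot(w, x))     # posterior-mean estimate
        x = rng.choice(x, size=n, p=w)     # multinomial resampling
    return np.array(estimates)

rng = np.random.default_rng(42)
truth = np.cumsum(rng.normal(0.0, 0.2, 50))   # hidden trajectory
zs = truth + rng.normal(0.0, 0.5, 50)         # noisy measurements
est = particle_filter(zs)
```

The filtered estimate should track the hidden trajectory more closely than the raw measurements do; in the paper, the pdf used in the weighting step is the one verified by model fitting.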

  11. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. 
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
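The flavor of an implicit, energy-conserving particle push can be seen in a one-particle cartoon: a Crank-Nicolson (implicit-midpoint) update solved by fixed-point (Picard) iteration conserves the quadratic energy of a harmonic restoring field to iteration tolerance, even at time steps well beyond the explicit stability limit. This sketch omits the paper's key feature, the nonlinear coupling of particles to the self-consistent field:

```python
def implicit_midpoint_push(x, v, dt, accel, tol=1e-12, max_iter=200):
    """Crank-Nicolson / implicit-midpoint particle push, solved by fixed-point
    iteration: the force is evaluated at the time-centered position."""
    xn, vn = x, v                          # initial guess: old state
    for _ in range(max_iter):
        xm, vm = 0.5 * (x + xn), 0.5 * (v + vn)
        xn_new = x + dt * vm
        vn_new = v + dt * accel(xm)
        if abs(xn_new - xn) + abs(vn_new - vn) < tol:
            return xn_new, vn_new
        xn, vn = xn_new, vn_new
    return xn, vn

# Harmonic restoring field, a stand-in for a self-consistent E-field:
accel = lambda x: -x
x, v = 1.0, 0.0
energy0 = 0.5 * v**2 + 0.5 * x**2
for _ in range(1000):
    x, v = implicit_midpoint_push(x, v, dt=0.5, accel=accel)
energy = 0.5 * v**2 + 0.5 * x**2       # conserved to iteration tolerance
```

For this linear oscillator the implicit-midpoint rule conserves the quadratic energy exactly (up to the Picard tolerance), which is the one-particle analogue of the round-off-level conservation claimed for the full scheme.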

  12. rpSPH: a novel smoothed particle hydrodynamics algorithm

    NASA Astrophysics Data System (ADS)

    Abel, Tom

    2011-05-01

We suggest a novel discretization of the momentum equation for smoothed particle hydrodynamics (SPH) and show that it significantly improves the accuracy of the obtained solutions. Our new formulation, which we refer to as relative pressure SPH (rpSPH), evaluates the pressure force with respect to the local pressure. It respects Newton's first law of motion and applies forces to particles only when there is a net force acting upon them. This is in contrast to standard SPH, which explicitly uses Newton's third law of motion, continuously applying equal but opposite forces between particles. rpSPH does not show the unphysical particle noise, the clumping or banding instability, the unphysical surface tension and the unphysical scattering of different-mass particles found for standard SPH. At the same time, it uses fewer computational operations and changes only a single line in existing SPH codes. We demonstrate its performance on isobaric uniform density distributions, uniform density shearing flows, the Kelvin-Helmholtz and Rayleigh-Taylor instabilities, the Sod shock tube, the Sedov-Taylor blast wave and a cosmological integration of the Santa Barbara galaxy cluster formation test. rpSPH is an improvement in these cases. The improvements come at the cost of giving up exact momentum conservation of the scheme. Consequently, one can also obtain unphysical solutions, particularly at low resolutions.
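The single-line change can be illustrated in one dimension: on a disordered but isobaric particle distribution, the standard symmetric pairwise term (P_i + P_j) leaves a spurious residual acceleration, while the relative form (P_j - P_i) vanishes identically. A sketch with a Gaussian kernel and illustrative parameters (not the author's code):

```python
import numpy as np

def grad_w(dx, h):
    """Gradient of a 1-D Gaussian smoothing kernel."""
    return -2.0 * dx / h**2 * np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))

def pressure_accel(i, x, P, rho, m, h, relative=False):
    """Pressure acceleration on particle i. Standard SPH uses (P_i + P_j);
    rpSPH replaces it with (P_j - P_i), which is zero for uniform pressure."""
    a = 0.0
    for j in range(len(x)):
        if j == i:
            continue
        pair = (P[j] - P[i]) if relative else (P[i] + P[j])
        a -= m * pair / rho**2 * grad_w(x[i] - x[j], h)
    return a

rng = np.random.default_rng(0)
x = np.arange(11.0) + 0.1 * rng.standard_normal(11)   # jittered particle lattice
P = np.full(11, 2.5)                                  # isobaric state
a_std = pressure_accel(5, x, P, rho=1.0, m=1.0, h=1.2)
a_rp = pressure_accel(5, x, P, rho=1.0, m=1.0, h=1.2, relative=True)
```

The standard form drives the particle noise and clumping described above; the relative form is inert on the isobaric state, at the price of breaking the pairwise antisymmetry that guarantees exact momentum conservation.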

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Hong; Liu, Jian; Xiao, Jianyuan

Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g. 10^9, degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.

  14. Improving z-tracking accuracy in the two-photon single-particle tracking microscope.

    PubMed

    Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C

    2015-10-12

Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
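The estimator described can be sketched with a toy multiplexed-detection model: several detection channels with Gaussian axial sensitivity profiles, Poisson photon counts, and a grid-search MLE over z. The foci positions, brightness, and widths below are invented for illustration and do not describe the TSUNAMI optics:

```python
import numpy as np

centers = np.array([-0.5, 0.0, 0.5])   # hypothetical beam foci along z (um)

def rates(z, brightness=200.0, width=0.6):
    """Expected photon count per channel for a particle at axial position z."""
    return brightness * np.exp(-((z - centers) ** 2) / (2.0 * width**2))

def mle_z(counts, z_grid=np.linspace(-1.0, 1.0, 2001)):
    """Grid-search MLE: pick the z whose expected per-channel counts best
    explain the observed counts under a Poisson log-likelihood
    (count-factorial terms dropped, as they do not depend on z)."""
    lam = rates(z_grid[:, None])                    # (grid, channel) rates
    loglik = (counts * np.log(lam) - lam).sum(axis=1)
    return z_grid[np.argmax(loglik)]

rng = np.random.default_rng(7)
z_true = 0.3
z_hats = [mle_z(rng.poisson(rates(z_true))) for _ in range(200)]
```

Repeating the estimate over many photon-count draws gives a nearly unbiased z with a spread set by the photon budget, which is the sense in which the MLE tightens z-tracking relative to simpler estimators.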

  15. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.

Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  16. Kalman filter with a linear state model for PDR+WLAN positioning and its application to assisting a particle filter

    NASA Astrophysics Data System (ADS)

    Raitoharju, Matti; Nurminen, Henri; Piché, Robert

    2015-12-01

    Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
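For reference, a generic linear Kalman filter cycle of the kind discussed: a constant-velocity state observed through noisy position fixes (a WLAN-like measurement). The noise levels are illustrative and this is not the paper's specific PDR state model:

```python
import numpy as np

def kf_step(m, Pcov, z, F, Q, H, R):
    """One Kalman predict/update cycle for a linear-Gaussian state model."""
    m = F @ m                            # predict mean
    Pcov = F @ Pcov @ F.T + Q            # predict covariance
    S = H @ Pcov @ H.T + R               # innovation covariance
    K = Pcov @ H.T @ np.linalg.inv(S)    # Kalman gain
    m = m + K @ (z - H @ m)              # measurement update
    Pcov = (np.eye(len(m)) - K @ H) @ Pcov
    return m, Pcov

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state model
Q = 0.01 * np.eye(2)                     # process noise
H = np.array([[1.0, 0.0]])               # position-only (WLAN-like) measurement
R = np.array([[1.0]])                    # measurement noise

rng = np.random.default_rng(3)
truth = np.array([0.0, 0.5])             # true position and velocity
m, Pcov = np.zeros(2), 10.0 * np.eye(2)  # diffuse initial state
errs = []
for _ in range(60):
    truth = F @ truth
    z = H @ truth + rng.normal(0.0, 1.0, 1)
    m, Pcov = kf_step(m, Pcov, z, F, Q, H, R)
    errs.append(float(m[0] - truth[0]))
```

After convergence the filtered position error falls well below the raw measurement noise, and the unobserved velocity is recovered from the position fixes alone; the paper's contribution is a linear state model that keeps this machinery applicable when the initial heading is unknown.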

  17. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. 
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  18. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

Gel dosimeters are a three-dimensional imaging tool for the dose distributions induced by radiation, and they can be used to check the accuracy of Monte Carlo simulations in particle therapy. One such application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated with a carbon beam; the dose distribution recorded in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was then conducted by reconstructing the biological sample from its CT image, and the accuracy of the particle transport in the simulation was checked by comparing the simulated and measured dose distributions in the gel dosimeter.

  19. Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri

    2018-07-01

This paper deals with the implementation of high-order semi-Lagrangian particle methods to handle high-dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features for handling large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high-dimensional kinetic equations by particle methods.

  20. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple-object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.

  1. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  2. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve problems involving solids, fluids, electromagnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, and this becomes a major part of the cost of a simulation. It is natural, then, that the concept of the meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variations are reviewed, focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  3. Tracking Image Correlation: Combining Single-Particle Tracking and Image Correlation

    PubMed Central

    Dupont, A.; Stirnnagel, K.; Lindemann, D.; Lamb, D.C.

    2013-01-01

    The interactions and coordination of biomolecules are crucial for most cellular functions. The observation of protein interactions in live cells may provide a better understanding of the underlying mechanisms. After fluorescent labeling of the interacting partners and live-cell microscopy, the colocalization is generally analyzed by quantitative global methods. Recent studies have addressed questions regarding the individual colocalization of moving biomolecules, usually by using single-particle tracking (SPT) and comparing the fluorescent intensities in both color channels. Here, we introduce a new method that combines SPT and correlation methods to obtain a dynamical 3D colocalization analysis along single trajectories of dual-colored particles. After 3D tracking, the colocalization is computed at each particle’s position via the local 3D image cross correlation of the two detection channels. For every particle analyzed, the output consists of the 3D trajectory, the time-resolved 3D colocalization information, and the fluorescence intensity in both channels. In addition, the cross-correlation analysis shows the 3D relative movement of the two fluorescent labels with an accuracy of 30 nm. We apply this method to the tracking of viral fusion events in live cells and demonstrate its capacity to obtain the time-resolved colocalization status of single particles in dense and noisy environments. PMID:23746509

  4. An updated Lagrangian particle hydrodynamics (ULPH) for Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Tu, Qingsong; Li, Shaofan

    2017-11-01

In this work, we have developed an updated Lagrangian particle hydrodynamics (ULPH) for Newtonian fluids. Unlike smoothed particle hydrodynamics, the non-local particle hydrodynamics formulation proposed here is consistent and convergent. Unlike state-based peridynamics, the discrete particle dynamics proposed here has no internal material bonds between particles, and it is not formulated with respect to an initial or fixed referential configuration. Specifically, we show that (1) the non-local updated Lagrangian particle hydrodynamics formulation converges to the conventional local fluid mechanics formulation; (2) it can capture arbitrary flow discontinuities without any changes in the formulation; and (3) the proposed non-local particle hydrodynamics is computationally efficient and robust.

  5. A Generalized Eulerian-Lagrangian Analysis, with Application to Liquid Flows with Vapor Bubbles

    NASA Technical Reports Server (NTRS)

    Dejong, Frederik J.; Meyyappan, Meyya

    1993-01-01

Under a NASA MSFC SBIR Phase 2 effort, an analysis has been developed for liquid flows with vapor bubbles such as those in liquid rocket engine components. The analysis is based on a combined Eulerian-Lagrangian technique, in which Eulerian conservation equations are solved for the liquid phase, while Lagrangian equations of motion are integrated in computational coordinates for the vapor phase. The novel aspect of the Lagrangian analysis developed under this effort is that it combines features of the so-called particle distribution approach with those of the so-called particle trajectory approach and can, in fact, be considered a generalization of both of those traditional methods. The result of this generalization is a reduction in CPU time and memory requirements. Particle time step (stability) limitations have been eliminated by semi-implicit integration of the particle equations of motion (and, for certain applications, the particle temperature equation), although practical limitations remain in effect for reasons of accuracy. The analysis has been applied to the simulation of cavitating flow through a single-bladed section of a labyrinth seal. Models for the simulation of bubble formation and growth have been included, as well as models for bubble drag and heat transfer. The results indicate that bubble formation is more or less 'explosive': for a given flow field, the number density of bubble nucleation sites is very sensitive to the vapor properties and the surface tension. The bubble motion, on the other hand, is much less sensitive to the properties, but is affected strongly by the local pressure gradients in the flow field. In situations where either the material properties or the flow field are not known with sufficient accuracy, parametric studies can be carried out rapidly to assess the effect of the important variables. Future work will include application of the analysis to cavitation in inducer flow fields.

  6. Molecular Excitation Energies from Time-Dependent Density Functional Theory Employing Random-Phase Approximation Hessians with Exact Exchange.

    PubMed

    Heßelmann, Andreas

    2015-04-14

    Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.

  7. A Modified Mean Gray Wolf Optimization Approach for Benchmark and Biomedical Problems.

    PubMed

    Singh, Narinder; Singh, S B

    2017-01-01

    A modified variant of the gray wolf optimization algorithm, namely the mean gray wolf optimization algorithm, has been developed by modifying the position-update (encircling behavior) equations of the gray wolf optimization algorithm. The proposed variant has been tested on 23 well-known standard benchmark test functions (unimodal, multimodal, and fixed-dimension multimodal), and its performance has been compared with particle swarm optimization and gray wolf optimization. The proposed algorithm has also been applied to the classification of 5 data sets to check the feasibility of the modified variant. The results obtained are compared with those of many other meta-heuristic approaches, i.e., gray wolf optimization, particle swarm optimization, population-based incremental learning, ant colony optimization, etc. The results show that the modified variant is able to find the best solutions in terms of high classification accuracy and improved local optima avoidance.
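    The paper's "mean" modification alters the position-update (encircling) equations; the baseline GWO update it modifies can be sketched as follows (standard GWO, not the paper's variant):

```python
import random

def gwo_step(wolves, alpha, beta, delta, a):
    """One standard GWO position update.

    wolves: list of position vectors; alpha/beta/delta: the three
    best positions found so far; a: control parameter decreasing
    linearly from 2 to 0 over the run.
    """
    new_wolves = []
    for x in wolves:
        new_x = []
        for d in range(len(x)):
            guided = []
            for lead in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()
                A = 2 * a * r1 - a            # exploration/exploitation coefficient
                C = 2 * r2                    # randomized emphasis on the leader
                D = abs(C * lead[d] - x[d])   # distance to the leader
                guided.append(lead[d] - A * D)
            new_x.append(sum(guided) / 3.0)   # average of the three guided moves
        new_wolves.append(new_x)
    return new_wolves

# With a = 0 (end of run) wolves collapse onto the leaders' average.
wolves = gwo_step([[5.0, 5.0]], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0], a=0.0)
```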

  8. A New Stochastic Technique for Painlevé Equation-I Using Neural Network Optimized with Swarm Intelligence

    PubMed Central

    Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor

    2012-01-01

    A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
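    The global-search stage described above can be sketched with a generic global-best PSO. The network's unsupervised residual is replaced here by a toy quadratic (an assumption, not the paper's model), and the active-set local refinement is omitted:

```python
import random

random.seed(0)  # reproducible sketch

def pso_minimize(fitness, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal global-best PSO of the kind used to learn network weights."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in for the unsupervised network residual: a shifted sphere
# function with minimum at (1, 2, 3).
residual = lambda wts: sum((wts[d] - (d + 1)) ** 2 for d in range(3))
best, best_f = pso_minimize(residual, dim=3)
```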

  9. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P.; Russell, MacKenzie R.; Jones, Robert M.; King, Matt; Betterton, Eric A.; Sáez, A. Eduardo

    2014-09-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition.
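    The individual-particle transport component described above can be caricatured in a few lines. This sketch is an assumption, not the authors' CFD model: Stokes settling plus horizontal advection by a prescribed wind field, valid only for the small particle Reynolds numbers typical of wind-blown dust:

```python
def particle_trajectory(x0, z0, u_wind, d_p, rho_p, dt=0.01,
                        rho_a=1.2, mu=1.8e-5, g=9.81):
    """Track a single dust particle until it deposits (z <= 0).

    x0, z0: release point (m); u_wind(x, z): horizontal wind (m/s);
    d_p: particle diameter (m); rho_p: particle density (kg/m^3).
    Returns the horizontal deposition location (m).
    """
    # Stokes terminal settling velocity for a small sphere.
    v_settle = (rho_p - rho_a) * g * d_p ** 2 / (18.0 * mu)
    x, z = x0, z0
    while z > 0.0:
        x += u_wind(x, z) * dt  # horizontal advection by the wind
        z -= v_settle * dt      # steady gravitational settling
    return x

# A 50 um quartz-like grain released at 10 m in a uniform 5 m/s wind.
x_dep = particle_trajectory(x0=0.0, z0=10.0,
                            u_wind=lambda x, z: 5.0,
                            d_p=50e-6, rho_p=2650.0)
```

    Replacing the uniform wind with measured velocity profiles is what lets the topography and local weather dominate the deposition pattern, as the abstract reports.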

  10. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model.

    PubMed

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P; Russell, MacKenzie R; Jones, Robert M; King, Matt; Betterton, Eric A; Sáez, A Eduardo

    2014-09-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition.

  11. Simulation of windblown dust transport from a mine tailings impoundment using a computational fluid dynamics model

    PubMed Central

    Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P.; Russell, MacKenzie R.; Jones, Robert M.; King, Matt; Betterton, Eric A.; Sáez, A. Eduardo

    2014-01-01

    Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition. PMID:25621085

  12. Revision of FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Huang, Jingfang; Pitsianis, Nikos P.; Sun, Xiaobai

    2010-12-01

    FMM-YUKAWA is a mathematical software package primarily for rapid evaluation of the screened Coulomb interactions of N particles in three dimensional space. Since its release, we have revised and re-organized the data structure, software architecture, and user interface, for the purpose of enabling more flexible, broader and easier use of the package. The package and its documentation are available at http://www.fastmultipole.org/, along with a few other closely related mathematical software packages.
    New version program summary:
    Program title: FMM-Yukawa
    Catalogue identifier: AEEQ_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEQ_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPL 2.0
    No. of lines in distributed program, including test data, etc.: 78 704
    No. of bytes in distributed program, including test data, etc.: 854 265
    Distribution format: tar.gz
    Programming language: FORTRAN 77, FORTRAN 90, and C. Requires gcc and gfortran version 4.4.3 or later
    Computer: All
    Operating system: Any
    Classification: 4.8, 4.12
    Catalogue identifier of previous version: AEEQ_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2331
    Does the new version supersede the previous version?: Yes
    Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution type integral where the Green's function is the fundamental solution of the modified Helmholtz equation.
    Solution method: The new version of the fast multipole method (FMM) that diagonalizes the multipole-to-local translation operator is applied with the tree structure adaptive to sample particle locations.
    Reasons for new version: To handle much larger particle ensembles, to enable the iterative use of the subroutines in a solver, and to remove potential contention in assignments for parallelization.
Summary of revisions: The software package FMM-Yukawa has been revised and re-organized in data structure, software architecture, programming methods, and user interface. The revision enables more flexible use of the package and economical use of memory resources. It consists of five stages. The initial stage (stage 1) determines, based on the accuracy requirement and FMM theory, the length of multipole expansions and the number of quadrature points for diagonalization, and loads the quadrature nodes and weights that are computed off-line. Stage 2 constructs the oct-tree and interaction lists, with adaptation to the sparsity or density of particles and employing a dynamic memory allocation scheme at every tree level. Stage 3 executes the core FMM subroutine for numerical calculation of the particle interactions. The subroutine can now be used iteratively as in a solver, while the particle locations remain the same. Stage 4 releases the memory allocated in Stage 2 for the adaptive tree and interaction lists. The user can modify the iterative routine easily. When the particle locations are changed, such as in a molecular dynamics simulation, stages 2 to 4 can also be used together repeatedly. The final stage releases the memory space used for the quadrature and other remaining FMM parameters. Programs at the stage level and at the user interface are re-written in the C programming language, while most of the translation and interaction operations remain in FORTRAN. As a result of the change in data structures and memory allocation, the revised package can accommodate much larger particle ensembles while maintaining the same accuracy-efficiency performance. The new version is also developed as an important precursor to its parallel counterpart on multi-core or many-core processors in a shared memory programming environment. 
In particular, in order to ensure mutual exclusion in concurrent updates without incurring extra latency, we have replaced all the assignment statements at a source box that put its data to multiple target boxes with assignments at every target box that gather data from source boxes. This amounts to replacing the column version of matrix-vector multiplication with the row version. The matrix here, however, is in compressed representation. Sufficient care is taken in the revision not to alter the algorithmic complexity or numerical behavior, as concurrent writing potentially takes place in the upward calculation of the multipole expansion coefficients, interactions at every level of the FMM tree, and downward calculation of the local expansion coefficients. The software modules and their compositions are also organized according to the stages in which they are used. Demonstration files and makefiles for merging the user routines and the library routines are provided. Restrictions: Accuracy requirement is described in terms of three or six digits. Higher multiples of three digits will be allowed in a later version. Finer decimation in digits for accuracy specification may or may not be necessary. Unusual features: Ready and friendly for customized use and instrumental in expression of concurrency and dependency for efficient parallelization. Running time: The running time depends linearly on the number N of particles, and varies with the characteristics of the particle distribution. It also depends on the accuracy requirement: a higher accuracy requirement takes relatively longer. The code outperforms the direct summation method when N⩾750.
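    As a correctness baseline for the fast method described above, the screened Coulomb (Yukawa) sum can be evaluated directly in O(N^2). This is a plain-Python sketch of the quantity the package computes in O(N), not its FORTRAN/C interface:

```python
import math

def yukawa_direct(positions, charges, kappa):
    """Direct O(N^2) evaluation of the screened Coulomb potential
    phi_i = sum_{j != i} q_j * exp(-kappa * r_ij) / r_ij,
    useful for validating a fast summation code at small N."""
    n = len(positions)
    phi = [0.0] * n
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue  # skip self-interaction
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            dz = positions[j][2] - zi
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            phi[i] += charges[j] * math.exp(-kappa * r) / r
    return phi

# Two unit charges separated by 1: each sees exp(-kappa)/1.
phi = yukawa_direct([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [1.0, 1.0], kappa=1.0)
```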

  13. Comparison of methods for individualized astronaut organ dosimetry: Morphometry-based phantom library versus body contour autoscaling of a reference phantom

    NASA Astrophysics Data System (ADS)

    Sands, Michelle M.; Borrego, David; Maynard, Matthew R.; Bahadori, Amir A.; Bolch, Wesley E.

    2017-11-01

    One of the hazards faced by space crew members in low-Earth orbit or in deep space is exposure to ionizing radiation. It has been shown previously that while differences in organ-specific and whole-body risk estimates due to body size variations are small for highly-penetrating galactic cosmic rays, large differences in these quantities can result from exposure to shorter-range trapped proton or solar particle event radiations. For this reason, it is desirable to use morphometrically accurate computational phantoms representing each astronaut for a risk analysis, especially in the case of a solar particle event. An algorithm was developed to automatically sculpt and scale the UF adult male and adult female hybrid reference phantom to the individual outer body contour of a given astronaut. This process begins with the creation of a laser-measured polygon mesh model of the astronaut's body contour. Using the auto-scaling program and selecting several anatomical landmarks, the UF adult male or female phantom is adjusted to match the laser-measured outer body contour of the astronaut. A dosimetry comparison study was conducted to compare the organ dose accuracy of both the autoscaled phantom and that based upon a height-weight matched phantom from the UF/NCI Computational Phantom Library. Monte Carlo methods were used to simulate the environment of the August 1972 and February 1956 solar particle events. Using a series of individual-specific voxel phantoms as a local benchmark standard, autoscaled phantom organ dose estimates were shown to provide a 1% and 10% improvement in organ dose accuracy for a population of females and males, respectively, as compared to organ doses derived from height-weight matched phantoms from the UF/NCI Computational Phantom Library. 
In addition, this slight improvement in organ dose accuracy from the autoscaled phantoms is accompanied by reduced computer storage requirements and a more rapid method for individualized phantom generation when compared to the UF/NCI Computational Phantom Library.

  14. Improved accuracy in Wigner-Ville distribution-based sizing of rod-shaped particle using flip and replication technique

    NASA Astrophysics Data System (ADS)

    Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki

    2018-01-01

    A method for improving accuracy in Wigner-Ville distribution (WVD)-based particle size measurements from inline holograms using flip and replication technique (FRT) is proposed. The FRT extends the length of hologram signals being analyzed, yielding better spatial-frequency resolution of the WVD output. Experimental results verify reduction in measurement error as the length of the hologram signals increases. The proposed method is suitable for particle sizing from holograms recorded using small-sized image sensors.
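    The flip-and-replication idea can be illustrated on a 1-D signal. The exact construction used in the paper may differ; the sketch below shows one plausible even-mirror extension that lengthens the analyzed signal, and hence refines the spatial-frequency bins of the WVD, without introducing an edge discontinuity:

```python
import numpy as np

def flip_and_replicate(signal, n_rep=2):
    """Extend a 1-D hologram line by appending its mirror image (flip)
    and then tiling the result (replication)."""
    mirrored = np.concatenate([signal, signal[::-1]])  # even extension: no jump at the seam
    return np.tile(mirrored, n_rep)                    # periodic continuation

x = np.linspace(0.0, 1.0, 128)
line = np.cos(2 * np.pi * 10 * x)          # stand-in for a hologram fringe profile
extended = flip_and_replicate(line, n_rep=2)  # 4x the original length
```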

  15. Direct Numerical Simulation of dense particle-laden turbulent flows using immersed boundaries

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Desjardins, Olivier

    2009-11-01

    Dense particle-laden turbulent flows play an important role in many engineering applications, ranging from pharmaceutical coating and chemical synthesis to fluidized bed reactors. Because of the complexity of the physics involved in these flows, current computational models for gas-particle processes, such as drag and heat transfer, rely on empirical correlations and have been shown to lack accuracy. In this work, direct numerical simulations (DNS) of dense particle-laden flows are conducted, using immersed boundaries (IB) to resolve the flow around each particle. First, the accuracy of the proposed approach is tested on a range of 2D and 3D flows at various Reynolds numbers, and resolution requirements are discussed. Then, various particle arrangements and number densities are simulated, the impact on particle wake interaction is assessed, and existing drag models are evaluated in the case of fixed particles. In addition, the impact of the particles on turbulence dissipation is investigated. Finally, a strategy for handling moving and colliding particles is discussed.

  16. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata

    PubMed Central

    Liu, Aiming; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-01-01

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems. PMID:29117100
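    The basic firefly move whose local-optimum trapping the LA coupling is meant to fix can be sketched as follows (standard FA step; the learning-automata parameter adaptation itself is not shown):

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward a brighter firefly j.

    beta0: attractiveness at zero distance; gamma: light absorption
    coefficient; alpha: random-walk scale. Attractiveness decays with
    the squared distance, so distant optima exert little pull, which
    is one source of the premature convergence the paper addresses.
    """
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))  # squared distance
    beta = beta0 * math.exp(-gamma * r2)            # distance-damped attraction
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

# With the random term switched off, the move is a pure pull toward j.
step = firefly_move([0.0, 0.0], [1.0, 0.0], alpha=0.0)
```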

  17. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.

    PubMed

    Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-11-08

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.

  18. Thickness of the particle swarm in cosmic ray air showers

    NASA Technical Reports Server (NTRS)

    Linsley, J.

    1985-01-01

    The average dispersion in arrival time of air shower particles detected with a scintillator at an impact parameter r is described with an accuracy of 5-10% by the empirical formula σ_t = σ_t0 (1 + r/r_t)^b, where σ_t0 = 2.6 ns, r_t = 30 m, and b = (1.94 ± 0.08) − (0.39 ± 0.06) sec θ, for r ≤ 2 km, 10^8 ≤ E ≤ 10^11 GeV, and θ ≤ 60° (E is the primary energy and θ is the zenith angle). The amount of fluctuation in σ_t due to fluctuations in the level of origin and shower development is less than 20%. These results provide a basis for estimating the impact parameters of very large showers with data from very small detector arrays (mini-arrays). The energy of such showers can then be estimated from the local particle density. The formula also provides a basis for estimating the angular resolution of air shower array-telescopes.
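    With the central parameter values quoted above (σ_t0 = 2.6 ns, r_t = 30 m; uncertainties dropped, and the b(θ) dependence as reconstructed here), the empirical dispersion formula is straightforward to evaluate:

```python
import math

def sigma_t_ns(r_m, theta_deg, sigma_t0=2.6, r_t=30.0):
    """Arrival-time dispersion (ns) of air-shower particles at impact
    parameter r (m), per Linsley's empirical fit. Central parameter
    values only; b = 1.94 - 0.39 * sec(theta) is assumed."""
    b = 1.94 - 0.39 / math.cos(math.radians(theta_deg))
    return sigma_t0 * (1.0 + r_m / r_t) ** b

# Dispersion grows steeply with impact parameter for a vertical shower,
# which is what makes r recoverable from timing alone in a mini-array.
s_near = sigma_t_ns(30.0, 0.0)    # a few ns near the core
s_far = sigma_t_ns(1000.0, 0.0)   # hundreds of ns at 1 km
```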

  19. Charged Particle Flux Sensor

    NASA Technical Reports Server (NTRS)

    Gregory, D. A.; Stocks, C. D.

    1983-01-01

    Improved version of Faraday cup increases accuracy of measurements of flux density of charged particles incident along axis through collection aperture. Geometry of cone-and-sensing cup combination assures most particles are trapped.

  20. Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.

    PubMed

    Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A

    2018-05-14

    Micrometer sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations in holographic microscopy and introduce methods for increasing the accuracy of this technique with the use of multiple wavelengths of laser illumination. Large high index particle holograms have near degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.

  1. Load forecast method of electric vehicle charging station using SVR based on GA-PSO

    NASA Astrophysics Data System (ADS)

    Lu, Kuan; Sun, Wenxue; Ma, Changhui; Yang, Shenquan; Zhu, Zijian; Zhao, Pengfei; Zhao, Xin; Xu, Nan

    2017-06-01

    This paper presents a Support Vector Regression (SVR) method for electric vehicle (EV) charging station load forecasting based on genetic algorithm (GA) and particle swarm optimization (PSO). Fuzzy C-Means (FCM) clustering is used to establish similar-day samples. GA is used for global parameter searching and PSO for more accurate local searching. The load forecast is then regressed using SVR. Practical load data from an EV charging station were used to illustrate the proposed method. The results indicate a clear improvement in forecasting accuracy compared with SVR based on PSO or GA alone.
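    The global-then-local hybrid can be sketched as a coarse GA pass whose final population seeds a PSO refinement. The coupling used in the paper may differ in detail, and a toy quadratic stands in for the SVR cross-validation error over (C, gamma):

```python
import random

random.seed(1)  # reproducible sketch

def ga_seed_then_pso(error, bounds, pop=20, gens=30, iters=30):
    """GA for global search, then PSO started from the GA population."""
    lo, hi = bounds
    dim = 2
    # --- GA stage: elitism + blend crossover + uniform mutation ---
    popn = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=error)
        elite = popn[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = random.sample(elite, 2)
            w = random.random()
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            if random.random() < 0.2:            # mutation
                child[random.randrange(dim)] = random.uniform(lo, hi)
            children.append(child)
        popn = elite + children
    # --- PSO stage: refine locally around the GA population ---
    vel = [[0.0] * dim for _ in popn]
    pbest = [p[:] for p in popn]
    pf = [error(p) for p in popn]
    g = min(range(len(popn)), key=lambda i: pf[i])
    gbest, gf = pbest[g][:], pf[g]
    for _ in range(iters):
        for i, p in enumerate(popn):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            f = error(p)
            if f < pf[i]:
                pbest[i], pf[i] = p[:], f
                if f < gf:
                    gbest, gf = p[:], f
    return gbest, gf

# Toy error surface with optimum at C = 2, gamma = 0.5 (assumption).
err = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best, best_err = ga_seed_then_pso(err, bounds=(0.0, 10.0))
```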

  2. Benchmark Results of Active Tracer Particles in the Open Source Code ASPECT for Modelling Convection in the Earth's Mantle

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.

    2016-12-01

    We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark') first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing the true values of the density and viscosity at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) to a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method one uses to advance the particle positions in time. 
Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT and interpolation algorithms designed to conserve properties, such as mass density, that are being carried by the particles.
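    A minimal version of the particle-to-grid interpolation step described above is plain arithmetic cell averaging, shown here in 1-D (ASPECT's actual schemes are higher order and operate on finite-element cells; this sketch is only the simplest member of the family such benchmarks compare):

```python
def cell_average_interpolation(particles, values, x_min, x_max, n_cells):
    """Interpolate a particle-carried property onto a 1-D grid by
    arithmetic averaging over the particles in each cell.

    particles: particle x-coordinates; values: the property (e.g.
    density or viscosity) carried by each particle.
    """
    dx = (x_max - x_min) / n_cells
    sums = [0.0] * n_cells
    counts = [0] * n_cells
    for x, v in zip(particles, values):
        c = min(int((x - x_min) / dx), n_cells - 1)  # owning cell index
        sums[c] += v
        counts[c] += 1
    # Cells with no particles are left as None; a real code would fall
    # back to neighboring cells or refuse to proceed.
    return [s / n if n else None for s, n in zip(sums, counts)]

field = cell_average_interpolation(
    particles=[0.1, 0.2, 0.6, 0.9], values=[1.0, 3.0, 5.0, 7.0],
    x_min=0.0, x_max=1.0, n_cells=2)
```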

  3. Laser heating tunability by off-resonant irradiation of gold nanoparticles.

    PubMed

    Hormeño, Silvia; Gregorio-Godoy, Paula; Pérez-Juste, Jorge; Liz-Marzán, Luis M; Juárez, Beatriz H; Arias-Gonzalez, J Ricardo

    2014-01-29

    Temperature changes in the vicinity of a single absorptive nanostructure caused by local heating have strong implications in technologies such as integrated electronics or biomedicine. Herein, the temperature changes in the vicinity of a single optically trapped spherical Au nanoparticle encapsulated in a thermo-responsive poly(N-isopropylacrylamide) shell (Au@pNIPAM) are studied in detail. Individual beads are trapped in a counter-propagating optical tweezers setup at various laser powers, which allows the overall particle size to be tuned through the phase transition of the thermo-responsive shell. The experimentally obtained sizes measured at different irradiation powers are compared with average size values obtained by dynamic light scattering (DLS) from an ensemble of beads at different temperatures. The size range and the tendency to shrink upon increasing the laser power in the optical trap or by increasing the temperature for DLS agree with reasonable accuracy for both approaches. Discrepancies are evaluated by means of simple models accounting for variations in the thermal conductivity of the polymer, the viscosity of the aqueous solution and the absorption cross section of the coated Au nanoparticle. These results show that these parameters must be taken into account when considering local laser heating experiments in aqueous solution at the nanoscale. Analysis of the stability of the Au@pNIPAM particles in the trap is also theoretically carried out for different particle sizes. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Development of an unresolved CFD-DEM model for the flow of viscous suspensions and its application to solid-liquid mixing

    NASA Astrophysics Data System (ADS)

    Blais, Bruno; Lassaigne, Manon; Goniva, Christoph; Fradette, Louis; Bertrand, François

    2016-08-01

    Although viscous solid-liquid mixing plays a key role in the industry, the vast majority of the literature on the mixing of suspensions is centered around the turbulent regime of operation. However, the laminar and transitional regimes face considerable challenges. In particular, it is important to know the minimum impeller speed (Njs) that guarantees the suspension of all particles. In addition, local information on the flow patterns is necessary to evaluate the quality of mixing and identify the presence of dead zones. Multiphase computational fluid dynamics (CFD) is a powerful tool that can be used to gain insight into local and macroscopic properties of mixing processes. Among the variety of numerical models available in the literature, which are reviewed in this work, unresolved CFD-DEM, which combines CFD for the fluid phase with the discrete element method (DEM) for the solid particles, is an interesting approach due to its accurate prediction of the granular dynamics and its capability to simulate large amounts of particles. In this work, the unresolved CFD-DEM method is extended to viscous solid-liquid flows. Different solid-liquid momentum coupling strategies, along with their stability criteria, are investigated and their accuracies are compared. Furthermore, it is shown that an additional sub-grid viscosity model is necessary to ensure the correct rheology of the suspensions. The proposed model is used to study solid-liquid mixing in a stirred tank equipped with a pitched blade turbine. It is validated qualitatively by comparing the particle distribution against experimental observations, and quantitatively by comparing the fraction of suspended solids with results obtained via the pressure gauge technique.

  5. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates in humans are associated with the poor knowledge of the biological effects of low and high LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk for radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g., the ability to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.

  6. Biomass burning influences on atmospheric composition: A case study to assess the impact of aerosol data assimilation

    NASA Astrophysics Data System (ADS)

    Keslake, Tim; Chipperfield, Martyn; Mann, Graham; Flemming, Johannes; Remy, Sam; Dhomse, Sandip; Morgan, Will

    2016-04-01

The C-IFS (Composition Integrated Forecast System), developed under the MACC series of projects and continued under the Copernicus Atmosphere Monitoring Service, provides global operational forecasts and re-analyses of atmospheric composition at high spatial resolution (T255, ~80 km). Currently there are two aerosol schemes implemented within C-IFS: a mass-based scheme with externally mixed particle types and an aerosol microphysics scheme (GLOMAP-mode). The simpler mass-based scheme is the current operational system, and is also used in the existing system to assimilate satellite measurements of aerosol optical depth (AOD) for improved forecast capability. The microphysical GLOMAP scheme has now been implemented and evaluated in the latest C-IFS cycle alongside the mass-based scheme. The upgrade to the microphysical scheme provides higher-fidelity aerosol-radiation and aerosol-cloud interactions, accounting for global variations in size distribution and mixing state, and additional aerosol properties such as cloud condensation nuclei concentrations. The new scheme will also provide increased aerosol information when used as lateral boundary conditions for regional air quality models. Here we present a series of experiments highlighting the influence and accuracy of the two different aerosol schemes and the impact of MODIS AOD assimilation. In particular, we focus on the influence of biomass burning emissions on aerosol properties in the Amazon, comparing to ground-based and aircraft observations from the 2012 SAMBBA campaign. Biomass burning can affect regional air quality, human health, regional weather and the local energy budget. Tropical biomass burning generates particles primarily composed of particulate organic matter (POM) and black carbon (BC), with the local ratio of these two constituents often determining the properties and subsequent impacts of the aerosol particles. 
Therefore, the model's ability to capture the concentrations of these two carbonaceous aerosol types, during the tropical dry season, is essential for quantifying these wide ranging impacts. Comparisons to SAMBBA aircraft observations show that while both schemes underestimate POM and BC mass concentrations, the GLOMAP scheme provides a more accurate simulation. When satellite AOD is assimilated into the GEMS-AER scheme, the model is successfully adjusted, capturing observed mass concentrations to a good degree of accuracy.

  7. On improving the algorithm efficiency in the particle-particle force calculations

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2016-09-01

The problem of calculating inter-particle forces in particle-particle (PP) simulation models takes an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. Inverse-square laws such as Coulomb's law of electrostatic force and Newton's law of universal gravitation are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np²), where Np is the total number of particles involved. The low efficiency of the PP algorithm becomes a challenging issue in cases where high accuracy is required. An example can be taken from charged-particle beam dynamics, where so-called macro-particles are used to compute the beam's own space charge (see e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
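The naïve double-loop PP scheme described above is easy to sketch. The following is an illustrative Python version (our own naming and Coulomb-constant default, not the authors' code), which exploits Newton's third law to visit each pair only once:

```python
import numpy as np

def pairwise_coulomb_forces(pos, q, k=8.9875517923e9):
    """Naive O(Np^2) particle-particle force sum using Coulomb's law.

    pos : (Np, 3) array of positions [m]
    q   : (Np,)  array of charges   [C]
    Returns an (Np, 3) array of forces [N].
    """
    n = len(q)
    forces = np.zeros((n, 3))
    for i in range(n):                       # double nested loop over all pairs
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            f = k * q[i] * q[j] * r / d**3   # inverse-square law
            forces[i] += f                   # Newton's third law lets us
            forces[j] -= f                   # reuse each pair once
    return forces
```

Even with the pair-symmetry optimization the cost still scales as O(Np²): doubling the particle count quadruples the work, which is exactly the inefficiency the authors target.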

  8. Development of a real-time internal and external marker tracking system for particle therapy: a phantom study using patient tumor trajectory data

    PubMed Central

    Cho, Junsang; Cheon, Wonjoong; Ahn, Sanghee; Jung, Hyunuk; Sheen, Heesoon; Park, Hee Chul

    2017-01-01

Abstract Target motion–induced uncertainty in particle therapy is more complicated than that in X-ray therapy, requiring more accurate motion management. Therefore, a hybrid motion-tracking system that can track internal tumor motion as well as an external surrogate of tumor motion was developed. Recently, many correlation tests between internal and external markers in X-ray therapy have been developed; however, the accuracy of such internal/external marker tracking systems, especially in particle therapy, has not yet been sufficiently tested. In this article, the process of installing an in-house hybrid internal/external motion-tracking system is described and the accuracy of the tracking system was quantified. Our results demonstrated that the developed in-house external/internal combined tracking system has submillimeter accuracy, and can be used clinically in particle therapy as well as in simulations for moving-tumor treatment. PMID:28201522

  9. Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.

    PubMed

    Parr, Andreas; Miesen, Robert; Vossiek, Martin

    2016-06-25

In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different send and receive antenna path combinations for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new Singledimensional-MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte-Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals, and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radiofrequency Identification (RFID) setup to compare measurements with theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy, and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy down to the centimeter range and to maximize array efficiency in terms of localization accuracy per number of elements.
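A minimal version of the phase-based grid search underlying such localization can be sketched as follows (a hedged illustration: one-way phases and a Bartlett-style matched filter; real UHF backscatter accrues round-trip phase, and the function and constant names are our own):

```python
import numpy as np

WAVELENGTH = 0.33  # roughly a 915 MHz UHF RFID carrier [m] (illustrative)

def near_field_beamform(antenna_pos, phases, grid):
    """Matched-filter (Bartlett) grid search for a near-field source in 3D.

    antenna_pos : (M, 3) element positions of the planar array
    phases      : (M,)  measured one-way carrier phases [rad]
    grid        : (G, 3) candidate tag positions
    Returns the grid point with the highest steered power.
    """
    meas = np.exp(1j * phases)                        # measured steering vector
    best, best_pow = None, -np.inf
    for r in grid:
        d = np.linalg.norm(antenna_pos - r, axis=1)   # element-to-point ranges
        steer = np.exp(-2j * np.pi * d / WAVELENGTH)  # near-field model phases
        power = np.abs(np.vdot(steer, meas)) ** 2     # Bartlett spectrum value
        if power > best_pow:
            best, best_pow = r, power
    return best
```

Because the steering vector uses exact element-to-point ranges rather than a plane-wave approximation, the same search localizes in range as well as direction, which is what distinguishes near-field localization from plain direction-of-arrival estimation.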

  10. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
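The trilateration step mentioned above can be sketched by linearizing the range equations (an illustrative least-squares version; the function name and 2D setting are our own assumptions, not the authors' implementation):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares trilateration from range estimates.

    anchors : (N, 2) known transmitter positions (N >= 3, not collinear)
    ranges  : (N,)  estimated distances to the receiver
    Returns the 2D position estimate.
    """
    a0, r0 = anchors[0], ranges[0]
    # Subtracting the first sphere equation |x - a_0|^2 = r_0^2 from the
    # others removes the quadratic term |x|^2 and leaves a linear system:
    # 2 (a_i - a_0)^T x = |a_i|^2 - |a_0|^2 - r_i^2 + r_0^2
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - ranges[1:] ** 2 + r0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

In the paper's setting the ranges would come from the identified carrier-to-noise versus distance model, and the resulting fix would seed or correct the extended Kalman filter.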

  11. Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2015-09-01

We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical-momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(mi/me)·c/vTe) larger than the explicit CFL limit. In this presentation, we introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. This work was supported by the LANL LDRD program and the DOE-SC ASCR office.

  12. Local shear stress and its correlation with local volume fraction in concentrated non-Brownian suspensions: Lattice Boltzmann simulation

    NASA Astrophysics Data System (ADS)

    Lee, Young Ki; Ahn, Kyung Hyun; Lee, Seung Jong

    2014-12-01

    The local shear stress of non-Brownian suspensions was investigated using the lattice Boltzmann method coupled with the smoothed profile method. Previous studies have only focused on the bulk rheology of complex fluids because the local rheology of complex fluids was not accessible due to technical limitations. In this study, the local shear stress of two-dimensional solid particle suspensions in Couette flow was investigated with the method of planes to correlate non-Newtonian fluid behavior with the structural evolution of concentrated particle suspensions. Shear thickening was successfully captured for highly concentrated suspensions at high particle Reynolds number, and both the local rheology and local structure of the suspensions were analyzed. It was also found that the linear correlation between the local particle stress and local particle volume fraction was dramatically reduced during shear thickening. These results clearly show how the change in local structure of suspensions influences the local and bulk rheology of the suspensions.

  13. A novel combined SLAM based on RBPF-SLAM and EIF-SLAM for mobile system sensing in a large scale environment.

    PubMed

    He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin

    2011-01-01

Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem of large-scale simultaneous localization and mapping (SLAM) and its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents combined SLAM, an efficient submap-based solution to the SLAM problem in a large-scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM can avoid linearization of the robot model during operation and provides robust data association, while EIF-SLAM improves overall computational speed and avoids the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as computational efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment by using the Victoria Park dataset.
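The RBPF half of the combined scheme builds on the generic bootstrap particle filter. A minimal 1D predict-update-resample cycle (our own sketch, far simpler than the authors' SLAM implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, z, motion_std=0.5, obs_std=1.0):
    """One predict-update-resample cycle of a bootstrap particle filter.

    particles : (N,) 1D state samples
    z         : scalar noisy observation of the state
    """
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight each particle by the Gaussian observation likelihood.
    w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    w /= w.sum()
    # Systematic resampling counters weight degeneracy.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx]
```

The resampling step is what keeps the particle set effective: without it, after a few updates nearly all the weight collapses onto a handful of particles, the "particle degeneracy" problem the abstract (and the head record) refers to.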

  14. INTEGRATION OF PARTICLE-GAS SYSTEMS WITH STIFF MUTUAL DRAG INTERACTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao-Chin; Johansen, Anders, E-mail: ccyang@astro.lu.se, E-mail: anders@astro.lu.se

    2016-06-01

Numerical simulation of numerous mm/cm-sized particles embedded in a gaseous disk has become an important tool in the study of planet formation and in understanding the dust distribution in observed protoplanetary disks. However, the mutual drag force between the gas and the particles can become so stiff—particularly because of small particles and/or strong local solid concentration—that an explicit integration of this system is computationally formidable. In this work, we consider the integration of the mutual drag force in a system of Eulerian gas and Lagrangian solid particles. Despite the entanglement between the gas and the particles under the particle-mesh construct, we are able to devise a numerical algorithm that effectively decomposes the globally coupled system of equations for the mutual drag force, and makes it possible to integrate this system on a cell-by-cell basis, which considerably reduces the computational task required. We use an analytical solution for the temporal evolution of each cell to relieve the time-step constraint posed by the mutual drag force, as well as to achieve the highest degree of accuracy. To validate our algorithm, we use an extensive suite of benchmarks with known solutions in one, two, and three dimensions, including the linear growth and the nonlinear saturation of the streaming instability. We demonstrate numerical convergence and satisfactory consistency in all cases. Our algorithm can, for example, be applied to model the evolution of the streaming instability with mm/cm-sized pebbles at high mass loading, which has important consequences for the formation scenarios of planetesimals.
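The cell-local analytic update the authors exploit can be illustrated for a single gas-particle pair (a hedged sketch in our own notation: `eps` is the particle-to-gas mass loading in the cell, and the paper's scheme generalizes this to many particles per cell):

```python
import math

def drag_step(u, v, eps, tau, dt):
    """Exact update for mutually coupled gas (u) and particle (v) velocities.

    The model is  dv/dt = (u - v)/tau  and  du/dt = eps*(v - u)/tau,
    which conserves momentum u + eps*v. The slip w = v - u decays
    exponentially at rate (1 + eps)/tau, so the analytic solution stays
    stable for any dt, however stiff the drag becomes.
    """
    V = (u + eps * v) / (1.0 + eps)                   # conserved COM velocity
    w = (v - u) * math.exp(-(1.0 + eps) * dt / tau)   # decayed slip velocity
    return V - eps * w / (1.0 + eps), V + w / (1.0 + eps)
```

Because the exponential is evaluated exactly, the update is unconditionally stable and momentum-conserving to round-off, which is precisely why an analytic per-cell solution relieves the time-step constraint of an explicit integrator.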

  15. Relativistic distribution function for particles with spin at local thermodynamical equilibrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becattini, F., E-mail: becattini@fi.infn.it; INFN Sezione di Firenze, Florence; Universität Frankfurt, Frankfurt am Main

    2013-11-15

We present an extension of relativistic single-particle distribution function for weakly interacting particles at local thermodynamical equilibrium including spin degrees of freedom, for massive spin 1/2 particles. We infer, on the basis of the global equilibrium case, that at local thermodynamical equilibrium particles acquire a net polarization proportional to the vorticity of the inverse temperature four-vector field. The obtained formula for polarization also implies that a steady gradient of temperature entails a polarization orthogonal to particle momentum. The single-particle distribution function in momentum space extends the so-called Cooper–Frye formula to particles with spin 1/2 and allows us to predict their polarization in relativistic heavy ion collisions at the freeze-out. Highlights: •Single-particle distribution function in local thermodynamical equilibrium with spin. •Polarization of spin 1/2 particles in a fluid at local thermodynamical equilibrium. •Prediction of a new effect: a steady gradient of temperature induces a polarization. •Application to the calculation of polarization in relativistic heavy ion collisions.
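The polarization law stated in the abstract can be written out explicitly. In the form widely quoted in the literature (reproduced here as a sketch; signs and prefactors depend on metric and spin conventions, so treat this as indicative rather than verbatim from the paper), the mean spin vector of a spin-1/2 particle of mass m and momentum p is

```latex
S^{\mu}(x,p) \simeq -\frac{1}{8m}\,\bigl(1 - n_F\bigr)\,
  \epsilon^{\mu\nu\rho\sigma}\, p_{\sigma}\, \varpi_{\nu\rho},
\qquad
\varpi_{\nu\rho} = -\tfrac{1}{2}\bigl(\partial_{\nu}\beta_{\rho}
  - \partial_{\rho}\beta_{\nu}\bigr),
```

where β^μ = u^μ/T is the inverse-temperature four-vector, ϖ is its (thermal) vorticity, and n_F is the Fermi–Dirac distribution. A steady temperature gradient contributes to ϖ and hence, through the antisymmetric contraction with p, yields a polarization orthogonal to the particle momentum, as the abstract notes.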

  16. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China

    PubMed Central

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-01-01

In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430

  17. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    PubMed

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very high-susceptibility zone and the previously investigated landslides.

  18. Motion of particles with inertia in a compressible free shear layer

    NASA Technical Reports Server (NTRS)

    Samimy, M.; Lele, S. K.

    1991-01-01

The effects of particle inertia on flow-tracking accuracy and particle dispersion are studied using direct numerical simulations of 2D compressible free shear layers in the convective Mach number (Mc) range of 0.2 to 0.6. The results show that particle response is well characterized by tau, the ratio of the particle response time to the flow time scale (the Stokes number). The slip between particle and fluid imposes a fundamental limit on the accuracy of optical measurements such as LDV and PIV. The error is found to grow like tau up to tau = 1 and taper off at higher tau. For tau = 0.2 the error is about 2 percent. In flow visualizations based on Mie scattering, particles with tau above 0.05 are found to grossly misrepresent the flow features. These errors are quantified by calculating the dispersion of particles relative to the fluid. Overall, compressibility does not appear to have a significant effect on particle motion in the range of Mc considered here.
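The single-particle response that defines the Stokes number tau can be reproduced with a few lines of numerical integration (an illustrative sketch with our own parameter choices, not the authors' DNS):

```python
import numpy as np

def particle_response(tau, omega=1.0, dt=1e-3, n_steps=60000):
    """Integrate dv/dt = (u - v)/tau for a tracer in u(t) = sin(omega*t)
    and return the particle's steady-state velocity amplitude.

    A perfect tracer (tau -> 0) returns 1; inertia attenuates the
    response by the factor 1/sqrt(1 + (omega*tau)^2).
    """
    v = 0.0
    vs = np.empty(n_steps)
    for i in range(n_steps):
        u = np.sin(omega * i * dt)
        v += dt * (u - v) / tau        # explicit Euler; dt << tau here
        vs[i] = v
    # Measure the amplitude over the last half (transient has decayed).
    return vs[n_steps // 2:].max()
```

For tau = 0.2 the amplitude deficit is 1 − 1/√1.04 ≈ 2 percent, consistent with the error level quoted in the abstract, and the attenuation grows with tau, which is why heavy seed particles grossly misrepresent the flow in Mie-scattering visualizations.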

  19. Fabrication and evaluation of plasmonic light-emitting diodes with thin p-type layer and localized Ag particles embedded by ITO

    NASA Astrophysics Data System (ADS)

    Okada, N.; Morishita, N.; Mori, A.; Tsukada, T.; Tateishi, K.; Okamoto, K.; Tadatomo, K.

    2017-04-01

Plasmonic light-emitting diodes (LEDs) with a thin p-type layer have been demonstrated. Optimal LED device operation was found when using a 20-nm-thick p+-GaN layer. Ag films of different thicknesses were deposited on the thin p-type layer and annealed to form localized Ag particles. The localized Ag particles were embedded in indium tin oxide to form a p-type electrode in the LED structure. After optimization of the plasmonic LED, significant electroluminescence enhancement was observed when the Ag thickness was 9.5 nm. Both upward and downward electroluminescence intensities were improved, and the external quantum efficiency was approximately double that of LEDs without the localized Ag particles. The time-resolved photoluminescence (PL) decay time for the LED with the localized Ag particles was shorter than that for the LED without them. The faster PL decay points to an increase in internal quantum efficiency due to the localized Ag particles. To validate the localized surface plasmon resonance coupling effect, the absorption of the LEDs was investigated experimentally and using simulations.

  20. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-04-01

Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are prone to premature convergence and rely on fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization, and adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. The experimental results demonstrate that the proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the proposed framework was about 3.0 mm and 5.6°, while the previous methods showed at least 3.9 mm and 7.0°. The average position and orientation smoothness of the method was 1.0 mm and 1.6°, significantly better than the other methods (at best 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. 
A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The proposed framework greatly reduced the guidance errors, from (4.3 mm, 7.8°) to (3.0 mm, 5.6°), compared to state-of-the-art methods.

  1. Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy

    NASA Astrophysics Data System (ADS)

    Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido

    2015-02-01

The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, averaged over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques in particle treatments. Unlike current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on density and WEL variations required for particle beam range adaptation.

  2. Discrimination of Mediterranean mussel (Mytilus galloprovincialis) feces in deposited materials by fecal morphology.

    PubMed

    Akiyama, Yoshihiro B; Iseri, Erina; Kataoka, Tomoya; Tanaka, Makiko; Katsukoshi, Kiyonori; Moki, Hirotada; Naito, Ryoji; Hem, Ramrav; Okada, Tomonari

    2017-02-15

    In the present study, we determined the common morphological characteristics of the feces of Mytilus galloprovincialis to develop a method for visually discriminating the feces of this mussel in deposited materials. This method can be used to assess the effect of mussel feces on benthic environments. The accuracy of visual morphology-based discrimination of mussel feces in deposited materials was confirmed by DNA analysis. Eighty-nine percent of mussel feces shared five common morphological characteristics. Of the 372 animal species investigated, only four species shared all five of these characteristics. More than 96% of the samples were visually identified as M. galloprovincialis feces on the basis of morphology of the particles containing the appropriate mitochondrial DNA. These results suggest that mussel feces can be discriminated with high accuracy on the basis of their morphological characteristics. Thus, our method can be used to quantitatively assess the effect of mussel feces on local benthic environments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    DOE PAGES

    Chacon, L.; Chen, G.

    2016-04-19

Here, we extend a recently proposed fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (Φ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical-momentum in any ignorable direction, and preserves the Coulomb gauge ∇ • A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  4. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.

    2016-07-01

    We extend a recently proposed fully implicit PIC algorithm for the Vlasov-Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (ϕ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical-momentum in any ignorable direction, and preserves the Coulomb gauge ∇ ṡ A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.
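The time-centered particle push at the heart of such fully implicit schemes can be sketched in one dimension (a hedged illustration: a fixed external field E(x) stands in for the self-consistent field solve, and the Picard iteration here is the simplest possible nonlinear solver):

```python
def implicit_midpoint_push(x, v, E, qm, dt, tol=1e-12, max_iter=50):
    """Time-centered (Crank-Nicolson) particle push via Picard iteration:

        x_new = x + dt * (v + v_new)/2
        v_new = v + dt * qm * E((x + x_new)/2)

    Both updates are evaluated at the time midpoint, which is what gives
    fully implicit PIC its discrete conservation properties.
    """
    x_new, v_new = x, v
    for _ in range(max_iter):
        x_mid = 0.5 * (x + x_new)
        v_mid = 0.5 * (v + v_new)
        x_next = x + dt * v_mid
        v_next = v + dt * qm * E(x_mid)
        converged = abs(x_next - x_new) < tol and abs(v_next - v_new) < tol
        x_new, v_new = x_next, v_next
        if converged:
            break
    return x_new, v_new
```

Applied to a linear restoring field, the midpoint update is a Cayley transform of a skew-symmetric system, so the discrete energy is conserved to solver tolerance at any time step, a toy analogue of the exact conservation properties claimed above.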

  5. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, L.; Chen, G.

Here, we extend a recently proposed fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (Φ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical-momentum in any ignorable direction, and preserves the Coulomb gauge ∇ • A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  6. Improved Fuzzy K-Nearest Neighbor Using Modified Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Jamaluddin; Siringoringo, Rimbun

    2017-12-01

    Fuzzy k-Nearest Neighbor (FkNN) is one of the most powerful classification methods. The presence of fuzzy concepts successfully improves its performance on almost all classification problems. The main drawback of FkNN is the difficulty of determining its parameters: the number of neighbors (k) and the fuzzy strength (m). Both parameters are very sensitive, and no theory or guideline can deduce what proper values of ‘k’ and ‘m’ should be, which makes FkNN difficult to control. This study uses Modified Particle Swarm Optimization (MPSO) to determine the best values of ‘k’ and ‘m’. MPSO is based on the constriction factor method, an improvement of PSO designed to avoid premature convergence to local optima. The proposed model was tested on the German Credit Dataset, a standardized dataset from the UCI Machine Learning Repository that is widely applied to classification problems. Applying MPSO to the determination of the FkNN parameters is expected to increase classification performance. The experiments indicate that the proposed model achieves better classification performance than the plain FkNN model: the proposed model reaches an accuracy of 81%, whereas the FkNN model alone achieves 70%. Finally, the proposed model is compared with two other classifiers, Naive Bayes and Decision Tree, and again performs better: Naive Bayes achieves 75% accuracy and the decision tree model 70%.
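    The constriction-factor PSO underlying MPSO can be sketched as follows. This is a minimal illustration assuming the standard Clerc–Kennedy coefficients (c1 = c2 = 2.05), not the authors' exact implementation; the `objective` argument is a placeholder for the FkNN cross-validation error as a function of (k, m).

```python
import numpy as np

def pso_constriction(objective, bounds, n_particles=20, iters=50, seed=0):
    """Minimize `objective` with constriction-factor PSO (Clerc & Kennedy).

    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, with phi = c1 + c2 > 4,
    keeps the swarm from diverging and damps oscillation around optima.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T        # per-dimension search box
    dim = len(bounds)
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))  # ~0.7298

    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```

    For real-valued (k, m) candidates, k would be rounded to the nearest integer before evaluating the FkNN error.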

  7. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

    Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as distorted RSSI readings due to channel impairments, including susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper offers valuable insights into the issues and solutions toward achieving high-accuracy passive RFID localization.
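    A minimal sketch of RSSI-based multilateration of the kind described above, assuming a log-distance path-loss model and a linearized least-squares position fix; the parameter values (reference RSSI at 1 m, path-loss exponent) are illustrative, not taken from the paper.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    # Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d)
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def multilaterate(anchors, distances):
    """Linearized least-squares position fix from >= 3 reference tags.

    Subtracting the first range equation |x - a_0|^2 = d_0^2 from the
    others cancels the quadratic |x|^2 term, leaving a linear system.
    """
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

    With noisy RSSI-derived distances, the least-squares solution averages out part of the ranging error; the channel-impairment compensation discussed in the paper would act on the RSSI-to-distance step.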

  8. L-Mapping Solar Energetic Particles from LEO to High Altitudes at High Latitudes

    NASA Astrophysics Data System (ADS)

    Young, S. L.; Wilson, G.

    2017-12-01

    The current solar energetic particle (SEP) hazard specification is focused on geosynchronous orbit with some capability at LEO, but there is no specification for the large region between these orbital regimes. The L-mapping technique, which attempts to fill this capability gap, assumes that there is a simple relationship between magnetic L-shells and SEP penetration boundaries that can be exploited. A previous study compared POES observations that had been mapped to the Van Allen Probes with local observations. It found that more than 90% of the mapped and local fluxes were within a factor of four of each other; this is thought to be sufficient for operational purposes. One concern with the previous study was the limited number of SEP events that have occurred during the Van Allen Probes mission. The current study examines the L-mapping method's accuracy at higher latitudes. Observations from a satellite that was launched into a HEO orbit with a 63° inclination before the peak of solar cycle 24 are compared to L-mapped POES observations. The larger number of events provides better statistics and the 63° orbit inclination allows us to examine the difference between mapping from POES to the magnetic equator, as in the previous study, and mapping from POES to higher latitudes.

  9. Blob-enhanced reconstruction technique

    NASA Astrophysics Data System (ADS)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy, is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first guess intensity distribution is produced with a standard algebraic method, then the distribution is rebuilt as a sum of Gaussian blobs, based on location, intensity and size of agglomerates of light intensity surrounding local maxima. The blobs substitution regularizes the particle shape allowing a reduction of the particles discretization errors and of their elongation in the depth direction. The performances of the blob-enhanced reconstruction technique (BERT) are assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as predictor for the following iterations), have been tested. The results confirm the enhancement in the velocity measurements accuracy, demonstrating a reduction of the bias error due to the ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blobs distributions as a predictor enables further improvement of the convergence of the reconstruction algorithm, with the improvement being more considerable when substituting the blobs more than once during the process. The BERT process is also applied to multi resolution (MR) CSMART reconstructions, permitting simultaneously to achieve remarkable improvements in the flow field measurements and to benefit from the reduction in computational time due to the MR approach. 
Finally, BERT is also tested on experimental data, obtaining an increase of the signal-to-noise ratio in the reconstructed flow field and a higher correlation factor in the velocity measurements with respect to the reconstruction in which the blobs are not substituted.
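    The core blob-substitution step can be illustrated schematically: locate agglomerates of intensity around local maxima and rebuild the field as a sum of Gaussian blobs. This is a simplified sketch (fixed blob width, strict 26-neighbour maxima, amplitude taken from the peak voxel), not the CSMART/BERT implementation.

```python
import numpy as np

def blob_rebuild(vol, sigma=1.0, threshold=0.1):
    """Rebuild a tomographic intensity field as a sum of Gaussian blobs
    centred on local maxima (the core idea of blob-enhanced reconstruction)."""
    vol = np.asarray(vol, float)
    # Compare each voxel against its 26 neighbours via padded shifts.
    pad = np.pad(vol, 1, constant_values=-np.inf)
    shifts = [pad[1 + dz:1 + dz + vol.shape[0],
                  1 + dy:1 + dy + vol.shape[1],
                  1 + dx:1 + dx + vol.shape[2]]
              for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dz, dy, dx) != (0, 0, 0)]
    peaks = (vol > np.max(shifts, axis=0)) & (vol > threshold)

    zz, yy, xx = np.indices(vol.shape)
    rebuilt = np.zeros_like(vol)
    for z, y, x in zip(*np.nonzero(peaks)):
        r2 = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2
        # Replace the agglomerate with a Gaussian blob of fixed width sigma.
        rebuilt += vol[z, y, x] * np.exp(-r2 / (2.0 * sigma ** 2))
    return rebuilt
```

    Regularizing every particle to the same blob shape is what reduces discretization errors and depth-direction elongation in the reconstructed intensity distribution.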

  10. Cryo-EM image alignment based on nonuniform fast Fourier transform.

    PubMed

    Yang, Zhengfan; Penczek, Pawel A

    2008-08-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
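    As a much-simplified illustration of Fourier-based image registration (translation only, integer pixels; the paper's gridding/NUFFT-based method additionally handles rotations with a high-accuracy interpolation scheme), a cross-correlation alignment can be written as:

```python
import numpy as np

def fft_shift_align(ref, img):
    """Estimate the integer translation aligning `img` to `ref` by locating
    the peak of their FFT-based cross-correlation. Returns the shift to
    apply to `img` (e.g. with np.roll) so that it matches `ref`."""
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Wrap shifts larger than half the image size into negative offsets.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, cc.shape))
```

    In a full single-particle pipeline this translational search is combined with a rotational search, and it is the interpolation quality of that rotational step that the gridding method improves.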

  11. Cryo-EM Image Alignment Based on Nonuniform Fast Fourier Transform

    PubMed Central

    Yang, Zhengfan; Penczek, Pawel A.

    2008-01-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform Fast Fourier Transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis. PMID:18499351

  12. Tilt-Pair Analysis of Images from a Range of Different Specimens in Single-Particle Electron Cryomicroscopy

    PubMed Central

    Henderson, Richard; Chen, Shaoxia; Chen, James Z.; Grigorieff, Nikolaus; Passmore, Lori A.; Ciccarelli, Luciano; Rubinstein, John L.; Crowther, R. Anthony; Stewart, Phoebe L.; Rosenthal, Peter B.

    2011-01-01

    The comparison of a pair of electron microscope images recorded at different specimen tilt angles provides a powerful approach for evaluating the quality of images, image-processing procedures, or three-dimensional structures. Here, we analyze tilt-pair images recorded from a range of specimens with different symmetries and molecular masses and show how the analysis can produce valuable information not easily obtained otherwise. We show that the accuracy of orientation determination of individual single particles depends on molecular mass, as expected theoretically since the information in each particle image increases with molecular mass. The angular uncertainty is less than 1° for particles of high molecular mass (∼ 50 MDa), several degrees for particles in the range 1–5 MDa, and tens of degrees for particles below 1 MDa. Orientational uncertainty may be the major contributor to the effective temperature factor (B-factor) describing contrast loss and therefore the maximum resolution of a structure determination. We also made two unexpected observations. Single particles that are known to be flexible showed a wider spread in orientation accuracy, and the orientations of the largest particles examined changed by several degrees during typical low-dose exposures. Smaller particles presumably also reorient during the exposure; hence, specimen movement is a second major factor that limits resolution. Tilt pairs thus enable assessment of orientation accuracy, map quality, specimen motion, and conformational heterogeneity. A convincing tilt-pair parameter plot, where 60% of the particles show a single cluster around the expected tilt axis and tilt angle, provides confidence in a structure determined using electron cryomicroscopy. PMID:21939668

  13. Commissioning and quality assurance of an integrated system for patient positioning and setup verification in particle therapy.

    PubMed

    Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G

    2014-08-01

    In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages when compared with high energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed at CNAO and devoted to patient positioning and setup verification has been assessed using a laser tracking device. The accuracy in calibration and image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests to check the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of the patient positioning system and the patient verification system motion was shown to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the optical tracking system (OTS) were found consistent with expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.

  14. On the applicability of the standard approaches for evaluating a neoclassical radial electric field in a tokamak edge region

    DOE PAGES

    Dorf, M. A.; Cohen, R. H.; Simakov, A. N.; ...

    2013-08-27

    The use of the standard approaches for evaluating a neoclassical radial electric field E r, i.e., the Ampere (or gyro-Poisson) equation, requires accurate calculation of the difference between the gyroaveraged electron and ion particle fluxes (or densities). In the core of a tokamak, the nontrivial difference appears only in high-order corrections to a local Maxwellian distribution due to the intrinsic ambipolarity of particle transport. The evaluation of such high-order corrections may be inconsistent with the accuracy of the standard long wavelength gyrokinetic equation (GKE), thus imposing limitations on the applicability of the standard approaches. However, in the edge of a tokamak, charge-exchange collisions with neutrals and prompt ion orbit losses can drive non-intrinsically ambipolar particle fluxes for which a nontrivial (E r-dependent) difference between the electron and ion fluxes appears already in a low order and can be accurately predicted by the long wavelength GKE. As a result, the parameter regimes where the radial electric field dynamics in the tokamak edge region is dominated by the non-intrinsically ambipolar processes, thus allowing for the use of the standard approaches, are discussed.

  15. Particle Filter-Based Recursive Data Fusion With Sensor Indexing for Large Core Neutron Flux Estimation

    NASA Astrophysics Data System (ADS)

    Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol

    2017-06-01

    We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution for a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus their reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in a large, loosely coupled reactor core is sensed by multiple sensors measuring point fluxes at various locations inside the core. The flux values are coupled to each other through the diffusion equation, and this coupling provides redundancy in the information. It is shown that multiple independent data about the localized flux can be fused together to greatly enhance the estimation accuracy. We also propose a sensor anomaly handling feature in the multisensor PF to maintain the estimation process even when a sensor is faulty or generates data anomalies.
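    The sequential importance sampling step with resampling, which guards against the degeneracy problem mentioned in several of these records, can be sketched for a toy scalar state. This is a generic bootstrap particle filter under assumed Gaussian process and observation models, not the multisensor multivariate estimator of the paper.

```python
import numpy as np

def particle_filter(observations, n_particles=500, process_std=0.1,
                    obs_std=0.5, seed=0):
    """Bootstrap (sequential importance resampling) particle filter for a
    scalar random-walk state observed in Gaussian noise."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-5.0, 5.0, n_particles)  # diffuse initial belief
    estimates = []
    for z in observations:
        # Propagate each particle through the (random-walk) process model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Reweight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Systematic resampling guards against particle degeneracy.
        cum = np.cumsum(w)
        cum[-1] = 1.0
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(cum, positions)]
    return np.array(estimates)
```

    In the multisensor setting, the weight update would multiply the likelihoods of all healthy sensors, which is where the data fusion and anomaly-handling logic enters.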

  16. Multistrategy Self-Organizing Map Learning for Classification Problems

    PubMed Central

    Hasan, S.; Shamsuddin, S. M.

    2011-01-01

    Multistrategy learning with Self-Organizing Maps (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain owing to its ability to handle complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and a tendency to become trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets, with substantial comparisons against the existing SOM network and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and quantisation errors than the other methods, as well as convincing significance tests. PMID:21876686

  17. Gold Nanoparticle Quantitation by Whole Cell Tomography.

    PubMed

    Sanders, Aric W; Jeerage, Kavita M; Schwartz, Cindi L; Curtin, Alexandra E; Chiaramonti, Ann N

    2015-12-22

    Many proposed biomedical applications for engineered gold nanoparticles require their incorporation by mammalian cells in specific numbers and locations. Here, the number of gold nanoparticles inside of individual mammalian stem cells was characterized using fast focused ion beam-scanning electron microscopy based tomography. Enhanced optical microscopy was used to provide a multiscale map of the in vitro sample, which allows cells of interest to be identified within their local environment. Cells were then serially sectioned using a gallium ion beam and imaged using a scanning electron beam. To confirm the accuracy of single cross sections, nanoparticles in similar cross sections were imaged using transmission electron microscopy and scanning helium ion microscopy. Complete tomographic series were then used to count the nanoparticles inside of each cell and measure their spatial distribution. We investigated the influence of slice thickness on counting single particles and clusters as well as nanoparticle packing within clusters. For 60 nm citrate stabilized particles, the nanoparticle cluster packing volume is 2.15 ± 0.20 times the volume of the bare gold nanoparticles.

  18. In Situ Aerosol Detector

    NASA Technical Reports Server (NTRS)

    Vakhtin, Andrei; Krasnoperov, Lev

    2011-01-01

    An affordable technology designed to facilitate extensive global atmospheric aerosol measurements has been developed. This lightweight instrument is compatible with newly developed platforms such as tethered balloons, blimps, kites, and even disposable instruments such as dropsondes. The technology is based on detection of light scattered by aerosol particles, with an optical layout that enhances the performance of the laboratory prototype instrument, allowing detection of smaller aerosol particles and improving the accuracy of aerosol particle size measurement. It has been determined that using a focused illumination geometry without any apertures is advantageous over the originally proposed collimated beam/slit geometry (which is supposed to produce uniform illumination over the beam cross-section). First, the illumination source is used more efficiently, which allows detection of smaller aerosol particles. Second, the integral scattered-light intensity measured for a particle can be corrected for inhomogeneity of the beam intensity profile, based on the measured beam intensity profile and the measured particle location. The particle location (coordinates) in the illuminated sample volume is determined from the information contained in the image frame. This procedure considerably improves the accuracy of aerosol particle size determination.

  19. Predicting airborne particle deposition by a modified Markov chain model for fast estimation of potential contaminant spread

    NASA Astrophysics Data System (ADS)

    Mei, Xiong; Gong, Guangcai

    2018-07-01

    As potential carriers of hazardous pollutants, airborne particles may deposit onto surfaces due to gravitational settling. A modified Markov chain model to predict gravity-induced particle dispersion and deposition is proposed in this paper. The gravity force is considered as a dominant weighting factor to adjust the State Transfer Matrix, which represents the probabilities of change of the particle spatial distribution between consecutive time steps within an enclosure. The model performance has been validated against particle deposition in a ventilation chamber and in a horizontal turbulent duct flow reported in the existing literature. Both the proportion of deposited particles and the dimensionless deposition velocity are adopted to characterize the validation results. Comparisons between our simulated results and the experimental data from the literature show reasonable accuracy. Moreover, it is also found that the dimensionless deposition velocity can be remarkably influenced by particle size and stream-wise velocity in a typical horizontal flow. This study indicates that the proposed model can predict gravity-dominated airborne particle deposition with reasonable accuracy and acceptable computing time.
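    The state-transfer-matrix idea can be illustrated with a toy example: a hypothetical three-zone column plus an absorbing "deposited" state, with downward transfer probabilities biased to mimic gravitational settling. The matrix entries below are illustrative, not taken from the paper.

```python
import numpy as np

def step_concentration(c, P):
    """Advance the particle distribution one time step: the row-stochastic
    state-transfer matrix P holds the probabilities of moving between zones
    (including an absorbing 'deposited' state)."""
    return c @ P

# Hypothetical 3-zone vertical column + absorbing "deposited" floor state.
# Gravity enters as a bias toward the downward / deposited transitions.
P = np.array([
    [0.70, 0.25, 0.00, 0.05],   # top zone: mostly stays, some moves down
    [0.10, 0.60, 0.25, 0.05],   # middle zone
    [0.00, 0.10, 0.60, 0.30],   # bottom zone: strong deposition
    [0.00, 0.00, 0.00, 1.00],   # deposited particles stay deposited
])

c = np.array([1.0, 0.0, 0.0, 0.0])  # all particles released in the top zone
for _ in range(100):
    c = step_concentration(c, P)
# After many steps, nearly all mass accumulates in the deposited state.
```

    The proportion of deposited particles used for validation in the paper corresponds here to the mass in the absorbing state as a function of time.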

  20. Predictive accuracy of particle filtering in dynamic models supporting outbreak projections.

    PubMed

    Safarishahrbijari, Anahita; Teyhouee, Aydin; Waldner, Cheryl; Liu, Juxin; Osgood, Nathaniel D

    2017-09-26

    While a new generation of computational statistics algorithms and availability of data streams raises the potential for recurrently regrounding dynamic models with incoming observations, the effectiveness of such arrangements can be highly subject to specifics of the configuration (e.g., frequency of sampling and representation of behaviour change), and there has been little attempt to identify effective configurations. Combining dynamic models with particle filtering, we explored a solution focusing on creating quickly formulated models regrounded automatically and recurrently as new data becomes available. Given a latent underlying case count, we assumed that observed incident case counts followed a negative binomial distribution. In accordance with the condensation algorithm, each such observation led to updating of particle weights. We evaluated the effectiveness of various particle filtering configurations against each other and against an approach without particle filtering according to the accuracy of the model in predicting future prevalence, given data to a certain point and a norm-based discrepancy metric. We examined the effectiveness of particle filtering under varying times between observations, negative binomial dispersion parameters, and rates with which the contact rate could evolve. We observed that more frequent observations of empirical data yielded super-linearly improved accuracy in model predictions. We further found that for the data studied here, the most favourable assumptions to make regarding the parameters associated with the negative binomial distribution and changes in contact rate were robust across observation frequency and the observation point in the outbreak. Combining dynamic models with particle filtering can perform well in projecting future evolution of an outbreak. 
Most importantly, the remarkable improvements in predictive accuracy resulting from more frequent sampling suggest that investments to achieve efficient reporting mechanisms may be more than paid back by improved planning capacity. The robustness of the results on particle filter configuration in this case study suggests that it may be possible to formulate effective standard guidelines and regularized approaches for such techniques in particular epidemiological contexts. Finally, the work tentatively suggests the potential for health decision makers to secure strong guidance when anticipating outbreak evolution for emerging infectious diseases by combining even very rough models with particle filtering methods.
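    The condensation-style weight update with the negative binomial observation model assumed above can be sketched as follows; this is a simplified illustration parameterized by mean and dispersion, not the authors' code, and the latent counts and weights are hypothetical.

```python
import math

def nb_log_pmf(k, mean, dispersion):
    """Log pmf of a negative binomial with given mean and dispersion r,
    using the mean/dispersion parameterization p = r / (r + mean)."""
    r = dispersion
    p = r / (r + mean)
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(p) + k * math.log(1.0 - p))

def update_weights(weights, latent_counts, observed, dispersion):
    """Condensation-style reweighting: multiply each particle's weight by
    the likelihood of the observed incident count given its latent count,
    then renormalize."""
    new = [w * math.exp(nb_log_pmf(observed, max(m, 1e-9), dispersion))
           for w, m in zip(weights, latent_counts)]
    total = sum(new)
    return [w / total for w in new]
```

    Particles whose latent case counts are close to the observed count receive the largest posterior weight, which is what regrounds the dynamic model as each new observation arrives.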

  1. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ≈ 0.1 is necessary to accurately simulate the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.

  2. An Integrated Approach to Indoor and Outdoor Localization

    DTIC Science & Technology

    2017-04-17

    A two-step process is proposed that performs an initial localization estimate, followed by particle filter based tracking. Initial localization is performed using WiFi and image observations. For tracking we...source. ...mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to...

  3. AGILE as a particle detector: Magnetospheric measurements of 10-100 MeV electrons in L shells less than 1.2

    NASA Astrophysics Data System (ADS)

    Argan, A.; Piano, G.; Tavani, M.; Trois, A.

    2016-04-01

    We study the capability of the AGILE gamma ray space mission in detecting magnetospheric particles (mostly electrons) in the energy range 10-100 MeV. Our measurements focus on the inner magnetic shells with L ≲ 1.2 in the magnetic equator. The instrument characteristics and a quasi-equatorial orbit of ∼500 km altitude make it possible to address several important properties of the particle populations in the inner magnetosphere. We review the on board trigger logic and study the acceptance of the AGILE instrument for particle detection. We find that the AGILE effective geometric factor (acceptance) is R ≃ 50 cm² sr for particle energies in the range 10-100 MeV. Particle event reconstruction makes it possible to determine the particle pitch angle relative to the local magnetic field with good accuracy. We obtain the pitch angle distributions for both the AGILE "pointing" phase (July 2007 to October 2009) and the "spinning" phase (November 2009 to present). In spinning mode, the whole range (0-180 degrees) is accessible every 7 min. We find a pitch angle distribution of the "dumbbell" type with a prominent depression near α = 90°, which is typical of wave-particle resonant scattering and precipitation in the inner magnetosphere. Most importantly, we show that AGILE is not affected by solar particle precipitation events in the magnetosphere. The satellite trajectory intersects magnetic shells in a quite narrow range (1.0 ≲ L ≲ 1.2); AGILE then has a high exposure to a magnetospheric region potentially rich in interesting phenomena. The large particle acceptance in the 10-100 MeV range, the pitch angle determination capability, the L shell exposure, and the solar-free background make AGILE a unique instrument for measuring steady and transient particle events in the inner magnetosphere.
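    The pitch-angle determination mentioned above reduces, once the particle track is reconstructed, to the angle between the particle velocity and the local magnetic field vector; a minimal sketch:

```python
import numpy as np

def pitch_angle_deg(v, B):
    """Pitch angle (degrees) between a particle velocity vector v and the
    local magnetic field vector B: alpha = arccos(v.B / (|v| |B|))."""
    cosa = np.dot(v, B) / (np.linalg.norm(v) * np.linalg.norm(B))
    # Clip guards against round-off pushing |cos| slightly above 1.
    return float(np.degrees(np.arccos(np.clip(cosa, -1.0, 1.0))))
```

    A "dumbbell" distribution corresponds to counts concentrated near 0° and 180°, with the depression near 90° described in the abstract.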

  4. Development of a real-time internal and external marker tracking system for particle therapy: a phantom study using patient tumor trajectory data.

    PubMed

    Cho, Junsang; Cheon, Wonjoong; Ahn, Sanghee; Jung, Hyunuk; Sheen, Heesoon; Park, Hee Chul; Han, Youngyih

    2017-09-01

    Target motion-induced uncertainty in particle therapy is more complicated than that in X-ray therapy, requiring more accurate motion management. Therefore, a hybrid motion-tracking system that can track internal tumor motion as well as an external surrogate of tumor motion was developed. Recently, many correlation tests between internal and external markers in X-ray therapy have been developed; however, the accuracy of such internal/external marker tracking systems, especially in particle therapy, has not yet been sufficiently tested. In this article, the process of installing an in-house hybrid internal/external motion-tracking system is described and the accuracy level of the tracking system was acquired. Our results demonstrated that the developed in-house external/internal combined tracking system has submillimeter accuracy, and can be used clinically in particle therapy as well as in simulation for moving tumor treatment. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  5. Nonlinear aeroacoustic characterization of Helmholtz resonators with a local-linear neuro-fuzzy network model

    NASA Astrophysics Data System (ADS)

    Förner, K.; Polifke, W.

    2017-10-01

    The nonlinear acoustic behavior of Helmholtz resonators is characterized by a data-based reduced-order model, which is obtained by a combination of high-resolution CFD simulation and system identification. It is shown that even in the nonlinear regime, a linear model is capable of describing the reflection behavior at a particular amplitude with quantitative accuracy. This observation motivates the choice of a local-linear model structure for this study, which consists of a network of parallel linear submodels. A so-called fuzzy-neuron layer distributes the input signal over the linear submodels, depending on the root mean square of the particle velocity at the resonator surface. The resulting model structure is referred to as a local-linear neuro-fuzzy network. System identification techniques are used to estimate the free parameters of this model from training data. The training data are generated by CFD simulations of the resonator, with persistent acoustic excitation over a wide range of frequencies and sound pressure levels. The estimated nonlinear, reduced-order models show good agreement with CFD and experimental data over a wide range of amplitudes for several test cases.

  6. FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions

    NASA Astrophysics Data System (ADS)

    Huang, Jingfang; Jia, Jun; Zhang, Bo

    2009-11-01

    A Fortran program package is introduced for the rapid evaluation of the screened Coulomb interactions of N particles in three dimensions. The method utilizes an adaptive oct-tree structure, and is based on the new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related packages, are also available at http://www.fastmultipole.org/. This paper is a brief review of the program and its performance.
    Catalogue identifier: AEEQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL 2.0
    No. of lines in distributed program, including test data, etc.: 12 385
    No. of bytes in distributed program, including test data, etc.: 79 222
    Distribution format: tar.gz
    Programming language: Fortran77 and Fortran90
    Computer: Any
    Operating system: Any
    RAM: Depends on the number of particles, their distribution, and the adaptive tree structure
    Classification: 4.8, 4.12
    Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution-type integral where the Green's function is the fundamental solution of the modified Helmholtz equation.
    Solution method: An adaptive oct-tree is generated, and a new version of the fast multipole method is applied in which the "multipole-to-local" translation operator is diagonalized.
    Restrictions: Only three and six significant digits accuracy options are provided in this version.
    Unusual features: Most of the code is written in Fortran77. Functions for memory allocation from Fortran90 and above are used in one subroutine.
    Additional comments: For supplementary information see http://www.fastmultipole.org/
    Running time: The running time varies depending on the number of particles (denoted by N) in the system and their distribution. The running time scales linearly as a function of N for nearly uniform particle distributions. For three digits accuracy, the solver breaks even with the direct summation method at about N = 750.
    References: [1] L. Greengard, J. Huang, A new version of the fast multipole method for screened Coulomb interactions in three dimensions, J. Comput. Phys. 180 (2002) 642-658.
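    For context, the quantity the FMM accelerates is the screened Coulomb (Yukawa) sum. A direct O(N²) evaluation, which is the reference that the package's accuracy and break-even figures are measured against, can be sketched as:

```python
import math

def yukawa_potential(positions, charges, beta):
    """Direct O(N^2) evaluation of phi_i = sum_{j != i} q_j * exp(-beta*r_ij) / r_ij,
    the screened Coulomb (Yukawa) potential at each particle."""
    n = len(positions)
    phi = [0.0] * n
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = xi - positions[j][0]
            dy = yi - positions[j][1]
            dz = zi - positions[j][2]
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            phi[i] += charges[j] * math.exp(-beta * r) / r
    return phi
```

    The direct sum is exact (to floating-point precision) and serves as the reference; the FMM reproduces it to three or six significant digits while scaling linearly in N.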

  7. Parametric Loop Division for 3D Localization in Wireless Sensor Networks

    PubMed Central

    Ahmad, Tanveer

    2017-01-01

    Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve the localization accuracy. However, they are either limited to two-dimensional (2D) space, or require specific sensor deployment for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known parametric Loop division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and find its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714

  8. Halo abundance matching: accuracy and conditions for numerical convergence

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan

    2015-03-01

    Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies: the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large-volume N-body simulations so that they provide accurate estimates for correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that two-body scattering plays a minor role for the accuracy of N-body simulations, thanks to the relatively small number of crossing-times of dark matter in haloes and the limited force resolution of cosmological simulations.
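    The convergence conditions quoted above can be summarized as a simple predicate. The thresholds below (≥50 halo particles, ≥150 subhalo-progenitor particles, force resolution within 0.1-0.3 rs) are taken directly from the abstract and apply at the stated 10 per cent accuracy level:

```python
def ham_converged(n_halo_particles, n_progenitor_particles, force_resolution, r_s):
    """Check the convergence conditions for HAM ingredients quoted in the
    abstract: particle-count thresholds plus a force-resolution window
    tied to the (sub)halo scale radius r_s."""
    return (n_halo_particles >= 50
            and n_progenitor_particles >= 150
            and 0.1 * r_s <= force_resolution <= 0.3 * r_s)
```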

  9. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen carefully to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Second, it presents parameter selection for nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Third, it comprehensively compares the performance of these algorithms. The results show that particle swarm optimization with constriction coefficient using a ring topology outperforms the other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
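    The best-performing variant reported, PSO with a constriction coefficient and ring topology, can be sketched in a few lines. The values χ ≈ 0.7298 and c1 = c2 = 2.05 are the standard constriction parameters from the PSO literature; the swarm size and iteration count below are illustrative defaults, not the paper's settings:

```python
import random

def pso_constriction_ring(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """PSO with Clerc's constriction coefficient (chi ~ 0.7298, c1 = c2 = 2.05)
    and a ring topology: each particle learns only from its two neighbours."""
    rng = random.Random(seed)
    chi, c = 0.7298, 2.05
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    for _ in range(iters):
        for i in range(n):
            # best personal best within the ring neighbourhood {i-1, i, i+1}
            nb = min((i - 1) % n, i, (i + 1) % n, key=lambda k: pval[k])
            for d in range(dim):
                v[i][d] = chi * (v[i][d]
                                 + c * rng.random() * (pbest[i][d] - x[i][d])
                                 + c * rng.random() * (pbest[nb][d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
    k = min(range(n), key=lambda i: pval[i])
    return pbest[k], pval[k]
```

    Minimizing the sphere function, e.g. `pso_constriction_ring(lambda xs: sum(t*t for t in xs), dim=2)`, exercises the sketch; in a localization setting f would be the ranging-error objective built from anchor distances.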

  10. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen carefully to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Second, it presents parameter selection for nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Third, it comprehensively compares the performance of these algorithms. The results show that particle swarm optimization with constriction coefficient using a ring topology outperforms the other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  11. The application of the large particles method of numerical modeling of the process of carbonic nanostructures synthesis in plasma

    NASA Astrophysics Data System (ADS)

    Abramov, G. V.; Gavrilov, A. N.

    2018-03-01

    The article deals with the numerical solution of a mathematical model of particle motion and interaction in multicomponent plasma, using the electric arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions demands considerable computing resources and time. Applying the large-particles method reduces the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. GPGPU parallel computing with the Nvidia CUDA technology allows the general-purpose computation to be carried out on the graphics card's GPU. A comparative analysis of different approaches to parallelizing the computations was performed to speed up the calculations, and the algorithm that uses shared memory to preserve the accuracy of the solution was chosen. A numerical study of the influence of the particle density within a macroparticle on the motion parameters and the total number of particle collisions in the plasma was carried out for different synthesis modes. A rational range for the coherence coefficient of particles in a macroparticle was computed.
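    The essence of the large-particles (macroparticle) method is to replace a group of physical particles with a single computational particle that conserves the group's mass and momentum, so far fewer pairwise interactions need to be evaluated. A minimal 1-D sketch of this coarse-graining step, with particles given as (mass, position, velocity) tuples, is:

```python
def make_macro_particles(particles, k):
    """Group k physical particles into one macroparticle that conserves the
    group's total mass and momentum (centre-of-mass position and velocity).
    particles: list of (mass, position, velocity) tuples."""
    macros = []
    for i in range(0, len(particles), k):
        group = particles[i:i + k]
        m = sum(p[0] for p in group)                  # total mass
        x = sum(p[0] * p[1] for p in group) / m       # centre of mass
        v = sum(p[0] * p[2] for p in group) / m       # momentum-conserving velocity
        macros.append((m, x, v))
    return macros
```

    The grouping factor k corresponds to the coherence coefficient studied in the paper: larger k means fewer interactions to compute, at the risk of distorting collision statistics.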

  12. Multiparticle systems in κ -Poincaré inspired by (2 +1 )D gravity

    NASA Astrophysics Data System (ADS)

    Kowalski-Glikman, Jerzy; Rosati, Giacomo

    2015-04-01

    Inspired by a Chern-Simons description of 2 +1 -dimensional gravity coupled to point particles we propose a new Lagrangian of a multiparticle system living in κ -Minkowski/κ -Poincaré spacetime. We derive the dynamics of interacting particles with κ -momentum space, alternative to the one proposed in the "principle of relative locality" literature. The model that we obtain takes account of the nonlocal topological interactions between the particles, so that the effective multiparticle action is not a sum of their free actions. In this construction the locality of particle processes is naturally implemented, even for distant observers. In particular a particle process is characterized by a local deformed energy-momentum conservation law. The spacetime transformations are generated by total charges/generators for the composite particle system, and leave unaffected the locality of individual particle processes.

  13. Simulation and performance analysis of a novel high-accuracy sheathless microfluidic impedance cytometer with coplanar electrode layout.

    PubMed

    Caselli, Federica; Bisegna, Paolo

    2017-10-01

    The performance of a novel microfluidic impedance cytometer (MIC) with coplanar configuration is investigated in silico. The main feature of the device is its ability to provide accurate particle sizing despite the well-known sensitivity of the measurement to particle trajectory. The working principle of the device is presented and validated by means of an original virtual laboratory providing close-to-experimental synthetic data streams. It is shown that a metric correlating with particle trajectory can be extracted from the signal traces and used to compensate the trajectory-induced error in the estimated particle size, thus reaching high accuracy. An analysis of relevant parameters of the experimental setup is also presented. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Extinction spectra of suspensions of microspheres: determination of the spectral refractive index and particle size distribution with nanometer accuracy.

    PubMed

    Gienger, Jonas; Bär, Markus; Neukammer, Jörg

    2018-01-10

    A method is presented to infer simultaneously the wavelength-dependent real refractive index (RI) of the material of microspheres and their size distribution from extinction measurements of particle suspensions. To derive the averaged spectral optical extinction cross section of the microspheres from such ensemble measurements, we determined the particle concentration by flow cytometry to an accuracy of typically 2% and adjusted the particle concentration to ensure that perturbations due to multiple scattering are negligible. For analysis of the extinction spectra, we employ Mie theory, a series-expansion representation of the refractive index, and nonlinear numerical optimization. In contrast to other approaches, our method offers the advantage of simultaneously determining the size, size distribution, and spectral refractive index of ensembles of microparticles, including uncertainty estimation.

  15. A hybrid localization technique for patient tracking.

    PubMed

    Rodionov, Denis; Kolev, George; Bushminkin, Kirill

    2013-01-01

    Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes. Each of them has advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used in case of a power outage or in areas with poor WiFi coverage. Magnetometer positioning and cellular networks do not have such problems, but they are not as accurate as WiFi localization. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object's location. The core idea of the algorithm is technology fusion using error estimation methods. To test the accuracy and performance of the algorithm, a simulation environment was implemented. Significant accuracy improvement was shown in practical scenarios.
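    The "technology fusion using error estimation" idea can be illustrated with inverse-variance weighting, a standard data-fusion rule in which each technology's position estimate counts in proportion to the reciprocal of its error variance. This is a generic sketch of that rule, not the paper's exact algorithm:

```python
def fuse_estimates(estimates):
    """Fuse (x, y, variance) position estimates from several localization
    technologies by inverse-variance weighting: the least reliable source
    (largest variance) contributes least to the fused position."""
    wsum = sum(1.0 / var for _, _, var in estimates)
    x = sum(px / var for px, _, var in estimates) / wsum
    y = sum(py / var for _, py, var in estimates) / wsum
    return x, y
```

    With equal variances the rule reduces to a plain average; when one source degrades (e.g. WiFi during a power outage, modeled as a huge variance), the fused estimate falls back smoothly onto the remaining technologies.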

  16. Three-dimensional single-molecule localization with nanometer accuracy using Metal-Induced Energy Transfer (MIET) imaging

    NASA Astrophysics Data System (ADS)

    Karedla, Narain; Chizhik, Anna M.; Stein, Simon C.; Ruhlandt, Daja; Gregor, Ingo; Chizhik, Alexey I.; Enderlein, Jörg

    2018-05-01

    Our paper presents the first theoretical and experimental study using single-molecule Metal-Induced Energy Transfer (smMIET) for localizing single fluorescent molecules in three dimensions. Metal-Induced Energy Transfer describes the resonant energy transfer from the excited state of a fluorescent emitter to surface plasmons in a metal nanostructure. This energy transfer is strongly distance-dependent and can be used to localize an emitter along one dimension. We have used Metal-Induced Energy Transfer in the past for localizing fluorescent emitters with nanometer accuracy along the optical axis of a microscope. The combination of smMIET with single-molecule localization based super-resolution microscopy that provides nanometer lateral localization accuracy offers the prospect of achieving isotropic nanometer localization accuracy in all three spatial dimensions. We give a thorough theoretical explanation and analysis of smMIET, describe its experimental requirements, also in its combination with lateral single-molecule localization techniques, and present first proof-of-principle experiments using dye molecules immobilized on top of a silica spacer, and of dye molecules embedded in thin polymer films.
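    Because the MIET energy transfer is monotonically distance-dependent, recovering the axial position amounts to inverting a measured calibration curve (fluorescence lifetime versus height above the metal film). A generic bisection inverter illustrates this step; the cubic used in the test is a placeholder standing in for a real calibration curve:

```python
def invert_monotone(f, y, lo, hi, tol=1e-9):
    """Invert a monotonically increasing calibration curve f (e.g. MIET
    lifetime vs. emitter height) by bisection: find z in [lo, hi] with
    f(z) ~ y, to within tolerance tol on z."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    In the combined scheme, this axial estimate is paired with the lateral position from standard single-molecule localization to give a 3D coordinate.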

  17. Results and perspectives of particle transport measurements in gases in microgravity

    NASA Astrophysics Data System (ADS)

    Vedernikov, Andrei; Balapanov, Daniyar; Beresnev, Sergey

    2016-07-01

    Solid or liquid particles floating in a gas belong to dispersed systems, most often referred to as aerosols or dust clouds. They are widely spread in nature, involving both environmental and technological issues. They attract growing attention in microgravity, particularly in complex plasma, simulation of protoplanetary dust clouds, atmospheric aerosol, etc. Brownian random walk, motion of particles in gravity, electrostatic and magnetic fields, are well defined. We present the survey showing that the quantitative description of a vast variety of other types of motion is much less accurate, often known only in a limited region of parameters, sometimes described by the contradictory models, poorly verified experimentally. It is true even for the most extensively investigated transport phenomena - thermophoresis and photophoresis, not to say about diffusiophoresis, gravito-photophoresis, various other types of particle motion driven by physicochemical transformation and accommodation peculiarities on the particle-gas interface, combination of different processes. The number of publications grow very quickly, only those dealing with thermophoresis exceeded 300 in 2015. Hence, there is a strong need in high quality experimental data on particle transport properties with growing interest to expand the scope for non-isometric particles, agglomerates, dense clouds, interrelation with the two-phase flow dynamics. In most cases, the accuracy and sometimes the entire possibility of the measurement is limited by the presence of gravity. Floating particles have the density considerably different from that of the gas. They sediment, often with gliding and tumbling, that perturbs the motion trajectory, local hydrodynamic environment around particles, all together complicating definition of the response. Measurements at very high or very low Knudsen numbers (rarefied gas or too big particles) are of particular difficulty. Experiments assume creating a well-defined force, i.e. 
certain potential gradient. Most often, it results in the gas density non-uniformity and thus in perturbations from gravitational convection on the Earth. The advantages of microgravity in measurements of kinetic properties are well admitted since long ago. There are quite many experiments on this subject, well presented and referenced by the scientific community, however, sporadic and statistically not sufficiently worked out. It is timely and there is all the necessary components to getting crucial experimental data using microgravity, especially short duration drop tower flights. Of particular interest is the concurrent use of different set-up, their miniaturisation, combination of calibrated powders, coatings, introducing tracers, extreme particle thermal conductivities and gas accommodation coefficients that broaden the range of the parameters by several decimal orders of magnitude. This will provide the crucial accuracy and reliability to get reference data, to judge existing experimental results and to make the choice among controversial theoretical models. In the coming years, we anticipate the genuine break-through in high-quality particle transport measurements resulting in substantial advancement in aerosol microphysics and rarefied gas dynamics. ESA PRODEX program, the Belgian Federal Science Policy Office, ZARM Drop Tower Operation and Service Company Ltd. are greatly acknowledged.

  18. Resolving occlusion and segmentation errors in multiple video object tracking

    NASA Astrophysics Data System (ADS)

    Cheng, Hsu-Yung; Hwang, Jenq-Neng

    2009-02-01

    In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time processing particles with very small weights. The adaptive appearance for the occluded object uses the prediction results of the Kalman filters to determine the region that should be updated, avoiding the problem of using inadequate information to update the appearance under occlusion. The experimental results have shown that a small number of particles is sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
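    The coupling described, a Kalman prediction steering where and how many particles are drawn, can be sketched as follows. The 1-D constant-velocity model and the particle-count heuristic below are illustrative assumptions, not the paper's exact formulation:

```python
import random

def kalman_predict(x, v, p, q):
    """1-D constant-velocity predict step: state (position x, velocity v),
    position variance p, process noise q."""
    return x + v, v, p + q

def adaptive_samples(x_pred, p_pred, n=None, rng=None):
    """Draw particles only around the Kalman prediction; both the spread and
    (via a simple heuristic) the number of samples shrink as the predicted
    variance p_pred shrinks, so no effort is wasted on low-weight particles."""
    rng = rng or random.Random(0)
    n = n or max(5, int(10 * p_pred))   # fewer particles when confident
    sigma = p_pred ** 0.5
    return [rng.gauss(x_pred, sigma) for _ in range(n)]
```

    Each measurement then updates the Kalman state, which in turn re-centers and re-scales the next round of sampling.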

  19. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
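    The iterative HLS approach described above solves the hyperbolic range-difference equations by successive linearization. The sketch below uses Gauss-Newton with a numerical Jacobian on a 2-D, four-sensor layout; it is a generic illustration of the technique, not the authors' implementation:

```python
import math

def tdoa_residuals(p, sensors, tdoa, c=1.0):
    """Residuals of the hyperbolic equations: range differences to sensor 0
    minus the measured c * TDoA values (tdoa[0] is unused)."""
    d = [math.hypot(p[0] - sx, p[1] - sy) for sx, sy in sensors]
    return [(d[i] - d[0]) - c * tdoa[i] for i in range(1, len(sensors))]

def hls_gauss_newton(sensors, tdoa, p0, iters=50, c=1.0):
    """Hyperbolic least squares by Gauss-Newton with a numerical Jacobian."""
    p, h = list(p0), 1e-6
    for _ in range(iters):
        r = tdoa_residuals(p, sensors, tdoa, c)
        # numerical Jacobian (len(r) rows, 2 columns)
        J = []
        for i in range(len(r)):
            row = []
            for k in range(2):
                q = p[:]
                q[k] += h
                row.append((tdoa_residuals(q, sensors, tdoa, c)[i] - r[i]) / h)
            J.append(row)
        # solve the 2x2 normal equations J^T J dp = -J^T r
        a = sum(row[0] * row[0] for row in J)
        b = sum(row[0] * row[1] for row in J)
        d2 = sum(row[1] * row[1] for row in J)
        g0 = sum(J[i][0] * r[i] for i in range(len(r)))
        g1 = sum(J[i][1] * r[i] for i in range(len(r)))
        det = a * d2 - b * b
        if abs(det) < 1e-12:
            break
        p[0] += (-g0 * d2 + g1 * b) / det
        p[1] += (b * g0 - a * g1) / det
    return p
```

    With noise-free time differences the iteration recovers the emitter exactly; with sampled (quantized) time variables, the residual minimum shifts, which is precisely the error source the survey quantifies.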

  20. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  1. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both the noiseless and noisy conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
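    The hybrid idea, cuckoo-style exploration combined with a simulated-annealing acceptance rule, can be illustrated with a deliberately simplified 1-D sketch. The shrinking Gaussian step stands in for the Lévy flight, and the cooling schedule and all constants are illustrative, not the paper's adaptive parameter rules:

```python
import math
import random

def cuckoo_sa(f, lo, hi, n=15, iters=300, seed=1):
    """Toy hybrid of cuckoo search and simulated annealing: nests are replaced
    by random-walk candidates, and an SA acceptance rule lets occasional worse
    moves through to escape local minima. The global best is tracked separately."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    fs = [f(x) for x in xs]
    k = min(range(n), key=lambda j: fs[j])
    best_x, best_f = xs[k], fs[k]
    T = 1.0
    for t in range(iters):
        i = rng.randrange(n)
        # shrinking Gaussian step stands in for the Levy flight of cuckoo search
        step = (hi - lo) * 0.1 * rng.gauss(0.0, 1.0) / (1.0 + t) ** 0.3
        cand = min(hi, max(lo, xs[i] + step))
        fc = f(cand)
        # SA acceptance: always take improvements, sometimes take worse moves
        if fc < fs[i] or rng.random() < math.exp(-(fc - fs[i]) / max(T, 1e-9)):
            xs[i], fs[i] = cand, fc
        if fc < best_f:
            best_x, best_f = cand, fc
        T *= 0.99  # geometric cooling
    return best_x, best_f
```

    In the parameter-estimation setting, f would be the mismatch between the observed chaotic trajectory and the trajectory simulated with candidate parameters.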

  2. Manipulating particles for micro- and nano-fluidics via floating electrodes and diffusiophoresis

    NASA Astrophysics Data System (ADS)

    Yalcin, Sinan Eren

    The ability to accurately control micro- and nano-particles in a liquid is fundamentally useful for many applications in biology, medicine, pharmacology, tissue engineering, and microelectronics. Therefore, particle manipulations are first studied experimentally using electrodes attached to the bottom of a straight microchannel under an imposed DC or AC electric field. In contrast to a dielectric microchannel possessing a nearly uniform surface charge, a floating electrode is polarized under the imposed electric field. The purpose is to create a non-uniform distribution of the induced surface charge, with zero net surface charge along the floating electrode's surface. Such a field, in turn, generates an induced-charge electro-osmotic (ICEO) flow near the metal strip. The demonstrations using single and multiple floating electrodes at the bottom of a straight microchannel, under an imposed DC electric field, include particle enrichment, movement, trapping, reversal of motion, separation, and particle focusing. A flexible strategy for the on-demand control of particle enrichment and positioning is also proposed and demonstrated by using a locally controlled floating metal electrode. Then, under an externally imposed AC electric field, the particle deposition onto a floating electrode, which is placed in a closed circular cavity, has been experimentally investigated. In the second part of the study, another particle manipulation method was computationally investigated. The diffusiophoretic and electrodiffusiophoretic motion of a charged spherical particle in a nanopore is subjected to an axial electrolyte concentration gradient. The charged particle experiences electrophoresis because of the imposed electric field, and the diffusiophoresis is caused solely by the imposed concentration gradient.
Depending on the magnitude and direction of the imposed concentration gradient, the particle's electrophoretic motion can be accelerated, decelerated, and even reversed in a nanopore by the superimposed diffusiophoresis. Based on the results demonstrated in the present study, it is entirely conceivable to extend the development to design devices for the following objectives: (1) to enrich the concentration of, say, DNA or RNA at a desired location. (2) to act as a filtration device, wherein filtration can be achieved without blocking the microfluidic channel and without any porous material. (3) to act as a microfluidic valve, where the particles can be locally trapped in any desired location and the direction can be switched as desired. (4) to create a nanocomposite material, or even a thin nanocomposite film, on the floating electrode. (5) to create a continuous concentration-gradient-generator nanofluidic device for nanoparticle translocation processes. This may achieve nanometer-scale spatial accuracy in sample sequencing by simultaneously controlling the electric field and the concentration gradient.

  3. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. 
In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
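
    The direct particle-particle sum within each particle's nearby region, which the abstract identifies as the dominant cost and which MD-Engine II computes in hardware, can be sketched in software as a cutoff-limited double loop. The function name, cutoff, and unit choice (Coulomb constant taken as 1) below are illustrative:

```python
import numpy as np

def nearfield_coulomb(pos, q, cutoff):
    """Direct particle-particle Coulomb energy restricted to pairs within
    a cutoff radius; the far-field part would be left to the FMM."""
    energy = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            r = np.linalg.norm(pos[i] - pos[j])
            if r < cutoff:
                energy += q[i] * q[j] / r  # Coulomb constant taken as 1
    return energy

# Two unit charges separated by 0.5 give energy 1 * 1 / 0.5 = 2
pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
energy = nearfield_coulomb(pos, np.array([1.0, 1.0]), cutoff=1.0)  # → 2.0
```

    The O(N^2) scaling of this loop over the nearby region is exactly what motivates offloading it to dedicated arithmetic processors.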

  4. A three-dimensional strain measurement method in elastic transparent materials using tomographic particle image velocimetry

    PubMed Central

    Suzuki, Sara; Aoyama, Yusuke; Umezu, Mitsuo

    2017-01-01

    Background The mechanical interaction between blood vessels and medical devices can induce strains in these vessels. Measuring and understanding these strains is necessary to identify the causes of vascular complications. This study develops a method to measure the three-dimensional (3D) distribution of strain using tomographic particle image velocimetry (Tomo-PIV) and compares the measurement accuracy with the gauge strain in tensile tests. Methods and findings The test system for measuring 3D strain distribution consists of two cameras, a laser, a universal testing machine, an acrylic chamber with a glycerol water solution for adjusting the refractive index with the silicone, and dumbbell-shaped specimens mixed with fluorescent tracer particles. 3D images of the particles were reconstructed from 2D images using a multiplicative algebraic reconstruction technique (MART) and motion tracking enhancement. Distributions of the 3D displacements were calculated using a digital volume correlation. To evaluate the accuracy of the measurement method in terms of particle density and interrogation voxel size, the gauge strain and one of the two cameras for Tomo-PIV were used as a video-extensometer in the tensile test. The results show that the optimal particle density and interrogation voxel size are 0.014 particles per pixel and 40 × 40 × 40 voxels with a 75% overlap. The maximum measurement error was maintained at less than 2.5% in the 4-mm-wide region of the specimen. Conclusions We successfully developed a method to experimentally measure 3D strain distribution in an elastic silicone material using Tomo-PIV and fluorescent particles. To the best of our knowledge, this is the first report that applies Tomo-PIV to investigate 3D strain measurements in elastic materials with large deformation and validates the measurement accuracy. PMID:28910397
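
    Once displacement fields have been recovered by digital volume correlation, strain follows from spatial gradients of displacement. A 1D sketch of the small-strain computation, using a synthetic displacement field rather than Tomo-PIV data:

```python
import numpy as np

# Synthetic 1D displacement field u(x) = 0.02 * x, i.e. a uniform 2% stretch
x = np.linspace(0.0, 10.0, 101)
u = 0.02 * x

# Small-strain definition: epsilon = du/dx, here via finite differences
strain = np.gradient(u, x)  # → 0.02 everywhere
```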

  5. Adaptive Local Realignment of Protein Sequences.

    PubMed

    DeBlasio, Dan; Kececioglu, John

    2018-06-11

    While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
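
    The realignment loop can be sketched generically: find low-accuracy regions, realign each under candidate parameter settings, and keep the best-scoring result. The region representation, accuracy estimator, and realignment operator below are toy stand-ins, not Opal or Facet:

```python
def adaptive_local_realignment(regions, estimate_accuracy, realign, settings,
                               threshold=0.5):
    """For each region with low estimated accuracy, try realigning under
    several candidate parameter settings and keep the best-scoring result
    (the original region is kept if no candidate improves on it)."""
    result = []
    for region in regions:
        best, best_acc = region, estimate_accuracy(region)
        if best_acc < threshold:
            for s in settings:
                cand = realign(region, s)
                acc = estimate_accuracy(cand)
                if acc > best_acc:
                    best, best_acc = cand, acc
        result.append(best)
    return result

# Toy stand-ins: a region is a number, its "accuracy" is the value itself,
# and "realigning" with setting s nudges the value toward s.
regions = [0.9, 0.2, 0.4]
out = adaptive_local_realignment(
    regions,
    estimate_accuracy=lambda r: r,
    realign=lambda r, s: (r + s) / 2,
    settings=[0.8, 1.0],
)
```

    The high-accuracy region (0.9) is left untouched, while the two low-accuracy regions are replaced by their best candidate realignments.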

  6. ParticleCall: A particle filter for base calling in next-generation sequencing systems

    PubMed Central

    2012-01-01

    Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
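
    The sequential Monte Carlo machinery underlying ParticleCall can be illustrated with a minimal bootstrap particle filter. The 1D random-walk state model below is a generic toy, not the paper's HMM of the sequencing-by-synthesis signal:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              proc_std=0.1, obs_std=0.5, rng=None):
    """Minimal bootstrap particle filter for a 1D random-walk state observed
    in Gaussian noise: x_t = x_{t-1} + w_t, y_t = x_t + v_t."""
    rng = rng or np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles = particles + rng.normal(0.0, proc_std, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)             # weight by likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))                  # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)                 # resample
        particles = particles[idx]
    return estimates

obs = [1.0] * 30                      # noiseless observations of a state at 1.0
estimates = bootstrap_particle_filter(obs)
```

    The resampling step is what combats the "particle degeneracy" problem mentioned elsewhere in these records.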

  7. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Tracking dim, small infrared targets is a highly challenging task, chiefly because the appearance of an object submerged in cluttered background changes over time. An efficient appearance model that exploits both a global template and local representations over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse-representation-based algorithm is adopted to calculate confidence values that assign more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by computing the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. All the maps then form a representation that preserves the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model, and the same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, the collaborative confidence values of particles are used to generate the particles' importance weights. Experiments on various infrared sequences validate the tracking capability of the presented algorithm. Experimental results show that the algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  8. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) performs depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, because of strong, non-negligible noise and the diffraction rings surrounding particles, further research is extremely difficult without a precise particle-localization technique. In this paper, we develop a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale-kernel graph-cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. In our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
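
    For reference, the core of the noise-reduction stage is Kalman filtering; a basic scalar Kalman filter for a random-walk signal model (not the paper's hybrid variant) looks like this, with illustrative noise parameters:

```python
def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Basic scalar Kalman filter assuming a random-walk state with process
    variance q and measurement variance r."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                      # predict: state uncertainty grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update toward the measurement
        p = (1.0 - k) * p
        out.append(x)
    return out

smoothed = kalman_1d([5.0] * 50)       # converges to the constant signal
```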

  9. Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering.

    PubMed

    Smith, Zachary J; Chu, Kaiqin; Wachsmann-Hogiu, Sebastian

    2012-01-01

    We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.

  10. UmUTracker: A versatile MATLAB program for automated particle tracking of 2D light microscopy or 3D digital holography data

    NASA Astrophysics Data System (ADS)

    Zhang, Hanqing; Stangner, Tim; Wiklund, Krister; Rodriguez, Alvaro; Andersson, Magnus

    2017-10-01

    We present a versatile and fast MATLAB program (UmUTracker) that automatically detects and tracks particles by analyzing video sequences acquired by either light microscopy or digital in-line holographic microscopy. Our program detects the 2D lateral positions of particles with an algorithm based on the isosceles triangle transform, and reconstructs their 3D axial positions by a fast implementation of the Rayleigh-Sommerfeld model using a radial intensity profile. To validate the accuracy and performance of our program, we first track the 2D position of polystyrene particles using bright field and digital holographic microscopy. Second, we determine the 3D particle position by analyzing synthetic and experimentally acquired holograms. Finally, to highlight the full program features, we profile the microfluidic flow in a 100 μm high flow chamber. This result agrees with computational fluid dynamics simulations. On a regular desktop computer UmUTracker can detect, analyze, and track multiple particles at 5 frames per second for a template size of 201 × 201 in a 1024 × 1024 image. To enhance usability and to make it easy to implement new functions, we used object-oriented programming. UmUTracker is suitable for studies related to particle dynamics, cell localization, colloids, and microfluidic flow measurement. Program Files doi: http://dx.doi.org/10.17632/fkprs4s6xp.1 Licensing provisions: Creative Commons by 4.0 (CC by 4.0) Programming language: MATLAB Nature of problem: 3D multi-particle tracking is a common technique in physics, chemistry, and biology. However, in terms of accuracy, reliable particle tracking is a challenging task, since results depend on sample illumination, particle overlap, motion blur, and noise from the recording sensors. Additionally, computational performance is an issue if, for example, a computationally expensive process such as axial particle position reconstruction from digital holographic microscopy data is executed.
    Versatile, robust tracking programs that handle these concerns and provide powerful post-processing options are scarce. Solution method: UmUTracker is a multi-functional tool to extract particle positions from long video sequences acquired with either light microscopy or digital holographic microscopy. The program provides an easy-to-use graphical user interface (GUI) for both tracking and post-processing that does not require any programming skills to analyze data from particle tracking experiments. UmUTracker first conducts automatic 2D particle detection, even under noisy conditions, using a novel circle detector based on the isosceles triangle sampling technique with a multi-scale strategy. To reduce the computational load for 3D tracking, it uses an efficient implementation of the Rayleigh-Sommerfeld light propagation model. To analyze and visualize the data, an efficient data analysis step is included, which can for example show 4D flow visualization using 3D trajectories. Additionally, UmUTracker is easy to extend with user-customized modules thanks to its object-oriented programming style. Additional comments: Program obtainable from https://sourceforge.net/projects/umutracker/

  11. Pairwise-interaction extended point-particle model for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Akiki, G.; Moore, W. C.; Balachandar, S.

    2017-12-01

    In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors the PIEP model goes beyond local particle volume fraction, and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow that is approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from the undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient the PIEP model is shown to capture the drafting-kissing-tumbling process. In cases of 5 and 80 sedimenting spheres a good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.
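
    Ingredient (i), the pairwise superposition of neighbor-induced perturbations on top of the macroscale flow, can be sketched as follows. The exponential wake used here is a toy perturbation function, whereas the actual PIEP model looks up precomputed, DNS-informed perturbation maps:

```python
import numpy as np

def piep_undisturbed_velocity(pos, neighbors, macro_u, perturbation):
    """Undisturbed fluid velocity at a particle, PIEP-style: the macroscale
    flow plus a pairwise superposition of each neighbor's perturbation
    field, evaluated at the particle's position."""
    u = np.array(macro_u, dtype=float)
    for nb in neighbors:
        u += perturbation(np.asarray(nb, dtype=float) - np.asarray(pos, dtype=float))
    return u

# Toy perturbation: a streamwise velocity deficit decaying with distance
toy = lambda r: np.array([-0.1 * np.exp(-np.linalg.norm(r)), 0.0, 0.0])
u = piep_undisturbed_velocity([0, 0, 0], [[1, 0, 0], [0, 2, 0]],
                              [1.0, 0.0, 0.0], toy)
```

    With no neighbors the estimate reduces to the macroscale flow, which is exactly the standard point-particle limit the PIEP model improves upon.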

  12. Dynamical Friedel oscillations of a Fermi sea

    NASA Astrophysics Data System (ADS)

    Zhang, J. M.; Liu, Y.

    2018-02-01

    We study the scenario of quenching an interaction-free Fermi sea on a one-dimensional lattice ring by suddenly changing the potential of a site. From the point of view of the conventional Friedel oscillation, which is a static or equilibrium problem, it is of interest what temporal and spatial oscillations the local sudden quench will induce. Numerically, the primary observation is that for a generic site, the local particle density switches between two plateaus periodically in time. Making use of the proximity of the realistic model to an exactly solvable model and employing Abel regularization to assign a definite value to a divergent series, we obtain an analytical formula for the heights of the plateaus, which turns out to be very accurate for sites not too close to the quench site. The unexpected relevance and remarkable accuracy of the Abel regularization are yet to be understood. Eventually, when the contribution of the defect mode is also taken into account, the plateaus for those sites close to or on the quench site can also be accurately predicted. We have also studied the infinite-lattice case. In this case, following the quench, the outgoing wave fronts leave behind a stable density oscillation pattern. Because of an interesting single-particle property, this dynamically generated Friedel oscillation differs from its conventional static counterpart only by the defect mode.
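
    Abel regularization assigns to a divergent series Σ aₙ the limit of Σ aₙ xⁿ as x → 1⁻. Grandi's series gives a quick sanity check of the idea (the truncation length and x values below are chosen purely for illustration):

```python
def abel_sum(coeffs, x):
    """Partial Abel sum: evaluate sum_n a_n * x**n for 0 < x < 1. The
    Abel-regularized value of the series is the limit as x approaches 1."""
    return sum(a * x**n for n, a in enumerate(coeffs))

# Grandi's series 1 - 1 + 1 - 1 + ... diverges, yet its Abel sum is 1/2,
# since sum_n (-x)^n = 1 / (1 + x) -> 1/2 as x -> 1.
coeffs = [(-1) ** n for n in range(10000)]
val = abel_sum(coeffs, 0.999)  # close to 1/2
```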

  13. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    NASA Astrophysics Data System (ADS)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding software defects as early as possible is the goal of research on software defect prediction. Software defect prediction should not only state whether defects exist, but also provide a prioritized list of which modules require more intensive testing, so that test resources can be allocated efficiently. Learning to rank is one approach that can provide defect-module ranking data for software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We used 11 public benchmark data sets as experimental data. Our overall results demonstrate that prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and do worse on 1 data set. Thus, we conclude that applying chaotic Gaussian particle swarm optimization in the learning-to-rank approach can improve the accuracy of defect-module ranking on data sets with high-dimensional features.
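
    The "chaotic" ingredient typically means driving a PSO parameter with a chaotic map instead of a fixed schedule. The sketch below modulates the inertia weight with a logistic map on a toy minimization problem; it is a generic chaotic-PSO illustration, not the paper's exact Gaussian variant, and all coefficients are illustrative:

```python
import numpy as np

def chaotic_pso(f, dim=2, n=20, iters=300, seed=0):
    """Particle swarm optimization with a logistic chaotic map driving the
    inertia weight, minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    z = 0.7                                   # chaotic state of the logistic map
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)               # logistic map, chaotic in (0, 1)
        w = 0.4 + 0.5 * z                     # chaotically modulated inertia weight
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

best, best_val = chaotic_pso(lambda p: float(np.sum(p * p)))  # sphere function
```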

  14. Jamming criticality revealed by removing localized buckling excitations.

    PubMed

    Charbonneau, Patrick; Corwin, Eric I; Parisi, Giorgio; Zamponi, Francesco

    2015-03-27

    Recent theoretical advances offer an exact, first-principles theory of jamming criticality in infinite dimension as well as universal scaling relations between critical exponents in all dimensions. For packings of frictionless spheres near the jamming transition, these advances predict that nontrivial power-law exponents characterize the critical distribution of (i) small interparticle gaps and (ii) weak contact forces, both of which are crucial for mechanical stability. The scaling of the interparticle gaps is known to be constant in all spatial dimensions d, including the physically relevant d = 2 and 3, but the value of the weak force exponent remains the object of debate and confusion. Here, we resolve this ambiguity by numerical simulations. We construct isostatic jammed packings with extremely high accuracy, and introduce a simple criterion to separate the contribution of particles that give rise to localized buckling excitations, i.e., bucklers, from the others. This analysis reveals the remarkable dimensional robustness of mean-field marginality and its associated criticality.

  15. Reputation-Based Secure Sensor Localization in Wireless Sensor Networks

    PubMed Central

    He, Jingsha; Xu, Jing; Zhu, Xingye; Zhang, Yuqiang; Zhang, Ting; Fu, Wanqing

    2014-01-01

    Location information of sensor nodes in wireless sensor networks (WSNs) is very important, for it makes information that is collected and reported by the sensor nodes spatially meaningful for applications. Since most current sensor localization schemes rely on location information that is provided by beacon nodes for the regular sensor nodes to locate themselves, the accuracy of localization depends on the accuracy of location information from the beacon nodes. Therefore, the security and reliability of the beacon nodes become critical in the localization of regular sensor nodes. In this paper, we propose a reputation-based security scheme for sensor localization to improve the security and the accuracy of sensor localization in hostile or untrusted environments. In our proposed scheme, the reputation of each beacon node is evaluated based on a reputation evaluation model so that regular sensor nodes can get credible location information from highly reputable beacon nodes to accomplish localization. We also perform a set of simulation experiments to demonstrate the effectiveness of the proposed reputation-based security scheme. Our simulation results show that the proposed security scheme can enhance the security and, hence, improve the accuracy of sensor localization in hostile or untrusted environments. PMID:24982940
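
    The localization step itself is ordinary range-based multilateration; a reputation-weighted, linearized least-squares version can be sketched as below. The choice of the first beacon as linearization reference and the square-root weighting are illustrative, not the paper's exact formulation:

```python
import numpy as np

def reputation_localize(beacons, ranges, reputation):
    """Linearized trilateration in which each beacon's range equation is
    weighted by its reputation, so low-reputation beacons contribute less.
    The first beacon serves as the linearization reference."""
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    w = np.sqrt(np.asarray(reputation, dtype=float)[1:])
    A = 2.0 * (beacons[1:] - beacons[0])
    rhs = (ranges[0] ** 2 - ranges[1:] ** 2
           + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
    pos, *_ = np.linalg.lstsq(A * w[:, None], rhs * w, rcond=None)
    return pos

beacons = [[0, 0], [4, 0], [0, 4], [4, 4]]
true = np.array([1.0, 2.0])
ranges = [np.linalg.norm(true - b) for b in beacons]
pos = reputation_localize(beacons, ranges, reputation=[1, 1, 1, 1])  # → [1.0, 2.0]
```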

  16. On the Quantification of Cellular Velocity Fields.

    PubMed

    Vig, Dhruv K; Hamby, Alex E; Wolgemuth, Charles W

    2016-04-12

    The application of flow visualization in biological systems is becoming increasingly common in studies ranging from intracellular transport to the movements of whole organisms. In cell biology, the standard method for measuring cell-scale flows and/or displacements has been particle image velocimetry (PIV); however, alternative methods exist, such as optical flow constraint. Here we review PIV and optical flow, focusing on the accuracy and efficiency of these methods in the context of cellular biophysics. Although optical flow is not as common, a relatively simple implementation of this method can outperform PIV and is easily augmented to extract additional biophysical/chemical information such as local vorticity or net polymerization rates from speckle microscopy. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
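
    For a single uniform displacement, the optical flow constraint reduces to solving the brightness-constancy equation I_t + v I_x = 0 in a least-squares sense. A 1D sketch on a synthetic signal (not cell imagery) shows the idea:

```python
import numpy as np

def optical_flow_1d(frame0, frame1, dx=1.0, dt=1.0):
    """Least-squares estimate of one uniform 1D velocity from the
    brightness-constancy constraint I_t + v * I_x = 0."""
    ix = np.gradient(frame0, dx)     # spatial intensity derivative
    it = (frame1 - frame0) / dt      # temporal intensity derivative
    return -np.sum(ix * it) / np.sum(ix * ix)

# A Gaussian intensity bump translated by 0.1 units between two frames:
x = np.linspace(-10.0, 10.0, 401)
v = optical_flow_1d(np.exp(-x**2), np.exp(-(x - 0.1) ** 2), dx=x[1] - x[0])
```

    Because the constraint is linearized, the estimate is accurate only for displacements small relative to the feature size, which is one reason PIV remains competitive for large motions.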

  17. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. 
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Anisotropic particles strengthen granular pillars under compression

    NASA Astrophysics Data System (ADS)

    Harrington, Matt; Durian, Douglas J.

    2018-01-01

    We probe the effects of particle shape on the global and local behavior of a two-dimensional granular pillar, acting as a proxy for a disordered solid, under uniaxial compression. This geometry allows for direct measurement of global material response, as well as tracking of all individual particle trajectories. In general, drawing connections between local structure and local dynamics can be challenging in amorphous materials due to lower precision of atomic positions, so this study aims to elucidate such connections. We vary local interactions by using three different particle shapes: discrete circular grains (monomers), pairs of grains bonded together (dimers), and groups of three bonded in a triangle (trimers). We find that dimers substantially strengthen the pillar and the degree of this effect is determined by orientational order in the initial condition. In addition, while the three particle shapes form void regions at distinct rates, we find that anisotropies in the local amorphous structure remain robust through the definition of a metric that quantifies packing anisotropy. Finally, we highlight connections between local deformation rates and local structure.

  19. Comprehensive model for predicting elemental composition of coal pyrolysis products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Andrew P.; Shutt, Tim; Fletcher, Thomas H.

    Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles that are released during pyrolysis are of the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.

  20. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement while rigorously obeying the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237
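
    The collision pre-check can be sketched as a sequential Brownian displacement with rejection of overlapping moves. This toy version only enforces non-overlap for hard spheres; it does not implement the paper's detailed-balance-preserving reaction handling:

```python
import numpy as np

def bd_step(pos, radius, D, dt, rng):
    """One Brownian-dynamics step for hard spheres: each particle's proposed
    Gaussian displacement is checked against the current positions of the
    others and rejected if it would create an overlap."""
    std = np.sqrt(2.0 * D * dt)           # Einstein relation for the step size
    new = pos.copy()
    for i in range(len(pos)):
        trial = pos[i] + rng.normal(0.0, std, pos.shape[1])
        d = np.linalg.norm(new - trial, axis=1)
        d[i] = np.inf                     # ignore self-distance
        if np.all(d >= 2.0 * radius):     # accept only collision-free moves
            new[i] = trial
    return new

rng = np.random.default_rng(1)
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
for _ in range(200):
    pos = bd_step(pos, radius=0.3, D=1.0, dt=0.01, rng=rng)
```

    By construction, the pair separation never drops below one particle diameter, however many steps are taken.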

  1. Uncertainty characterization of particle location from refocused plenoptic images.

    PubMed

    Hall, Elise M; Guildenbecher, Daniel R; Thurow, Brian S

    2017-09-04

    Plenoptic imaging is a 3D imaging technique that has been applied for quantification of 3D particle locations and sizes. This work experimentally evaluates the accuracy and precision of such measurements by investigating a static particle field translated to known displacements. Measured 3D displacement values are determined from sharpness metrics applied to volumetric representations of the particle field created using refocused plenoptic images, corrected using a recently developed calibration technique. Comparison of measured and known displacements for many thousands of particles allows for evaluation of measurement uncertainty. Mean displacement error, as a measure of accuracy, is shown to agree with predicted spatial resolution over the entire measurement domain, indicating robustness of the calibration methods. On the other hand, variation in the error, as a measure of precision, fluctuates as a function of particle depth in the optical direction. Error shows the smallest variation within the predicted depth of field of the plenoptic camera, with a gradual increase outside this range. The quantitative uncertainty values provided here can guide future measurement optimization and will serve as useful metrics for design of improved processing algorithms.

  2. [Research on the measurement range of particle size with total light scattering method in vis-IR region].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Dai, Jing-min

    2008-12-01

    The problem of determining the particle size range in the visible-infrared region was studied using the independent model algorithm in the total scattering technique. By analyzing and comparing the accuracy of the inversion results for different R-R distributions, the measurement range of particle size was determined. Meanwhile, a corrected extinction coefficient was used in place of the original extinction coefficient, which allows the measurement range of particle size to be determined with higher accuracy. Simulation experiments illustrate that the particle size distribution can be retrieved very well in the range from 0.05 to 18 μm at relative refractive index m = 1.235 in the visible-infrared spectral region, and that the measurement range of particle size varies with the wavelength range and relative refractive index. It is feasible to use the constrained least squares inversion method in the independent model to overcome the influence of measurement error, and the inversion results remain satisfactory when 1% stochastic noise is added to the light extinction values.
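
    Inversions of this kind solve a linear extinction system e = K f for the size distribution f under constraints. The sketch below uses Tikhonov regularization plus a nonnegativity clip on a toy kernel; it is a simplified stand-in for the constrained least squares inversion named in the abstract:

```python
import numpy as np

def constrained_lsq_inversion(K, e, lam=1e-6):
    """Tikhonov-regularized least-squares solution of e = K f, with a
    nonnegativity clip on the recovered size distribution f."""
    n = K.shape[1]
    f = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ e)
    return np.clip(f, 0.0, None)

rng = np.random.default_rng(0)
K = rng.random((20, 5))               # toy kernel: 20 wavelengths x 5 size bins
f_true = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
f_rec = constrained_lsq_inversion(K, K @ f_true)
```

    The regularization term lam * I is what keeps the inversion stable when noise is added to the extinction data.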

  3. Comparison of Satellite Observations of Aerosol Optical Depth to Surface Monitor Fine Particle Concentration

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; AlSaadi, Jassim A.; Neil, Doreen O.; Pierce, Robert B.; Pippin, Margaret R.; Roell, Marilee M.; Kittaka, Chieko; Szykman, James J.

    2004-01-01

    Under NASA's Earth Science Applications Program, the Infusing satellite Data into Environmental Applications (IDEA) project examined the relationship between satellite observations and surface monitors of air pollutants to facilitate a more capable and integrated observing network. This report provides a comparison of satellite aerosol optical depth to surface monitor fine particle concentration observations for the month of September 2003 at more than 300 individual locations in the continental US. During September 2003, IDEA provided prototype, near real-time data-fusion products to the Environmental Protection Agency (EPA) directed toward improving the accuracy of EPA's next-day Air Quality Index (AQI) forecasts. Researchers from NASA Langley Research Center and EPA used data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument combined with EPA ground network data to create a NASA-data-enhanced Forecast Tool. Air quality forecasters used this tool to prepare their forecasts of particle pollution, or particulate matter less than 2.5 microns in diameter (PM2.5), for the next-day AQI. The archived data provide a rich resource for further studies and analysis. The IDEA project uses data sets and models developed for tropospheric chemistry research to assist federal, state, and local agencies in making decisions concerning air quality management to protect public health.

  4. Hydrodynamic simulation and particle-tracking techniques for identification of source areas to public-water intakes on the St. Clair-Detroit river waterway in the Great Lakes Basin

    USGS Publications Warehouse

    Holtschlag, David J.; Koschik, John A.

    2004-01-01

    Source areas to public water intakes on the St. Clair-Detroit River Waterway were identified by use of hydrodynamic simulation and particle-tracking analyses to help protect public supplies from contaminant spills and discharges. This report describes techniques used to identify these areas and illustrates typical results using selected points on St. Clair River and Lake St. Clair. Parameterization of an existing two-dimensional hydrodynamic model (RMA2) of the St. Clair-Detroit River Waterway was enhanced to improve estimation of local flow velocities. Improvements in simulation accuracy were achieved by computing channel roughness coefficients as a function of flow depth, and determining eddy viscosity coefficients on the basis of velocity data. The enhanced parameterization was combined with refinements in the model mesh near 13 public water intakes on the St. Clair-Detroit River Waterway to improve the resolution of flow velocities while maintaining consistency with flow and water-level data. Scenarios representing a range of likely flow and wind conditions were developed for hydrodynamic simulation. Particle-tracking analyses combined advective movements described by hydrodynamic scenarios with random components associated with sub-grid-scale movement and turbulent mixing to identify source areas to public water intakes.
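
    The particle-tracking scheme described above (advective movement from the hydrodynamic scenario plus a random component for sub-grid-scale movement and turbulent mixing) can be sketched as a random-walk step. The uniform velocity field and the eddy diffusivity below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def track_particles(x0, velocity, diffusivity, dt, n_steps):
    """Advect particles and add a random-walk term for sub-grid-scale mixing.

    x0          : (n, 2) initial positions (m)
    velocity    : callable mapping positions to (n, 2) advective velocities (m/s)
    diffusivity : assumed isotropic horizontal eddy diffusivity (m^2/s)
    """
    x = np.array(x0, dtype=float)
    sigma = np.sqrt(2.0 * diffusivity * dt)   # random-walk step scale
    for _ in range(n_steps):
        x += velocity(x) * dt + sigma * rng.standard_normal(x.shape)
    return x

# Hypothetical uniform 0.5 m/s downstream flow with 1 m^2/s eddy diffusivity
uniform = lambda x: np.array([0.5, 0.0]) * np.ones_like(x)
x_final = track_particles(np.zeros((1000, 2)), uniform, diffusivity=1.0,
                          dt=1.0, n_steps=100)
print(x_final.mean(axis=0))   # cloud centroid drifts ~50 m downstream
```

    Running this backward in time from an intake (negating the velocity) is the usual way such source areas are delineated.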

  5. Particle-based membrane model for mesoscopic simulation of cellular dynamics

    NASA Astrophysics Data System (ADS)

    Sadeghi, Mohsen; Weikl, Thomas R.; Noé, Frank

    2018-01-01

    We present a simple and computationally efficient coarse-grained and solvent-free model for simulating lipid bilayer membranes. In order to be used in concert with particle-based reaction-diffusion simulations, the model is purely based on interacting and reacting particles, each representing a coarse patch of a lipid monolayer. Particle interactions include nearest-neighbor bond-stretching and angle-bending and are parameterized so as to reproduce the local membrane mechanics given by the Helfrich energy density over a range of relevant curvatures. In-plane fluidity is implemented with Monte Carlo bond-flipping moves. The physical accuracy of the model is verified by five tests: (i) Power spectrum analysis of equilibrium thermal undulations is used to verify that the particle-based representation correctly captures the dynamics predicted by the continuum model of fluid membranes. (ii) It is verified that the input bending stiffness, against which the potential parameters are optimized, is accurately recovered. (iii) Isothermal area compressibility modulus of the membrane is calculated and is shown to be tunable to reproduce available values for different lipid bilayers, independent of the bending rigidity. (iv) Simulation of two-dimensional shear flow under a gravity force is employed to measure the effective in-plane viscosity of the membrane model and show the possibility of modeling membranes with specified viscosities. (v) Interaction of the bilayer membrane with a spherical nanoparticle is modeled as a test case for large membrane deformations and budding involved in cellular processes such as endocytosis. The results are shown to coincide well with the predicted behavior of continuum models, and the membrane model successfully mimics the expected budding behavior. We expect our model to be of high practical usability for ultra coarse-grained molecular dynamics or particle-based reaction-diffusion simulations of biological systems.

  6. Advective transport observations with MODPATH-OBS--documentation of the MODPATH observation process

    USGS Publications Warehouse

    Hanson, R.T.; Kauffman, L.K.; Hill, M.C.; Dickinson, J.E.; Mehl, S.W.

    2013-01-01

    The MODPATH-OBS computer program described in this report is designed to calculate simulated equivalents for observations related to advective groundwater transport that can be represented in a quantitative way by using simulated particle-tracking data. The simulated equivalents supported by MODPATH-OBS are (1) distance from a source location at a defined time, or proximity to an observed location; (2) time of travel from an initial location to defined locations, areas, or volumes of the simulated system; (3) concentrations used to simulate groundwater age; and (4) percentages of water derived from contributing source areas. Although particle tracking only simulates the advective component of conservative transport, effects of non-conservative processes such as retardation can be approximated through manipulation of the effective-porosity value used to calculate velocity based on the properties of selected conservative tracers. This program can also account for simple decay or production, but it cannot account for diffusion. Dispersion can be represented through direct simulation of subsurface heterogeneity and the use of many particles. MODPATH-OBS acts as a postprocessor to MODPATH, so that the sequence of model runs generally required is MODFLOW, MODPATH, and MODPATH-OBS. The versions of MODFLOW and MODPATH that support the version of MODPATH-OBS presented in this report are MODFLOW-2005 or MODFLOW-LGR, and MODPATH-LGR. MODFLOW-LGR is derived from MODFLOW-2005, MODPATH 5, and MODPATH 6 and supports local grid refinement. MODPATH-LGR is derived from MODPATH 5. It supports the forward and backward tracking of particles through locally refined grids and provides the output needed for MODPATH-OBS. For a single grid and no observations, MODPATH-LGR results are equivalent to MODPATH 5.
MODPATH-LGR and MODPATH-OBS simulations can use nearly all of the capabilities of MODFLOW-2005 and MODFLOW-LGR; for example, simulations may be steady-state, transient, or a combination. Though the program name MODPATH-OBS specifically refers to observations, the program can also be used to calculate model predictions of observations. MODPATH-OBS is primarily intended for use with separate programs that conduct sensitivity analysis, data needs assessment, parameter estimation, and uncertainty analysis, such as UCODE_2005 and PEST. In many circumstances, refined grids in selected parts of a model are important to simulated hydraulics, detailed inflows and outflows, or other system characteristics. MODFLOW-LGR and MODPATH-LGR support accurate local grid refinement in which both mass (flows) and energy (head) are conserved across the local grid boundary. MODPATH-OBS is designed to take advantage of these capabilities. For example, particles tracked between a pumping well and a nearby stream, which are simulated poorly if the river and well are located in a single large grid cell, can be simulated with improved accuracy using a locally refined grid in MODFLOW-LGR, MODPATH-LGR, and MODPATH-OBS. The locally-refined-grid approach can provide more accurate simulated equivalents to observed transport between the well and the river. The documentation presented here includes a brief discussion of previous work, a description of the methods, and detailed descriptions of the required input files and how the output files are typically used.

  7. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization

    PubMed Central

    Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan

    2017-01-01

    This paper provides a system and method for correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements affect the final accuracy, a measuring system attached to the platform collects the texture image of the platform base bulkhead in real time. Through image registration, the displacement vector of the platform relative to its bulkhead can be calculated to determine the angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements reduce the coordinate transformation errors and thus improve the localization accuracy. Even this relatively simple method improves the localization accuracy by 14.3%. PMID:28273845
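
    The displacement vector between two bulkhead texture images could, for example, be obtained by phase correlation. This is a generic registration sketch on synthetic data, not necessarily the registration method the authors used.

```python
import numpy as np

def phase_correlation(a, b):
    """Integer-pixel shift (dy, dx) such that b == np.roll(a, (dy, dx), axis=(0, 1))."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                    # keep only the phase difference
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                  # map wrap-around peaks to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
img = rng.random((64, 64))                    # stand-in for a bulkhead texture image
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(img, shifted))        # (5, -3)
```

    The recovered pixel shift, scaled by the imaging geometry, gives the platform displacement from which the angular displacements are then derived.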

  8. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization.

    PubMed

    Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan

    2017-03-04

    This paper provides a system and method for correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements affect the final accuracy, a measuring system attached to the platform collects the texture image of the platform base bulkhead in real time. Through image registration, the displacement vector of the platform relative to its bulkhead can be calculated to determine the angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements reduce the coordinate transformation errors and thus improve the localization accuracy. Even this relatively simple method improves the localization accuracy by 14.3%.

  9. High accuracy position response calibration method for a micro-channel plate ion detector

    NASA Astrophysics Data System (ADS)

    Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.

    2016-11-01

    We have developed a position response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.

  10. Single-Particle Mobility Edge in a One-Dimensional Quasiperiodic Optical Lattice

    NASA Astrophysics Data System (ADS)

    Lüschen, Henrik P.; Scherg, Sebastian; Kohlert, Thomas; Schreiber, Michael; Bordia, Pranjal; Li, Xiao; Das Sarma, S.; Bloch, Immanuel

    2018-04-01

    A single-particle mobility edge (SPME) marks a critical energy separating extended from localized states in a quantum system. In one-dimensional systems with uncorrelated disorder, a SPME cannot exist, since all single-particle states localize for arbitrarily weak disorder strengths. However, in a quasiperiodic system, the localization transition can occur at a finite detuning strength and SPMEs become possible. In this Letter, we find experimental evidence for the existence of such a SPME in a one-dimensional quasiperiodic optical lattice. Specifically, we find a regime where extended and localized single-particle states coexist, in good agreement with theoretical simulations, which predict a SPME in this regime.
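
    The localization physics can be illustrated with the standard Aubry-André model; note this is a sketch of the simpler model, not the generalized quasiperiodic lattice of the experiment. In the pure Aubry-André model all states localize together at detuning λ = 2J (no SPME), which the inverse participation ratio (IPR) makes visible numerically; the generalized lattice breaks this energy-independence and allows extended and localized states to coexist.

```python
import numpy as np

def aubry_andre_ipr(n_sites=233, J=1.0, lam=1.0, phi=0.3):
    """Mean inverse participation ratio (IPR) of the Aubry-Andre model:
    nearest-neighbor hopping -J plus on-site detuning lam*cos(2*pi*beta*i + phi).
    IPR ~ 1/n_sites for extended states and O(1) for localized states."""
    beta = (np.sqrt(5.0) - 1.0) / 2.0         # irrational wave-number ratio
    i = np.arange(n_sites)
    H = np.diag(lam * np.cos(2.0 * np.pi * beta * i + phi))
    H -= J * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    _, vecs = np.linalg.eigh(H)
    return float(np.mean(np.sum(np.abs(vecs) ** 4, axis=0)))

print(aubry_andre_ipr(lam=1.0))   # lam < 2J: extended phase, IPR ~ 1/n_sites
print(aubry_andre_ipr(lam=3.0))   # lam > 2J: localized phase, IPR of order one
```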

  11. Bell Test experiments explained without entanglement

    NASA Astrophysics Data System (ADS)

    Boyd, Jeffrey

    2011-04-01

    by Jeffrey H. Boyd. Jeffreyhboyd@gmail.com. John Bell proposed a test of what was called "local realism." However that is a different view of reality than we hold. Bell incorrectly assumed the validity of wave particle dualism. According to our model waves are independent of particles; wave interference precedes the emission of a particle. This results in two conclusions. First the proposed inequalities that apply to "local realism" in Bell's theorem do not apply to this model. The alleged mathematics of "local realism" is therefore wrong. Second, we can explain the Bell Test experimental results (such as the experiments done at Innsbruck) without any need for entanglement, non-locality, or particle superposition.

  12. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Xiongbiao, E-mail: xluo@robarts.ca; Wan, Ying, E-mail: Ying.Wan@student.uts.edu.au; He, Xiangjian

    Purpose: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate these information sources for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse these information sources for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, which is one of the stochastic evolutionary computation algorithms, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Because the evolutionary computation method is usually limited by premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance in the enhanced algorithm. Results: The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than that of the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29.
Conclusions: A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors from (4.3 mm, 7.8°) to (3.0 mm, 5.6°) compared to state-of-the-art methods.
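
    The enhanced method builds on standard global-best particle swarm optimization, which can be sketched as follows. The observation boosting and adaptive evolutionary factors of the paper are not included; all parameters are generic defaults, and the sphere function stands in for the real fusion cost.

```python
import numpy as np

rng = np.random.default_rng(5)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimal global-best particle swarm optimization (minimization)."""
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=3)  # sphere test function
print(best_x, best_f)
```

    In the paper's setting, the cost function would instead score a candidate endoscope pose against the camera images and electromagnetic sensor measurements.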

  13. Accuracy of RGD approximation for computing light scattering properties of diffusing and motile bacteria. [Rayleigh-Gans-Debye

    NASA Technical Reports Server (NTRS)

    Kottarchyk, M.; Chen, S.-H.; Asano, S.

    1979-01-01

    The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.

  14. Evaluation of flow hydrodynamics in a pilot-scale dissolved air flotation tank: a comparison between CFD and experimental measurements.

    PubMed

    Lakghomi, B; Lawryshyn, Y; Hofmann, R

    2015-01-01

    Computational fluid dynamics (CFD) models of dissolved air flotation (DAF) have shown formation of stratified flow (back and forth horizontal flow layers at the top of the separation zone) and its impact on improved DAF efficiency. However, there has been a lack of experimental validation of CFD predictions, especially in the presence of solid particles. In this work, for the first time, both two-phase (air-water) and three-phase (air-water-solid particles) CFD models were evaluated at pilot scale using measurements of residence time distribution, bubble layer position and bubble-particle contact efficiency. The pilot-scale results confirmed the accuracy of the CFD model for both two-phase and three-phase flows, but showed that the accuracy of the three-phase CFD model would partly depend on the estimation of bubble-particle attachment efficiency.

  15. Evolutionary Algorithms Approach to the Solution of Damage Detection Problems

    NASA Astrophysics Data System (ADS)

    Salazar Pinto, Pedro Yoajim; Begambre, Oscar

    2010-09-01

    This work proposes a new self-configured hybrid algorithm combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the Guide Particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.

  16. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
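
    The analytic point location that tetrahedral decomposition enables reduces to computing barycentric coordinates: a point is inside a tetrahedron exactly when all four coordinates are non-negative, and the same coordinates weight the velocity interpolation. A minimal sketch:

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p with respect to a tetrahedron
    given as a 4x3 array of vertex coordinates."""
    T = (tet[1:] - tet[0]).T                  # 3x3 matrix of edge vectors
    lam = np.linalg.solve(T, p - tet[0])      # coordinates w.r.t. vertices 1..3
    return np.concatenate(([1.0 - lam.sum()], lam))

def contains(p, tet, eps=1e-12):
    """True if p lies inside the tetrahedron (all barycentric coords >= 0)."""
    return bool(np.all(barycentric(p, tet) >= -eps))

# Unit tetrahedron
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(contains(np.array([0.2, 0.2, 0.2]), tet))   # True
print(contains(np.array([0.6, 0.6, 0.6]), tet))   # False
```

    Interpolating the velocity as the barycentric-weighted sum of the four vertex velocities is what lets the tracer avoid Newton-Raphson iteration in curvilinear cells.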

  17. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    PubMed

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
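
    DeepEM itself is an eight-layer trained CNN; the toy below only illustrates the basic ingredients of CNN-based particle scoring (convolution, ReLU, pooling, and a logistic score) on a synthetic patch with untrained weights, and is in no way the authors' network.

```python
import numpy as np

rng = np.random.default_rng(7)

def conv2d(x, k):
    """Valid-mode 2-D cross-correlation of one channel with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def particle_score(patch, kernel, w, b):
    """conv -> ReLU -> global average pool -> logistic 'particle' probability."""
    h = np.maximum(conv2d(patch, kernel), 0.0)   # feature map after ReLU
    z = h.mean()                                 # global average pooling
    return 1.0 / (1.0 + np.exp(-(w * z + b)))    # sigmoid score in (0, 1)

kernel = rng.standard_normal((5, 5))             # untrained, illustrative weights
patch = rng.standard_normal((32, 32))            # stand-in for a micrograph patch
p = particle_score(patch, kernel, w=2.0, b=-1.0)
print(p)                                         # probability-like score in (0, 1)
```

    A real picker slides such a scorer (with many learned kernels and layers) over the micrograph and keeps local maxima above a threshold as particle candidates.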

  18. Bell theorem without inequalities for two spinless particles

    NASA Astrophysics Data System (ADS)

    Bernstein, Herbert J.; Greenberger, Daniel M.; Horne, Michael A.; Zeilinger, Anton

    1993-01-01

    We use the Greenberger-Horne-Zeilinger [in Bell's Theorem, Quantum Theory,and Conceptions of the Universe, edited by M. Kafatos (Kluwer Academic, Dordrecht, 1989)] approach to present three demonstrations of the failure of Einstein-Podolsky-Rosen (EPR) [Phys. Rev. 47, 777 (1935)] local realism for the case of two spinless particles in a two-particle interferometer. The original EPR assumptions of locality and reality do not suffice for this. First, we use the EPR assumptions of locality and reality to establish that in a two-particle interferometer, the path taken by each particle is an element of reality. Second, we supplement the EPR premises by the postulate that when the path taken by a particle is an element of reality, all paths not taken are empty. We emphasize that our approach is not applicable to a single-particle interferometer because there the path taken by the particle cannot be established as an element of reality. We point out that there are real conceptual differences between single-particle, two-particle, and multiparticle interferometry.

  19. Diffusion of microspheres in shear flow near a wall: use to measure binding rates between attached molecules.

    PubMed Central

    Pierres, A; Benoliel, A M; Zhu, C; Bongrand, P

    2001-01-01

    The rate and distance-dependence of association between surface-attached molecules may be determined by monitoring the motion of receptor-bearing spheres along ligand-coated surfaces in a flow chamber (Pierres et al., Proc. Natl. Acad. Sci. U.S.A. 95:9256-9261, 1998). Particle arrests reveal bond formation, and the particle-to-surface distance may be estimated from the ratio between the velocity and the wall shear rate. However, several problems are raised. First, data interpretation requires extensive computer simulations. Second, the relevance of standard results from fluid mechanics to micrometer-size particles separated from surfaces by nanometer distances is not fully demonstrated. Third, the wall shear rate must be known with high accuracy. Here we present a simple derivation of an algorithm permitting one to simulate the motion of spheres near a plane in shear flow. We check that theoretical predictions are consistent with the experimental dependence of motion on medium viscosity or particle size, and with the requirement that the equilibrium particle height distribution follow Boltzmann's law. The determination of the statistical relationship between particle velocity and acceleration allows one to derive the wall shear rate with 1 s⁻¹ accuracy and the Hamaker constant of interaction between the particle and the wall with a sensitivity better than 10⁻²¹ J. It is demonstrated that the correlation between particle height and mean velocity during a time interval Δt is maximal when Δt is about 0.1-0.2 s for a particle of 1.4-μm radius. When the particle-to-surface distance ranges between 10 and 40 nm, the particle height distribution may be obtained with a standard deviation ranging between 8 and 25 nm, provided the average velocity during a 160-ms period of time is determined with 10% accuracy. It is concluded that the flow chamber allows one to detect the formation of individual bonds with a minimal lifetime of 40 ms in the presence of a disruptive force of approximately 5 pN and to assess the distance dependence within the tens-of-nanometers range. PMID:11423392
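
    The simulation idea (a sphere advected by shear at a height-dependent velocity while diffusing, with the gap recovered from the ratio of mean velocity to wall shear rate) can be sketched as below. This toy ignores the hindered near-wall mobility, sedimentation, and colloidal forces that the paper's full simulations account for, and all parameter values are order-of-magnitude assumptions, not values fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(8)

# Assumed, order-of-magnitude values
D = 0.15e-12      # bulk diffusion coefficient (m^2/s)
shear = 20.0      # wall shear rate (1/s)
a = 1.4e-6        # sphere radius (m)
dt = 1e-3         # time step (s)
n_steps = 20000

h = 50e-9         # initial sphere-to-wall gap (m)
x = 0.0           # downstream position (m)
for _ in range(n_steps):
    x += shear * (h + a) * dt                                   # advection at the sphere center height
    h = abs(h + np.sqrt(2.0 * D * dt) * rng.standard_normal())  # Brownian step, reflected at the wall

# Recover the mean gap from the velocity-to-shear-rate ratio, as in the method
mean_gap = x / (n_steps * dt) / shear - a
print(mean_gap)
```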

  20. Determining Number Concentrations and Diameters of Polystyrene Particles by Measuring the Effective Refractive Index of Colloids Using Surface Plasmon Resonance.

    PubMed

    Tuoriniemi, Jani; Moreira, Beatriz; Safina, Gulnara

    2016-10-04

    The capabilities of surface plasmon resonance (SPR) for characterization of colloidal particles were evaluated for 100, 300, and 460 nm nominal diameter polystyrene (PS) latexes. First the accuracy of measuring the effective refractive index (n_eff) of turbid colloids using SPR was quantified. It was concluded that for submicrometer sized PS particles the accuracy is limited by the reproducibility between replicate injections of samples. An SPR method was developed for obtaining the particle mean diameter (d_part) and the particle number concentration (c_p) by fitting the measured n_eff of polystyrene (PS) colloids diluted in series with theoretical values calculated using the coherent scattering theory (CST). The d_part and c_p determined using SPR agreed with reference values obtained from size distributions measured by scanning electron microscopy (SEM), and the mass concentrations stated by the manufacturer. The 100 nm particles adsorbed on the sensing surface, which hampered the analysis. Once the adsorption problem has been overcome, the developed SPR method has potential to become a versatile tool for characterization of colloidal particles. In particular, SPR could form the basis of rapid and accurate methods for measuring the c_p of submicrometer particles in dispersion.

  1. We Can Have It All: Improved Surveillance Outcomes and Decreased Personnel Costs Associated With Electronic Reportable Disease Surveillance, North Carolina, 2010

    PubMed Central

    DiBiase, Lauren; Fangman, Mary T.; Fleischauer, Aaron T.; Waller, Anna E.; MacDonald, Pia D. M.

    2013-01-01

    Objectives. We assessed the timeliness, accuracy, and cost of a new electronic disease surveillance system at the local health department level. We describe practices associated with lower cost and better surveillance timeliness and accuracy. Methods. Interviews conducted May through August 2010 with local health department (LHD) staff at a simple random sample of 30 of 100 North Carolina counties provided information on surveillance practices and costs; we used surveillance system data to calculate timeliness and accuracy. We identified LHDs with best timeliness and accuracy and used these categories to compare surveillance practices and costs. Results. Local health departments in the top tertiles for surveillance timeliness and accuracy had a lower cost per case reported than LHDs with lower timeliness and accuracy ($71 and $124 per case reported, respectively; P = .03). Best surveillance practices fell into 2 domains: efficient use of the electronic surveillance system and use of surveillance data for local evaluation and program management. Conclusions. Timely and accurate surveillance can be achieved in the setting of restricted funding experienced by many LHDs. Adopting best surveillance practices may improve both efficiency and public health outcomes. PMID:24134385

  2. Phase retrieval and 3D imaging in gold nanoparticles based fluorescence microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev

    2017-02-01

    Optical sectioning microscopy can provide highly detailed three dimensional (3D) images of biological samples. However, it requires acquisition of many images per volume, and is therefore time consuming and may not be suitable for live cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a three dimensional (3D) cellular environment, based on image processing algorithms that can significantly increase the localization accuracy of the 3D point spread function relative to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
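
    The Gerchberg-Saxton algorithm alternates between two measurement planes, enforcing the measured amplitude in each while keeping the evolving phase. A minimal sketch of the classical two-plane version on synthetic data (the paper uses a modified variant and defocus propagation rather than a plain Fourier transform):

```python
import numpy as np

rng = np.random.default_rng(9)

def gerchberg_saxton(amp_in, amp_out, iters=200):
    """Classical two-plane Gerchberg-Saxton: recover the input-plane phase
    from amplitude measurements in two Fourier-conjugate planes."""
    field_in = amp_in * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_in.shape))
    errs = []
    for _ in range(iters):
        F = np.fft.fft2(field_in)
        errs.append(np.mean(np.abs(np.abs(F) - amp_out)))   # output-plane mismatch
        F = amp_out * np.exp(1j * np.angle(F))              # enforce measured output amplitude
        field_in = amp_in * np.exp(1j * np.angle(np.fft.ifft2(F)))  # enforce input amplitude
    return np.angle(field_in), errs

# Synthetic test case with a known smooth phase
n = 64
u = np.linspace(-1, 1, n)
true_phase = np.pi * np.outer(u, u)
amp_in = np.ones((n, n))
amp_out = np.abs(np.fft.fft2(amp_in * np.exp(1j * true_phase)))
rec_phase, errs = gerchberg_saxton(amp_in, amp_out)
print(errs[0], errs[-1])     # the amplitude mismatch shrinks over iterations
```

    The record's point is that applying such retrieval to point-like nanoparticle emitters removes the usual stagnation and twin-image ambiguities of the plain algorithm.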

  3. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, based on a machine learning method that evaluates local iris quality. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the weights of the different tracks. The experimental results, based on subsets of three public and one private iris image databases, demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image for an iris recognition system, in several respects. (2) The proposed quality evaluation algorithm is self-adaptive and can automatically optimize its parameters according to the characteristics of the iris image samples. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system. PMID:24693243

  4. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, based on a machine learning method that evaluates local iris quality. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the weights of the different tracks. The experimental results, based on subsets of three public and one private iris image databases, demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image for an iris recognition system, in several respects. (2) The proposed quality evaluation algorithm is self-adaptive and can automatically optimize its parameters according to the characteristics of the iris image samples. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system.

  5. Localization of a variational particle smoother

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Hodyss, D.; Poterjoy, J.

    2017-12-01

    Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight-localization. We test varPS on the Lorenz'96 model of dimensions n = 40, n = 400, and n = 2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.

  6. Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.

    PubMed

    Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael

    2015-08-01

    In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. The method combines an HSA with BAMS to segment the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) followed by FMRIB's automated segmentation tool (FAST), and to four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and the Hausdorff distance were used to measure segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). Source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the HSA variants. They also show that it leads to higher source localization accuracy than the commonly used reference method, and suggest that it has potential as a surrogate for expert manual segmentation in the EEG source localization problem.
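
    The Dice index used in the direct evaluation has a simple closed form, 2|A∩B| / (|A|+|B|). A minimal sketch on two hypothetical binary masks (the masks below are made up for illustration):

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity coefficient between two binary label masks:
    2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0.0 none."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical 2D tissue masks, 16 voxels each, overlapping in a 3x3 patch.
auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True
ref  = np.zeros((8, 8), bool); ref[3:7, 3:7] = True
print(dice_index(auto, ref))  # 2*9/(16+16) = 0.5625
```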

  7. Settling velocity and preferential concentration of heavy particles under two-way coupling effects in homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Monchaux, R.; Dejoan, A.

    2017-10-01

    The settling velocity of inertial particles falling in homogeneous turbulence is investigated by making use of direct numerical simulation (DNS) at moderate Reynolds number that includes momentum exchange between the two phases (two-way coupling approach). Effects of particle volume fraction, particle inertia, and gravity are presented for flow and particle parameters similar to the experiments of Aliseda et al. [J. Fluid Mech. 468, 77 (2002), 10.1017/S0022112002001593]. Good agreement is obtained between the DNS and the experiments for the settling velocity statistics, both when averaged overall and when conditioned on the local particle concentration. Both DNS and experiments show that the settling velocity increases further with increasing volume fraction and local concentration. At the considered particle loading, the effects of two-way coupling are negligible on the mean statistics of turbulence. Nevertheless, the DNS results show that fluid quantities are locally altered by the particles. In particular, the conditional average of the slip velocity on the local particle concentration shows that the main contribution to the settling enhancement results from the increase of the fluid velocity surrounding the particles along the gravitational direction, induced by the collective particle back-reaction force. Particles and the surrounding fluid are observed to fall together, which in turn amplifies the sampling of downward fluid motion by the particles. Effects of two-way coupling on preferential concentration are also reported. Increasing both the volume fraction and gravity is shown to lower the preferential concentration of small-inertia particles, while the reverse tendency is observed for large-inertia particles. This behavior is found to be related to an attenuation of the centrifuge effect and to an increase of particle accumulation along the gravity direction as particle loading and gravity become large.

  8. Uniform and Janus-like nanoparticles in contact with vesicles: energy landscapes and curvature-induced forces.

    PubMed

    Agudo-Canalejo, Jaime; Lipowsky, Reinhard

    2017-03-15

    Biological membranes and lipid vesicles often display complex shapes with non-uniform membrane curvature. When adhesive nanoparticles with chemically uniform surfaces come into contact with such membranes, they exhibit four different engulfment regimes as recently shown by a systematic stability analysis. Depending on the local curvature of the membrane, the particles either remain free, become partially or completely engulfed by the membrane, or display bistability between free and completely engulfed states. Here, we go beyond stability analysis and develop an analytical theory to leading order in the ratio of particle-to-vesicle size. This theory allows us to determine the local and global energy landscapes of uniform nanoparticles that are attracted towards membranes and vesicles. While the local energy landscape depends only on the local curvature of the vesicle membrane and not on the overall membrane shape, the global energy landscape describes the variation of the equilibrium state of the particle as it probes different points along the membrane surface. In particular, we find that the binding energy of a partially engulfed particle depends on the 'unperturbed' local curvature of the membrane in the absence of the particle. This curvature dependence leads to local forces that pull the partially engulfed particles towards membrane segments with lower and higher mean curvature if the particles originate from the exterior and interior solution, respectively, corresponding to endo- and exocytosis. Thus, for partial engulfment, endocytic particles undergo biased diffusion towards the membrane segments with the lowest membrane curvature, whereas exocytic particles move towards segments with the highest curvature. The curvature-induced forces are also effective for Janus particles with one adhesive and one non-adhesive surface domain. 
In fact, Janus particles with a strongly adhesive surface domain are always partially engulfed which implies that they provide convenient probes for experimental studies of the curvature-induced forces that arise for complex-shaped membranes.

  9. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  10. Increased accuracy of ligand sensing by receptor diffusion on cell surface

    NASA Astrophysics Data System (ADS)

    Aquino, Gerardo; Endres, Robert G.

    2010-10-01

    The physical limit with which a cell senses external ligand concentration corresponds to the perfect absorber, where all ligand particles are absorbed and overcounting of the same ligand particles does not occur. Here, we analyze how the lateral diffusion of receptors on the cell membrane affects the accuracy of sensing ligand concentration. Specifically, we connect our modeling to neurotransmission in neural synapses, where the diffusion of glutamate receptors is already known to refresh synaptic connections. We find that receptor diffusion indeed increases the accuracy of sensing for both the glutamate α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartic acid (NMDA) receptors, although the NMDA receptor is overall much noisier. We propose that the difference in sensing accuracy of the two receptors can be linked to their different roles in neurotransmission. Specifically, the high accuracy in sensing glutamate is essential for the AMPA receptor to start membrane depolarization, while the NMDA receptor is believed to work in a second stage as a coincidence detector, involved in long-term potentiation and memory.

  11. An information-theoretic approach to designing the plane spacing for multifocal plane microscopy

    PubMed Central

    Tahmasbi, Amir; Ram, Sripad; Chao, Jerry; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2015-01-01

    Multifocal plane microscopy (MUM) is a 3D imaging modality which enables the localization and tracking of single molecules at high spatial and temporal resolution by simultaneously imaging distinct focal planes within the sample. MUM overcomes the depth discrimination problem of conventional microscopy and allows high accuracy localization of a single molecule in 3D along the z-axis. An important question in the design of MUM experiments concerns the appropriate number of focal planes and their spacings to achieve the best possible 3D localization accuracy along the z-axis. Ideally, it is desired to obtain a 3D localization accuracy that is uniform over a large depth and has small numerical values, which guarantee that the single molecule is continuously detectable. Here, we address this concern by developing a plane spacing design strategy based on the Fisher information. In particular, we analyze the Fisher information matrix for the 3D localization problem along the z-axis and propose spacing scenarios termed the strong coupling and the weak coupling spacings, which provide appropriate 3D localization accuracies. Using these spacing scenarios, we investigate the detectability of the single molecule along the z-axis and study the effect of changing the number of focal planes on the 3D localization accuracy. We further review a software module we recently introduced, the MUMDesignTool, that helps to design the plane spacings for a MUM setup. PMID:26113764
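
    The design principle behind the plane spacing can be illustrated numerically: Fisher information from independent focal planes adds, and the best achievable localization accuracy is the Cramér-Rao bound 1/√I. The Lorentzian per-plane information profile and all numbers below are assumptions for illustration, not the paper's imaging model:

```python
import numpy as np

def plane_info(z, z_plane, peak=400.0, width=0.3):
    """Hypothetical Fisher information (about axial position z, in µm)
    contributed by one focal plane: peaks when the molecule is in focus."""
    return peak / (1.0 + ((z - z_plane) / width) ** 2)

def z_accuracy(z, plane_positions):
    # Information from independent focal planes adds; the limiting
    # accuracy (Cramér-Rao bound) is 1/sqrt(total information).
    total = sum(plane_info(z, zp) for zp in plane_positions)
    return 1.0 / np.sqrt(total)

z = np.linspace(-1.0, 1.0, 201)
coincident = z_accuracy(z, [0.0, 0.0])    # both planes at the same depth
spaced     = z_accuracy(z, [-0.4, 0.4])   # planes spaced apart
# Spacing the planes flattens the accuracy profile over depth.
print(coincident.max() / coincident.min(), spaced.max() / spaced.min())
```

    The spaced configuration trades a slightly worse best-case accuracy for a much more uniform accuracy over the depth range, which is the qualitative trade-off the paper's spacing scenarios formalize.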

  12. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle using the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis shows that the high-order processed method allows a larger time step size in numerical integrations.

  13. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loopings. Here, three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, while the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multiple cores on a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10,000 simulation particles per cell). 
    These accelerating approaches to PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach attains a very favorable improvement in cost without sacrificing computational accuracy.
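
    The acceptance-rejection step under a majorant rate can be sketched as follows: candidate pairs are drawn uniformly and accepted with probability K(i, j)/K_maj, so only a single loop over particles is ever needed. The kernel form, the majorant bound, and the uniform candidate sampling below are illustrative choices, not the paper's differentially-weighted scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def brownian_kernel(v1, v2):
    """Toy free-molecule-style coagulation kernel (illustrative form only)."""
    return (v1 ** (1 / 3) + v2 ** (1 / 3)) ** 2 * np.sqrt(1 / v1 + 1 / v2)

def pick_pair(volumes):
    """Acceptance-rejection selection of a coagulating pair: draw a
    candidate pair uniformly, accept with probability K(i, j) / K_maj."""
    n = len(volumes)
    vmin, vmax = volumes.min(), volumes.max()
    # Majorant: (v1^(1/3)+v2^(1/3))^2 <= (2 vmax^(1/3))^2 and
    # sqrt(1/v1 + 1/v2) <= sqrt(2/vmin), so k_maj bounds K for every pair.
    k_maj = (2.0 * vmax ** (1 / 3)) ** 2 * np.sqrt(2.0 / vmin)
    while True:
        i, j = rng.integers(n), rng.integers(n)
        if i == j:
            continue
        if rng.random() < brownian_kernel(volumes[i], volumes[j]) / k_maj:
            return i, j

vols = rng.uniform(1.0, 10.0, size=200)   # hypothetical particle volumes
total0 = vols.sum()
i, j = pick_pair(vols)
vols[i] += vols[j]                        # coagulate: total volume conserved
vols = np.delete(vols, j)
print(len(vols))
```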

  14. Using Light Scattering to Track, Characterize and Manipulate Colloids

    NASA Astrophysics Data System (ADS)

    van Oostrum, P. D. J.

    2011-03-01

    A new technique is developed to analyze in-line digital holographic microscopy images, making it possible to characterize and track colloidal particles in three dimensions at unprecedented accuracy. We took digital snapshots of the interference pattern between the light scattered by micrometer-sized particles and the unaltered portion of a laser beam used to illuminate dilute colloidal dispersions on a light microscope in transmission mode. We numerically fit Mie theory for the light scattering by micrometer-sized particles to these experimental in-line holograms. The fitted values give the position in three dimensions with an accuracy of a few nanometers in the lateral directions and several tens of nanometers in the axial direction. The individual particles' radii and refractive indices could be determined to within tens of nanometers and a few hundredths, respectively. By using a fast CCD camera, we can track particles with millisecond time resolution, which allows us to study dynamical properties such as the hydrodynamic radius and the sedimentation coefficient. The scattering behavior of the particles that we use to track and characterize colloids also makes it possible to exert pico-Newton forces on them close to a diffraction-limited focus. When these effects are used to confine colloids in space, the technique is called optical tweezers. Both by numerical calculations and by experiments, we explore the possibilities of optical tweezers in soft condensed matter research. Using optical tweezers, we placed multiple particles in interesting configurations to measure the interaction forces between them; the interaction forces were Yukawa-like screened charge repulsions. By carefully timing the blinking of time-shared optical tweezers and the recording of holographic snapshots, we were able to measure interaction forces with femto-Newton accuracy from an analysis of (driven) Brownian motion. 
    Forces exerted by external fields such as electric fields and gravity were measured as well. We induced electric dipoles in colloidal particles by applying radio-frequency electric fields. Dipole-induced strings of particles were formed and made permanent by van der Waals attractions or thermal annealing. Such colloidal strings form colloidal analogues of charged and uncharged (bio)polymers. The diffusion and bending behavior of these strings was probed using DHM and optical tweezers.
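
    The fitting idea, minimizing the squared difference between a parametric scattering model and the recorded hologram over the particle's position, can be sketched with a stand-in fringe model. The thesis fits full Mie theory with iterative optimization; the model function, brute-force grid search, and all parameter values below are assumptions made only to show the least-squares structure:

```python
import numpy as np

def model(params, X, Y):
    """Stand-in radial-fringe pattern (NOT Mie theory): an in-line
    hologram-like intensity with Fresnel-type rings around (x0, y0),
    whose ring spacing depends on the axial distance z."""
    x0, y0, z = params
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    return 1.0 + 0.5 * np.cos(np.pi * r2 / z)

X, Y = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
truth = (0.7, -1.2, 8.0)                 # hypothetical particle position
rng = np.random.default_rng(3)
holo = model(truth, X, Y) + 0.02 * rng.standard_normal(X.shape)

# Brute-force least-squares fit over a small parameter grid (a real fit
# would refine the full scattering model with a nonlinear optimizer).
best, best_sse = None, np.inf
for x0 in np.linspace(0.5, 0.9, 9):
    for y0 in np.linspace(-1.4, -1.0, 9):
        for z in np.linspace(7.0, 9.0, 21):
            sse = np.sum((model((x0, y0, z), X, Y) - holo) ** 2)
            if sse < best_sse:
                best, best_sse = (x0, y0, z), sse
print(best)
```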

  15. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique

    PubMed Central

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J.

    2017-01-01

    The increased potential and effectiveness of real-time locating systems (RTLSs) substantially broaden their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistics and security applications. This research aims to develop an analytical method to customize UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established for a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' (LS) relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of an RTLS's localization uncertainty in terms of accuracy and precision, and can be useful for performance improvement. The research shows that AoA localization in a 3D indoor space, applying the simple angular-based localization algorithm and the correction vector, improves localization accuracy and precision to the point where the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision. PMID:28125056
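
    A generic least-squares AoA solver illustrates the underlying geometry: each sensor contributes a bearing line, and the position estimate minimizes the summed squared distance to all lines. The sensor layout and angles below are hypothetical, and this is a textbook solver, not the paper's customized algorithm:

```python
import numpy as np

def aoa_to_direction(azimuth, elevation):
    """Unit bearing vector from azimuth/elevation angles (radians)."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

def localize(sensors, directions):
    """Least-squares intersection of 3D bearing lines: minimizes the sum
    of squared distances from x to each line p_i + t*d_i by solving
    [sum_i (I - d_i d_i^T)] x = sum_i (I - d_i d_i^T) p_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(sensors, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the bearing
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

target = np.array([2.0, 3.0, 1.5])                       # hypothetical tag
sensors = np.array([[0, 0, 3], [10, 0, 3], [0, 10, 3], [10, 10, 3]], float)
dirs = [(target - p) / np.linalg.norm(target - p) for p in sensors]
est = localize(sensors, dirs)
print(est.round(6))
```

    With noise-free bearings the estimate recovers the target exactly; angular noise at each sensor is what the paper's uncertainty model and correction vector address.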

  16. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique.

    PubMed

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J

    2017-01-25

    The increased potential and effectiveness of real-time locating systems (RTLSs) substantially broaden their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistics and security applications. This research aims to develop an analytical method to customize UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established for a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' (LS) relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of an RTLS's localization uncertainty in terms of accuracy and precision, and can be useful for performance improvement. The research shows that AoA localization in a 3D indoor space, applying the simple angular-based localization algorithm and the correction vector, improves localization accuracy and precision to the point where the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision.

  17. Clinical Study of Orthogonal-View Phase-Matched Digital Tomosynthesis for Lung Tumor Localization.

    PubMed

    Zhang, You; Ren, Lei; Vergalasova, Irina; Yin, Fang-Fang

    2017-01-01

    Compared to cone-beam computed tomography, digital tomosynthesis imaging has the benefits of shorter scanning time, less imaging dose, and better mechanical clearance for tumor localization in radiation therapy. However, for lung tumors, the localization accuracy of the conventional digital tomosynthesis technique is affected by the lack of depth information and the existence of lung tumor motion. This study investigates the clinical feasibility of using an orthogonal-view phase-matched digital tomosynthesis technique to improve the accuracy of lung tumor localization. The proposed orthogonal-view phase-matched digital tomosynthesis technique benefits from 2 major features: (1) it acquires orthogonal-view projections to improve the depth information in reconstructed digital tomosynthesis images and (2) it applies respiratory phase-matching to incorporate patient motion information into the synthesized reference digital tomosynthesis sets, which helps to improve the localization accuracy of moving lung tumors. A retrospective study enrolling 14 patients was performed to evaluate the accuracy of the orthogonal-view phase-matched digital tomosynthesis technique. Phantom studies were also performed using an anthropomorphic phantom to investigate the feasibility of using intratreatment aggregated kV and beams' eye view cine MV projections for orthogonal-view phase-matched digital tomosynthesis imaging. The localization accuracy of the orthogonal-view phase-matched digital tomosynthesis technique was compared to that of the single-view digital tomosynthesis techniques and the digital tomosynthesis techniques without phase-matching. The orthogonal-view phase-matched digital tomosynthesis technique outperforms the other digital tomosynthesis techniques in tumor localization accuracy for both the patient study and the phantom study. 
For the patient study, the orthogonal-view phase-matched digital tomosynthesis technique localizes the tumor to an average (± standard deviation) error of 1.8 (0.7) mm for a 30° total scan angle. For the phantom study using aggregated kV-MV projections, the orthogonal-view phase-matched digital tomosynthesis localizes the tumor to an average error within 1 mm for varying magnitudes of scan angles. The pilot clinical study shows that the orthogonal-view phase-matched digital tomosynthesis technique enables fast and accurate localization of moving lung tumors.

  18. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been shown to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in an actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained from symbol estimation. Second, since optimizing the cost function is a nonlinear integer optimization problem, an improved particle swarm optimization (PSO) algorithm is used for the optimal symbol search. Simulations are carried out to show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm improves the efficiency of the symbol search by nearly one hundred times while achieving a globally optimal solution.

  19. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  20. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingyu; Samulyak, Roman, E-mail: roman.samulyak@stonybrook.edu; Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  1. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  2. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions in different parts of the domain, utilizing a particle-splitting algorithm for the finer resolution and a particle-merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries in the multi-resolution LSMPS method. A wall is represented by polygons for efficient simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. The accuracy of the proposed method was verified by conducting simulations of channel flows and free-surface flows.
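
    Mass- and momentum-conserving splitting and merging, the core bookkeeping of any such multi-resolution particle scheme, can be sketched in a few lines. The symmetric child placement and equal child masses below are illustrative assumptions, not the paper's LSMPS-specific pattern:

```python
import numpy as np

def split_particle(pos, vel, mass, n_child=4, spacing=0.25):
    """Split one particle into n_child children placed symmetrically
    around the parent, conserving total mass and momentum."""
    angles = 2 * np.pi * np.arange(n_child) / n_child
    offsets = spacing * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    child_pos = pos + offsets                     # ring around the parent
    child_vel = np.tile(vel, (n_child, 1))        # children inherit velocity
    child_mass = np.full(n_child, mass / n_child)
    return child_pos, child_vel, child_mass

def merge_particles(pos, vel, mass):
    """Merge several particles into one at their center of mass,
    conserving total mass and momentum."""
    m = mass.sum()
    return (mass @ pos) / m, (mass @ vel) / m, m

p, v, m = split_particle(np.array([1.0, 2.0]), np.array([0.5, -0.3]), 4.0)
mp, mv, mm = merge_particles(p, v, m)
print(mm, mp.round(9), mv.round(9))  # merging the children recovers the parent
```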

  3. Lattice Boltzmann model capable of mesoscopic vorticity computation

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM), which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation-time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show that, with enough degrees of freedom and appropriate modifications, mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with a single relaxation parameter.

  4. Axial Colocalization of Single Molecules with Nanometer Accuracy Using Metal-Induced Energy Transfer.

    PubMed

    Isbaner, Sebastian; Karedla, Narain; Kaminska, Izabela; Ruhlandt, Daja; Raab, Mario; Bohlen, Johann; Chizhik, Alexey; Gregor, Ingo; Tinnefeld, Philip; Enderlein, Jörg; Tsukanov, Roman

    2018-04-11

    Single-molecule localization based super-resolution microscopy has revolutionized optical microscopy and routinely allows for resolving structural details down to a few nanometers. However, there exists a rather large discrepancy between lateral and axial localization accuracy, the latter typically three to five times worse than the former. Here, we use single-molecule metal-induced energy transfer (smMIET) to localize single molecules along the optical axis, and to measure their axial distance with an accuracy of 5 nm. smMIET relies only on fluorescence lifetime measurements and does not require additional complex optical setups.

  5. A Hybrid Indoor Localization and Navigation System with Map Matching for Pedestrians Using Smartphones.

    PubMed

    Tian, Qinglin; Salcic, Zoran; Wang, Kevin I-Kai; Pan, Yun

    2015-12-05

    Pedestrian dead reckoning is a common technique applied in indoor inertial navigation systems that is able to provide accurate tracking performance within short distances. Sensor drift is the main bottleneck in extending the system to long-distance and long-term tracking. In this paper, a hybrid system is proposed that integrates traditional pedestrian dead reckoning based on inertial measurement units, short-range radio frequency systems and particle filter map matching. The system is a drift-free pedestrian navigation system in which position error and sensor drift are regularly corrected, and it is able to provide long-term accurate and reliable tracking. Moreover, the whole system is implemented on a commercial off-the-shelf smartphone and achieves real-time positioning and tracking performance with satisfactory accuracy.
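    The core pedestrian dead-reckoning update described here advances the position by a detected step length along the current heading. A minimal sketch of that update (not the paper's implementation; the heading convention is an assumption):

```python
import math

def pdr_update(x, y, heading_rad, step_length):
    """One dead-reckoning step: advance the position by step_length along
    the heading (0 rad = north/+y, increasing clockwise toward +x)."""
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))
```

    In a full system each step event (from accelerometer peak detection) triggers one such update, and the accumulated drift is periodically corrected by the radio-frequency fixes and map matching described in the abstract.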

  6. Swarm-wavelet based extreme learning machine for finger movement classification on transradial amputees.

    PubMed

    Anam, Khairul; Al-Jumaily, Adel

    2014-01-01

    Using a small number of surface electromyography (EMG) channels for myoelectric control in transradial amputees is a major challenge. This paper proposes a pattern recognition system using an extreme learning machine (ELM) optimized by particle swarm optimization (PSO). The PSO is mutated by a wavelet function to avoid being trapped in local minima. The proposed system is used to classify eleven imagined finger motions in five amputees using only two EMG channels. The optimal performance of wavelet-PSO was compared to a grid-search method and standard PSO. The experimental results show that the proposed system is the most accurate of the tested classifiers: it could classify the 11 finger motions with an average accuracy of about 94% across the five amputees.
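    The wavelet-mutated PSO idea — standard global-best PSO plus an occasional perturbation shaped by a Morlet-type mother wavelet to escape local minima — can be sketched as below. This is an illustrative sketch on a toy objective, not the authors' classifier pipeline; the constriction coefficients and mutation scale are assumptions.

```python
import numpy as np

def wavelet_pso(f, dim=2, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimize f with global-best PSO plus a wavelet-shaped mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Constriction-style velocity update (Clerc coefficients)
        v = 0.7298 * v + 1.49618 * r1 * (pbest - x) + 1.49618 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # Wavelet mutation: perturb one random particle by a Morlet-shaped step
        i = rng.integers(n_particles)
        phi = rng.uniform(-2.5, 2.5)
        step = np.exp(-phi ** 2 / 2.0) * np.cos(5.0 * phi)  # Morlet mother wavelet
        x[i] = np.clip(x[i] + 0.1 * (hi - lo) * step, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pval
        pbest[improved], pval[improved] = x[improved], vals[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())
```

    On a smooth objective such as the sphere function the swarm converges to near the global minimum within a few dozen iterations, while the mutation keeps injecting diversity.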

  7. Localization and physical property experiments conducted by opportunity at Meridiani Planum

    USGS Publications Warehouse

    Arvidson, R. E.; Anderson, R.C.; Bartlett, P.; Bell, J.F.; Christensen, P.R.; Chu, P.; Davis, K.; Ehlmann, B.L.; Golombek, M.P.; Gorevan, S.; Guinness, E.A.; Haldemann, A.F.C.; Herkenhoff, K. E.; Landis, G.; Li, R.; Lindemann, R.; Ming, D. W.; Myrick, T.; Parker, T.; Richter, L.; Seelos, F.P.; Soderblom, L.A.; Squyres, S. W.; Sullivan, R.J.; Wilson, Jim

    2004-01-01

    The location of the Opportunity landing site was determined to better than 10-m absolute accuracy from analyses of radio tracking data. We determined Rover locations during traverses with an error as small as several centimeters using engineering telemetry and overlapping images. Topographic profiles generated from rover data show that the plains are very smooth from meter- to centimeter-length scales, consistent with analyses of orbital observations. Solar cell output decreased because of the deposition of airborne dust on the panels. The lack of dust-covered surfaces on Meridiani Planum indicates that high velocity winds must remove this material on a continuing basis. The low mechanical strength of the evaporitic rocks as determined from grinding experiments, and the abundance of coarse-grained surface particles argue for differential erosion of Meridiani Planum.

  8. Localization and physical property experiments conducted by Opportunity at Meridiani Planum.

    PubMed

    Arvidson, R E; Anderson, R C; Bartlett, P; Bell, J F; Christensen, P R; Chu, P; Davis, K; Ehlmann, B L; Golombek, M P; Gorevan, S; Guinness, E A; Haldemann, A F C; Herkenhoff, K E; Landis, G; Li, R; Lindemann, R; Ming, D W; Myrick, T; Parker, T; Richter, L; Seelos, F P; Soderblom, L A; Squyres, S W; Sullivan, R J; Wilson, J

    2004-12-03

    The location of the Opportunity landing site was determined to better than 10-m absolute accuracy from analyses of radio tracking data. We determined Rover locations during traverses with an error as small as several centimeters using engineering telemetry and overlapping images. Topographic profiles generated from rover data show that the plains are very smooth from meter- to centimeter-length scales, consistent with analyses of orbital observations. Solar cell output decreased because of the deposition of airborne dust on the panels. The lack of dust-covered surfaces on Meridiani Planum indicates that high velocity winds must remove this material on a continuing basis. The low mechanical strength of the evaporitic rocks as determined from grinding experiments, and the abundance of coarse-grained surface particles argue for differential erosion of Meridiani Planum.

  9. Expansion moments for the local field distribution that involve the three-particle distribution function

    NASA Astrophysics Data System (ADS)

    Attard, Phil

    The second moment of the Lennard-Jones local field distribution in a hard-sphere fluid is evaluated using the PY3 three-particle distribution function. An approximation due to Lado that avoids the explicit calculation of the latter is shown to be accurate. Partial results are also given for certain cavity-hard-sphere radial distribution functions that occur in a closest particle expansion for the local field.

  10. Spatially Localized Particle Energization by Landau Damping in Current Sheets

    NASA Astrophysics Data System (ADS)

    Howes, G. G.; Klein, K. G.; McCubbin, A. J.

    2017-12-01

    Understanding the mechanisms of particle energization through the removal of energy from turbulent fluctuations in heliospheric plasmas is a grand challenge problem in heliophysics. Under the weakly collisional conditions typical of heliospheric plasma, kinetic mechanisms must be responsible for this energization, but the nature of those mechanisms remains elusive. In recent years, the spatial localization of plasma heating near current sheets, observed in the solar wind and in numerical simulations, has gained much attention. Here we show, using the novel field-particle correlation technique, that the spatially localized particle energization occurring in a nonlinear gyrokinetic simulation has the velocity-space signature of Landau damping, suggesting that this well-known collisionless damping mechanism indeed actively leads to spatially localized heating in the vicinity of current sheets.

  11. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory is applied to a thin plate. A simulation shows that the localization accuracy of beamforming depends on multiple wave modes, dispersion, velocity and array dimension. In order to reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method is introduced that combines plate wave theory and the wavelet packet transform, and a revised localization velocity is presented to reduce the effect of array size. The localization accuracy of standard beamforming and of the improved method of the present paper are compared in a rubbing test carried out on a rotating machinery test table. The results indicate that the improved method can localize rub faults effectively. Copyright © 2013 Elsevier B.V. All rights reserved.
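    The underlying delay-and-sum beamforming principle — steer the sensor array to each candidate source point by time-advancing every channel by its propagation delay, then pick the point where the channels sum most coherently — can be sketched as below. This is a generic single-velocity sketch, not the paper's dispersion-corrected method; geometry and wave speed are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(signals, sensors, grid, fs, c):
    """Steered-power map over candidate source points.

    signals: (n_sensors, n_samples) recorded waveforms
    sensors, grid: (n, 2) coordinates; fs: sample rate; c: wave speed.
    """
    n_samples = signals.shape[1]
    t = np.arange(n_samples) / fs
    power = np.empty(len(grid))
    for g, p in enumerate(grid):
        delays = np.linalg.norm(sensors - p, axis=1) / c
        summed = np.zeros(n_samples)
        for s in range(len(sensors)):
            # Advance channel s by its delay (interpolated time shift)
            summed += np.interp(t + delays[s], t, signals[s], left=0.0, right=0.0)
        power[g] = np.sum(summed ** 2)
    return power
```

    Steering to the true source aligns all channels, so the summed energy peaks there; elsewhere the pulses add incoherently.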

  12. A unified gas-kinetic scheme for continuum and rarefied flows IV: Full Boltzmann and model equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chang, E-mail: cliuaa@ust.hk; Xu, Kun, E-mail: makxu@ust.hk; Sun, Quanhua, E-mail: qsun@imech.ac.cn

    Fluid dynamic equations are valid in their respective modeling scales, such as the particle mean free path scale of the Boltzmann equation and the hydrodynamic scale of the Navier–Stokes (NS) equations. As the modeling scale varies, there should theoretically be a continuous spectrum of fluid dynamic equations. Even though the Boltzmann equation is claimed to be valid at all scales, many Boltzmann solvers, including the direct simulation Monte Carlo method, require the cell resolution to be of the order of the particle mean free path; they are therefore still single-scale methods. In order to study multiscale flow evolution efficiently, the dynamics in the computational fluid has to change with the scale. A direct modeling of flow physics with a changeable scale may be an appropriate approach. The unified gas-kinetic scheme (UGKS) is a direct modeling method at the mesh size scale, and its underlying flow physics depends on the resolution of the cell size relative to the particle mean free path. The cell size of UGKS is not limited by the particle mean free path. With the variation of the ratio between the numerical cell size and the local particle mean free path, the UGKS recovers flow dynamics from particle transport and collision in the kinetic scale to wave propagation in the hydrodynamic scale. The previous UGKS is mostly constructed from the evolution solution of kinetic model equations. Even though the UGKS is very accurate and effective in the low transition and continuum flow regimes, with the time step being much larger than the particle mean free time, there is still room to develop a more accurate flow solver in the regime where the time step is comparable to the local particle mean free time. At such a scale, the dynamics of the full Boltzmann collision term and of the model equations differ.
    This work concerns the further development of the UGKS with the implementation of the full Boltzmann collision term in the region where it is needed. The central ingredient of the UGKS is the coupled treatment of particle transport and collision in the flux evaluation across a cell interface, where continuous flow dynamics from kinetic to hydrodynamic scales is modeled. The newly developed UGKS has the asymptotic preserving (AP) property of recovering the NS solutions in the continuum flow regime, and the full Boltzmann solution in the rarefied regime. In the mostly unexplored transition regime, the UGKS itself provides a valuable tool for the study of non-equilibrium flow. The mathematical properties of the scheme, such as stability, accuracy, and asymptotic preserving, are analyzed in this paper as well.

  13. Water Flow Simulation using Smoothed Particle Hydrodynamics (SPH)

    NASA Technical Reports Server (NTRS)

    Vu, Bruce; Berg, Jared; Harris, Michael F.

    2014-01-01

    Simulation of water flow from the rainbird nozzles has been accomplished using the Smoothed Particle Hydrodynamics (SPH). The advantage of using SPH is that no meshing is required, thus the grid quality is no longer an issue and accuracy can be improved.
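    The meshfree character of SPH rests on estimating field quantities as kernel-weighted sums over neighboring particles, with no grid at all. A minimal 1D sketch of the standard cubic-spline kernel and the SPH density summation (an illustration of the general method, not the rainbird-nozzle simulation code):

```python
import numpy as np

def w_cubic_1d(r, h):
    """1D cubic-spline smoothing kernel with support 2h (normalization 2/(3h))."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_density(x, m, h):
    """Meshfree density estimate: rho_i = sum_j m_j W(x_i - x_j, h)."""
    return np.array([np.sum(m * w_cubic_1d(x - xi, h)) for xi in x])
```

    For uniformly spaced particles of mass rho0*dx, the summation recovers the density rho0 in the interior of the domain to within a few percent, which is why no mesh or grid-quality considerations enter the method.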

  14. [The underwater and airborne horizontal localization of sound by the northern fur seal].

    PubMed

    Babushina, E S; Poliakov, M A

    2004-01-01

    The accuracy of underwater and airborne horizontal localization of different acoustic signals by the northern fur seal was investigated by the method of instrumental conditioned reflexes with food reinforcement. For pure-tone pulsed signals in the frequency range of 0.5-25 kHz, the minimum angles of sound localization at 75% correct responses corresponded to a sound transducer azimuth of 6.5-7.5 ± 0.1-0.4 degrees underwater (at impulse durations of 3-90 ms) and of 3.5-5.5 ± 0.05-0.5 degrees in air (at impulse durations of 3-160 ms). The source of pulsed noise signals (3 ms duration) was localized with an accuracy of 3.0 ± 0.2 degrees underwater. The source of continuous (1 s duration) narrow-band (10% of center frequency) noise signals was localized in air with an accuracy of 2-5 ± 0.02-0.4 degrees, and that of continuous broadband (1-20 kHz) noise with an accuracy of 4.5 ± 0.2 degrees.

  15. Resonance-Based Detection of Magnetic Nanoparticles and Microbeads Using Nanopatterned Ferromagnets

    NASA Astrophysics Data System (ADS)

    Sushruth, Manu; Ding, Junjia; Duczynski, Jeremy; Woodward, Robert C.; Begley, Ryan A.; Fangohr, Hans; Fuller, Rebecca O.; Adeyeye, Adekunle O.; Kostylev, Mikhail; Metaxas, Peter J.

    2016-10-01

    Biosensing with ferromagnet-based magnetoresistive devices has been dominated by electrical detection of particle-induced changes to a device's (quasi-)static magnetic configuration. There are however potential advantages to be gained from using field dependent, high frequency resonant magnetization dynamics for magnetic particle detection. Here, we demonstrate the use of nanoconfined ferromagnetic resonances in periodically nanopatterned magnetic films for the detection of adsorbed magnetic particles having diameters ranging from 6 nm to 4 μm. The nanopatterned films contain arrays of holes which appear to act as preferential adsorption sites for small particles. Hole-localized particles act in unison to shift the frequencies of the patterned layer's ferromagnetic-resonance modes, with shift polarities determined by the localization of each mode within the nanopattern's repeating unit cell. The same polarity shifts are observed for a large range of coverages, even when quasicontinuous particle sheets form above the hole-localized particles. For large particles, preferential adsorption no longer occurs, leading to resonance shifts with polarities that are independent of the mode localization, and amplitudes that are comparable to those seen in continuous layers. Indeed, for nanoparticles adsorbed onto a continuous layer, the particle-induced shift of the layer's fundamental mode is up to 10 times less than that observed for nanoconfined modes in the nanopatterned systems, the low shift being induced by relatively weak fields emanating beyond the particle in the direction of the static applied field. This result highlights the importance of having particles consistently positioned in the close vicinity of confined modes.

  16. The effect of transponder motion on the accuracy of the Calypso Electromagnetic localization system.

    PubMed

    Murphy, Martin J; Eidens, Richard; Vertatschitsch, Edward; Wright, J Nelson

    2008-09-01

    The purpose of this study was to determine position- and velocity-dependent effects on the overall accuracy of the Calypso Electromagnetic localization system, under conditions that emulate transponder motion during normal free breathing. Three localization transponders were mounted on a remote-controlled turntable that could move the transponders along a circular trajectory at speeds up to 3 cm/s. A stationary calibration established the coordinates of multiple points on each transponder's circular path. Position measurements taken while the transponders were in motion at a constant speed were then compared with the stationary coordinates. No statistically significant changes in the transponder positions in (x,y,z) were detected when the transponders were in motion. The accuracy of the localization system is unaffected by transponder motion.

  17. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
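    Tikhonov regularization with a second-order differential regularization matrix, as chosen in this abstract, amounts to penalizing the curvature of the retrieved distribution. A minimal sketch (generic Tikhonov solver, not the authors' multi-wavelength kernel optimization; the matrix sizes are illustrative):

```python
import numpy as np

def second_diff_matrix(n):
    """Second-order finite-difference operator L of shape (n-2, n)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations,
    with L the second-order difference matrix (smoothness prior)."""
    L = second_diff_matrix(A.shape[1])
    lhs = A.T @ A + lam ** 2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)
```

    With noise-free data and a vanishing regularization weight the solution reduces to the ordinary least-squares retrieval; increasing lam trades fidelity for smoothness, which is what stabilizes the inversion against measurement noise.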

  18. Towards active image-guidance: tracking of a fiducial in the thorax during respiration under X-ray fluoroscopy

    NASA Astrophysics Data System (ADS)

    Siddique, Sami; Jaffray, David

    2007-03-01

    A central purpose of image-guidance is to assist the interventionalist with feedback of geometric performance in the direction of therapy delivery. Tradeoffs exist between accuracy, precision and the constraints imposed by parameters used in the generation of images. A framework that uses geometric performance as feedback to control these parameters can balance such tradeoffs in order to maintain the requisite localization precision for a given clinical procedure. We refer to this principle as Active Image-Guidance (AIG). This framework requires estimates of the uncertainty in the estimated location of the object of interest. In this study, a simple fiducial marker detected under X-ray fluoroscopy is considered and it is shown that a relation exists between the applied imaging dose and the uncertainty in localization for a given observer. A robust estimator of the location of a fiducial in the thorax during respiration under X-ray fluoroscopy is demonstrated using a particle filter based approach that outputs estimates of the location and the associated spatial uncertainty. This approach gives an RMSE of 1.3 mm and the uncertainty estimates are found to be correlated with the error in the estimates. Furthermore, the particle filtering approach is employed to output location estimates and the associated uncertainty not only at instances of pulsed exposure but also between exposures. Such a system has applications in image-guided interventions (surgery, radiotherapy, interventional radiology) where there are latencies between the moment of imaging and the act of intervention.
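    A particle filter that outputs both a location estimate and its associated spatial uncertainty, as used here, can be sketched in its simplest bootstrap form. This is a generic 1D sketch under assumed random-walk motion and Gaussian measurement models, not the paper's fiducial tracker:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, q=0.05, r=0.2, seed=0):
    """Bootstrap particle filter for a 1D target.

    q: process-noise std (random-walk motion model)
    r: measurement-noise std (Gaussian likelihood)
    Returns per-step posterior means (location) and stds (uncertainty).
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], r, n_particles)
    means, stds = [], []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)             # weight by likelihood
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        means.append(particles.mean())   # location estimate
        stds.append(particles.std())     # associated spatial uncertainty
    return np.array(means), np.array(stds)
```

    The per-step particle spread is exactly the kind of uncertainty estimate the AIG framework feeds back to control imaging parameters; between exposures the predict step alone would propagate (and inflate) that uncertainty.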

  19. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
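    Model-based localization of this kind typically scores candidate positions against received signal strengths predicted by a propagation model. A minimal sketch using the standard log-distance path-loss model and an exhaustive grid search (not MFAM itself — MFAM additionally models walls and adapts online; transmit power and path-loss exponent here are illustrative assumptions):

```python
import numpy as np

def rssi_locate(aps, rssi, p0, n_exp, grid):
    """Return the grid point whose predicted RSSIs best match measurements.

    Log-distance path-loss model: RSSI(d) = p0 - 10 * n_exp * log10(d),
    with p0 the received power at 1 m and n_exp the path-loss exponent.
    """
    errs = []
    for p in grid:
        d = np.maximum(np.linalg.norm(aps - p, axis=1), 1e-3)
        pred = p0 - 10.0 * n_exp * np.log10(d)
        errs.append(np.sum((pred - rssi) ** 2))
    return grid[int(np.argmin(errs))]
```

    Combining multiple frequencies simply means summing the residuals of each signal type (with its own p0 and n_exp) in the same score, which is how extra signals tighten the position estimate without extra access points.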

  20. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    PubMed Central

    Juric, Matjaz B.

    2018-01-01

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage. PMID:29587352

  1. Local staging and assessment of colon cancer with 1.5-T magnetic resonance imaging

    PubMed Central

    Blake, Helena; Jeyadevan, Nelesh; Abulafi, Muti; Swift, Ian; Toomey, Paul; Brown, Gina

    2016-01-01

    Objective: The aim of this study was to assess the accuracy of 1.5-T MRI in the pre-operative local T and N staging of colon cancer and identification of extramural vascular invasion (EMVI). Methods: Between 2010 and 2012, 60 patients with adenocarcinoma of the colon were prospectively recruited at 2 centres. 55 patients were included for final analysis. Patients received pre-operative 1.5-T MRI with high-resolution T2 weighted, gadolinium-enhanced T1 weighted and diffusion-weighted images. These were blindly assessed by two expert radiologists. Accuracy of the T-stage, N-stage and EMVI assessment was evaluated using post-operative histology as the gold standard. Results: Results are reported for two readers. Identification of T3 disease demonstrated an accuracy of 71% and 51%, sensitivity of 74% and 42% and specificity of 74% and 83%. Identification of N1 disease demonstrated an accuracy of 57% for both readers, sensitivity of 26% and 35% and specificity of 81% and 74%. Identification of EMVI demonstrated an accuracy of 74% and 69%, sensitivity 63% and 26% and specificity 80% and 91%. Conclusion: 1.5-T MRI achieved a moderate accuracy in the local evaluation of colon cancer, but cannot be recommended to replace CT on the basis of this study. Advances in knowledge: This study confirms that MRI is a viable alternative to CT for the local assessment of colon cancer, but this study does not reproduce the very high accuracy reported in the only other study to assess the accuracy of MRI in colon cancer staging. PMID:27226219
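    The accuracy, sensitivity and specificity figures reported above all derive from the standard 2×2 confusion matrix against the histological gold standard. A short sketch of those definitions (the counts in the usage example are hypothetical, not taken from this study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 confusion-matrix metrics against a gold standard."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,       # overall agreement
        "sensitivity": tp / (tp + fn),       # true-positive rate
        "specificity": tn / (tn + fp),       # true-negative rate
    }
```

    For example, hypothetical counts of 20 true positives, 5 false positives, 7 false negatives and 18 true negatives give an accuracy of 0.76, a sensitivity of about 0.74 and a specificity of about 0.78; note that a test can be highly specific (few false positives) while remaining insensitive, as seen in the N-staging results above.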

  2. The application of micro-vacuo-certo-contacting ophthalmophanto in X-ray radiosurgery for tumors in an eyeball.

    PubMed

    Li, Shuying; Wang, Yunyan; Hu, Likuan; Liang, Yingchun; Cai, Jing

    2014-11-01

    Large errors in routine localization of eyeball tumors have restricted the application of X-ray radiosurgery, simply because the eyeball can turn. To localize the target site accurately, the micro-vacuo-certo-contacting ophthalmophanto (MVCCOP) method was used, and the outcome of patients with tumors in the eyeball was evaluated. In this study, computed tomography (CT) localization accuracy was measured by repeated CT scans, using the MVCCOP to fix the eyeball during radiosurgery. The outcome of the tumors and the survival of the patients were evaluated by follow-up. The results indicated that the accuracy of CT localization of the Brown-Roberts-Wells (BRW) head ring was 0.65 mm, with a maximum error of 1.09 mm. The accuracy of target localization of tumors in the eyeball using the MVCCOP was 0.87 mm on average, with a maximum error of 1.19 mm. The errors of fixation of the eyeball averaged 0.84 mm, with a maximum of 1.17 mm. The total accuracy was 1.34 mm, and the 95% confidence accuracy was 2.09 mm. The clinical application of this method in 14 tumor patients showed satisfactory results, and all of the tumors showed clear rims. The size of ten retinoblastomas decreased significantly. The local control interval of the tumors was 6-24 months, with a median of 10.5 months. The survival of ten patients was 7-30 months, with a median of 16.5 months. The tumors remained stable or shrank in the other four patients, with angioma and melanoma. In conclusion, the MVCCOP is suitable and dependable for X-ray radiosurgery of eyeball tumors. The tumor control and survival of patients are satisfactory, and this method can effectively postpone or avoid extirpation of the eyeball.

  3. High-speed (20 kHz) digital in-line holography for transient particle tracking and sizing in multiphase flows

    DOE PAGES

    Guildenbecher, Daniel R.; Cooper, Marcia A.; Sojka, Paul E.

    2016-04-05

    High-speed (20 kHz) digital in-line holography (DIH) is applied for 3D quantification of the size and velocity of fragments formed from the impact of a single water drop onto a thin film of water, and of burning aluminum particles from the combustion of a solid rocket propellant. To address the depth-of-focus problem in DIH, a regression-based multiframe tracking algorithm is employed, and out-of-plane experimental displacement accuracy is shown to be improved by an order of magnitude. Comparison of the results with previous DIH measurements using low-speed recording shows improved positional accuracy, with the added advantage of detailed resolution of transient dynamics from single experimental realizations. Furthermore, the method is shown to be particularly advantageous for quantification of particle mass flow rates. For the investigated particle fields, the mass flow rates, measured automatically from single experimental realizations, are found to be within 8% of the expected values.

  4. Effects of morphology and wavelength on the measurement accuracy of soot volume fraction by laser extinction

    NASA Astrophysics Data System (ADS)

    Wang, Ya-fei; Huang, Qun-xing; Wang, Fei; Chi, Yong; Yan, Jian-hua

    2018-01-01

    A novel method is proposed to evaluate the quantitative effects of soot morphology and incident wavelength on the measurement accuracy of soot volume fraction by the laser extinction (LE) technique. The results indicate that the traditional LE technique would overestimate soot volume fraction if the effects of morphology and wavelength are not considered. Before the agglomeration of isolated soot primary particles, the overestimation of the LE technique is in the range of 2-20%, rising with increasing primary particle diameter and with decreasing incident wavelength. When isolated primary particles agglomerate into fractal soot aggregates, the overestimation exceeds 30%, rising with increasing primary particle number per soot aggregate, fractal dimension and fractal prefactor, and with decreasing incident wavelength, to a maximum value of 55%. Based on these results, the existing formula of the LE technique is modified; the modification factor is 0.65-0.77.
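    The classical laser-extinction relation that the abstract modifies inverts the measured transmittance through the Beer–Lambert law in the Rayleigh limit. A minimal sketch of that inversion, with the morphology/wavelength modification represented by a single multiplicative factor (the soot absorption function E(m) = 0.26 is a common literature value, assumed here, and the correction value is the 0.65-0.77 factor reported in the text):

```python
import math

def soot_volume_fraction(transmittance, path_len_m, wavelength_m,
                         e_m=0.26, correction=1.0):
    """Soot volume fraction from laser-extinction transmittance I/I0.

    Rayleigh-limit relation: I/I0 = exp(-6*pi*E(m)*fv*L / lambda),
    inverted for fv; 'correction' stands in for the morphology/wavelength
    modification factor discussed in the text.
    """
    fv = (-math.log(transmittance) * wavelength_m
          / (6.0 * math.pi * e_m * path_len_m))
    return correction * fv
```

    With correction=1.0 the function reproduces the traditional LE estimate; applying a factor of 0.65-0.77 illustrates how the modified formula removes the overestimation attributed to aggregate morphology.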

  5. A study of hierarchical clustering of galaxies in an expanding universe

    NASA Astrophysics Data System (ADS)

    Porter, D. H.

    The nonlinear hierarchical clustering of galaxies in an Einstein-deSitter (Omega = 1) model universe with initially white-noise mass fluctuations (n = 0) is investigated and shown to be in contradiction with previous results. The model is realized as an 11,000-body numerical simulation, with the boundary conditions simulated using the independent statistics of 0.72 million particles. A new method for integrating the Newtonian N-body gravity equations, which has controllable accuracy, incorporates a recursive center-of-mass reduction, and regularizes two-body encounters, is used to perform the simulation. The coordinate system used here is well suited to the investigation of galaxy clustering, incorporating the independent positions and velocities of an arbitrary number of particles into a logarithmic hierarchy of center-of-mass nodes. The boundary for the simulation is created by using this hierarchy to map the independent statistics of 0.72 million particles into just 4,000 particles. This method for simulating the boundary conditions also has controllable accuracy.

  6. Formulation to target delivery to the ciliary body and choroid via the suprachoroidal space of the eye using microneedles.

    PubMed

    Kim, Yoo Chun; Oh, Kyung Hee; Edelhauser, Henry F; Prausnitz, Mark R

    2015-09-01

    In this work, we tested the hypothesis that particles injected into the suprachoroidal space can either be localized at the site of injection or broadly distributed throughout the suprachoroidal space by controlling polymeric formulation properties. Single hollow microneedles were inserted into the sclera of New Zealand White rabbits and used to inject non-biodegradable, fluorescently tagged nanoparticles and microparticles, suspended in polymeric formulations, into the suprachoroidal space of the eye. When formulated in saline, the particles were distributed over 29-42% of the suprachoroidal space immediately after injection. To spread particles over larger areas of the choroidal surface, hyaluronic acid was added to make moderately non-Newtonian solutions, which increased particle spread to up to 100% of the suprachoroidal space. To localize particles at the site of injection adjacent to the ciliary body, strongly non-Newtonian polymer solutions confined particles to 8.3-20% of the suprachoroidal space, an area that increased only slightly over the course of two months. This study demonstrates targeted particle delivery within the suprachoroidal space using polymer formulations that either spread particles over the whole choroidal surface or localize them adjacent to the ciliary body after injection. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. New methods to detect particle velocity and mass flux in arc-heated ablation/erosion facilities

    NASA Technical Reports Server (NTRS)

    Brayton, D. B.; Bomar, B. W.; Seibel, B. L.; Elrod, P. D.

    1980-01-01

    Arc-heated flow facilities with injected particles are used to simulate the erosive and ablative/erosive environments encountered by spacecraft during re-entry through fog, clouds, thermonuclear explosions, etc. Two newly developed particle diagnostic techniques used to calibrate these facilities are discussed. One technique measures particle velocity and is based on the detection of thermal radiation and/or chemiluminescence from hot seed particles in a model ablation/erosion facility. The second technique measures a local particle rate, which is proportional to local particle mass flux, in a dust erosion facility by photodetecting and counting the interruptions of a focused laser beam by individual particles.

  8. Incremental social learning in particle swarms.

    PubMed

    de Oca, Marco A Montes; Stutzle, Thomas; Van den Enden, Ken; Dorigo, Marco

    2011-04-01

    Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful to improve the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), which is a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning on PSO algorithms by running IPSO and IPSOLS on problems with different fitness distance correlations.
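The biased initialization rule described above can be sketched as follows; this is an illustrative reading of the rule (sample a uniform random position, then move a uniform random fraction of the way toward the best-so-far solution), not the authors' reference implementation:

```python
import random

def init_new_particle(best, lower, upper, rng=random):
    """Initial position of a particle added to a growing swarm: a uniform
    random point in the box [lower, upper], moved a uniform random
    fraction of the way toward the best-so-far solution (illustrative
    sketch of the biased initialization rule)."""
    x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
    return [xi + rng.random() * (bi - xi) for xi, bi in zip(x, best)]
```

Because each coordinate lands between the random sample and the best-so-far value, new particles remain within the search bounds while concentrating probability mass near the incumbent solution, which is the kind of density the paper derives analytically.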

  9. Interaction between colloidal particles on an oil-water interface in dilute and dense phases.

    PubMed

    Parolini, Lucia; Law, Adam D; Maestro, Armando; Buzza, D Martin A; Cicuta, Pietro

    2015-05-20

    The interaction between micron-sized charged colloidal particles at polar/non-polar liquid interfaces remains surprisingly poorly understood for such a relatively simple physical chemistry system. By measuring the pair correlation function g(r) for different densities of polystyrene particles at the decane-water interface, and using a powerful predictor-corrector inversion scheme, effective pair-interaction potentials can be obtained up to fairly high densities; these reproduce the experimental g(r) in forward simulations, so they are self-consistent. While at low densities these potentials agree with published dipole-dipole repulsion, measured by various methods, an apparent density dependence and long-range attraction are obtained when the density is higher. This condition is thus explored in an alternative fashion, by measuring the local mobility of colloids when confined by their neighbors. This method of extracting interaction potentials gives results that are consistent with dipolar repulsion throughout the concentration range, with the same magnitude as in the dilute limit. We are unable to rule out the density dependence based on the experimental accuracy of our data, but we show that incomplete equilibration of the experimental system, which would be possible despite long waiting times due to the very strong repulsions, is a possible cause of artefacts in the inverted potentials. We conclude that, to within the precision of these measurements, the dilute pair potential remains valid at high density in this system.
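For readers unfamiliar with the central measured quantity, a minimal pair correlation function g(r) for particles on a periodic 2-D interface can be sketched as a histogram estimator (illustrative only, not the paper's predictor-corrector inversion scheme):

```python
import math

def pair_correlation_2d(points, box, dr, r_max):
    """Radial distribution function g(r) for (x, y) points in a periodic
    2-D box of size (Lx, Ly), using the minimum-image convention.
    Returns one g value per radial bin of width dr up to r_max."""
    n = len(points)
    lx, ly = box
    density = n / (lx * ly)
    nbins = int(r_max / dr)
    counts = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            dx -= lx * round(dx / lx)   # minimum-image convention
            dy -= ly * round(dy / ly)
            r = math.hypot(dx, dy)
            if r < r_max:
                counts[int(r / dr)] += 2  # count the pair for both particles
    g = []
    for k in range(nbins):
        shell = math.pi * ((k + 1) ** 2 - k ** 2) * dr ** 2  # annulus area
        g.append(counts[k] / (n * density * shell))
    return g
```

On a structureless (ideal-gas) configuration this estimator tends to 1 at all r; peaks above 1 reveal preferred neighbor distances, which is the raw signal the inversion scheme turns into an effective pair potential.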

  10. On modeling weak sinks in MODPATH

    USGS Publications Warehouse

    Abrams, Daniel B.; Haitjema, Henk; Kauffman, Leon J.

    2012-01-01

    Regional groundwater flow systems often contain both strong sinks and weak sinks. A strong sink extracts water from the entire aquifer depth, while a weak sink lets some water pass underneath or over the actual sink. The numerical groundwater flow model MODFLOW may allow a sink cell to act as a strong or weak sink, hence extracting all water that enters the cell or allowing some of that water to pass. A physical strong sink can be modeled by either a strong sink cell or a weak sink cell, with the latter generally occurring in low resolution models. Likewise, a physical weak sink may also be represented by either type of sink cell. The representation of weak sinks in the particle tracing code MODPATH is more equivocal than in MODFLOW. With the appropriate parameterization of MODPATH, particle traces and their associated travel times to weak sink streams can be modeled with adequate accuracy, even in single layer models. Weak sink well cells, on the other hand, require special measures as proposed in the literature to generate correct particle traces and individual travel times and hence capture zones. We found that the transit time distributions for well water generally do not require special measures provided aquifer properties are locally homogeneous and the well draws water from the entire aquifer depth, an important observation for determining the response of a well to non-point contaminant inputs.

  11. Analysis of nanoparticles using photonic nanojet

    NASA Astrophysics Data System (ADS)

    Li, Xu; Chen, Zhigang; Siegel, Michael P.; Taflove, Allen; Backman, Vadim

    2005-04-01

    A photonic nanojet is a local field enhancement generated in the vicinity of a properly chosen microsphere or microcylinder illuminated by a collimated light beam. These photonic nanojets have waists smaller than the diffraction limit and propagate over several optical wavelengths without significant diffraction. We investigate the properties of photonic nanojets using rigorous solutions of Maxwell's equations. A remarkable property we have found is that they can significantly enhance the backscattering of light by nanometer-scale particles (as small as ~1 nm) located within the jets. The enhancement factor for the backscattering intensity can be as high as five to six orders of magnitude. As a result, the observed intensity of the backscattered light from the dielectric microsphere can be substantially altered by the presence of a nanoparticle within the light jet. Furthermore, the intensity and angular distribution of the backscattered signal are extremely sensitive to the size of the nanoparticle, which may enable differentiating particles with an accuracy of up to 1 nm. These properties make photonic nanojets an ideal tool for detecting, differentiating and sorting nanoparticles, capabilities of immense value to the field of nano-biotechnology. For example, they could yield potential novel ultramicroscopy techniques using visible light for detecting proteins, viral particles, and even single molecules, and for monitoring molecular synthesis and aggregation processes of importance in many areas of biology, chemistry, materials science, and tissue engineering.

  12. Boundary effect on the elastic field of a semi-infinite solid containing inhomogeneities

    PubMed Central

    Liu, Y. J.; Song, G.; Yin, H. M.

    2015-01-01

    The boundary effect of one inhomogeneity embedded in a semi-infinite solid at different depths is first investigated using the fundamental solution for Mindlin's problem. Expanding the eigenstrain in a polynomial form and using Eshelby's equivalent inclusion method, one can calculate the eigenstrain and thus obtain the elastic field. When the inhomogeneity is far from the boundary, the solution recovers Eshelby's solution. The method has been extended to a many-particle system in a semi-infinite solid, which is first demonstrated for the case of two spheres. Comparison of the asymptotic-form solution with finite-element results shows the accuracy and capability of this method. The solution has been used to illustrate the boundary effects on the effective material behaviour of a semi-infinite simple-cubic-lattice particulate composite. The local field of a semi-infinite composite has been calculated at different volume fractions. A representative unit cell has been taken at different depths from the surface. The average stress and strain of the unit cell have been calculated under uniform loading by a normal or shear force on the surface, respectively. The effective elastic moduli of the unit cell depend not only on the material proportions, but also on the distance to the surface. The present model can be extended to other types of particle distribution and to ellipsoidal particles. PMID:26345084

  13. Boundary effect on the elastic field of a semi-infinite solid containing inhomogeneities.

    PubMed

    Liu, Y J; Song, G; Yin, H M

    2015-07-08

    The boundary effect of one inhomogeneity embedded in a semi-infinite solid at different depths is first investigated using the fundamental solution for Mindlin's problem. Expanding the eigenstrain in a polynomial form and using Eshelby's equivalent inclusion method, one can calculate the eigenstrain and thus obtain the elastic field. When the inhomogeneity is far from the boundary, the solution recovers Eshelby's solution. The method has been extended to a many-particle system in a semi-infinite solid, which is first demonstrated for the case of two spheres. Comparison of the asymptotic-form solution with finite-element results shows the accuracy and capability of this method. The solution has been used to illustrate the boundary effects on the effective material behaviour of a semi-infinite simple-cubic-lattice particulate composite. The local field of a semi-infinite composite has been calculated at different volume fractions. A representative unit cell has been taken at different depths from the surface. The average stress and strain of the unit cell have been calculated under uniform loading by a normal or shear force on the surface, respectively. The effective elastic moduli of the unit cell depend not only on the material proportions, but also on the distance to the surface. The present model can be extended to other types of particle distribution and to ellipsoidal particles.

  14. Development of proton computed tomography detectors for applications in hadron therapy

    NASA Astrophysics Data System (ADS)

    Bashkirov, Vladimir A.; Johnson, Robert P.; Sadrozinski, Hartmut F.-W.; Schulte, Reinhard W.

    2016-02-01

    Radiation therapy with protons and heavier ions is an attractive form of cancer treatment that could enhance local control and survival for cancers that are currently difficult to cure, and lead to fewer side effects through sparing of normal tissues. However, particle therapy faces a significant technical challenge: one cannot accurately predict the particle range in the patient using data provided by existing imaging technologies. Proton computed tomography (pCT) is an emerging imaging modality capable of improving the accuracy of range prediction. In this paper, we describe the successive pCT scanners designed and built by our group with the goal of supporting particle therapy treatment planning and image guidance by reconstructing an accurate 3D map of the stopping power relative to water in patient tissues. The pCT scanners we have built to date consist of silicon telescopes, which track each proton before and after the object to be reconstructed, and an energy or range detector, which measures the residual energy and/or range of the protons, used to evaluate the water-equivalent path length (WEPL) of each proton in the object. An overview of a decade-long evolution of the conceptual design of pCT scanners and their calibration is given. Results of scanner performance tests are presented, which demonstrate that the latest pCT scanner approaches readiness for clinical application in hadron therapy.

  15. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

    Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to within +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than multiple local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences from the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials.
Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu
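The error-surface analysis described above can be illustrated with a schematic grid search over the two fitted parameters; `forward_model` is a stand-in for a full Hapke forward model, and all names (and the toy model used below) are assumptions rather than the authors' code:

```python
import math

def fit_mixture(measured, forward_model, fractions, diameters):
    """Exhaustive grid search over endmember weight fraction f and
    particle diameter d, minimizing the RMS error between a modeled and
    a measured reflectance spectrum. forward_model(f, d) must return a
    spectrum sampled at the same wavelengths as `measured`.
    Returns (rms_error, weight_fraction, particle_diameter)."""
    best = None
    for f in fractions:
        for d in diameters:
            model = forward_model(f, d)
            rms = math.sqrt(sum((m - s) ** 2 for m, s in zip(model, measured))
                            / len(measured))
            if best is None or rms < best[0]:
                best = (rms, f, d)
    return best
```

Scanning the (weight fraction, particle diameter) grid this way exposes the error-surface shape discussed in the abstract: whether the minimum is unique, and how sharply RMS error varies along each parameter axis.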

  16. Local Discontinuous Galerkin (LDG) Method for Advection of Active Compositional Fields with Discontinuous Boundaries: Demonstration and Comparison with Other Methods in the Mantle Convection Code ASPECT

    NASA Astrophysics Data System (ADS)

    He, Y.; Billen, M. I.; Puckett, E. G.

    2015-12-01

    Flow in the Earth's mantle is driven by thermo-chemical convection in which the properties and geochemical signatures of rocks vary depending on their origin and composition. For example, tectonic plates are composed of compositionally distinct layers of crust, residual lithosphere and fertile mantle, while in the lower-most mantle there are large compositionally distinct "piles" with thinner lenses of different material. Therefore, tracking active or passive fields with distinct compositional, geochemical or rheologic properties is important for incorporating physical realism into mantle convection simulations, and for investigating the long-term mixing properties of the mantle. The difficulty in numerically advecting fields arises because they are non-diffusive and have sharp boundaries, and therefore require different methods than those usually used for temperature. Previous methods for tracking fields include the marker-chain, tracer particle, and field-correction (e.g., the Lenardic Filter) methods: each of these has different advantages or disadvantages, trading off computational speed against accuracy in tracking feature boundaries. Here we present a method for modeling active fields in mantle dynamics simulations using a new solver implemented in the deal.II package that underlies the ASPECT software. The new solver for the advection-diffusion equation uses a Local Discontinuous Galerkin (LDG) algorithm, which combines features of both finite element and finite volume methods, and is particularly suitable for problems with a dominant first-order term and discontinuities. Furthermore, we have applied a post-processing technique to ensure that the solution satisfies global maximum/minimum bounds. One potential drawback of the LDG method is that the total number of degrees of freedom is larger than for the finite element method. 
To demonstrate the capabilities of this new method we present results for two benchmarks used previously: a falling cube with distinct buoyancy and viscosity, and a Rayleigh-Taylor instability of a compositionally buoyant layer. To evaluate the trade-offs in computational speed and solution accuracy we present results for these same benchmarks using the two field tracking methods available in ASPECT: active tracer particles and the entropy viscosity method.
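The bounds-preservation issue that motivates the LDG post-processing step can be seen even in a much simpler scheme. The sketch below (illustrative only, not ASPECT code) advects a discontinuous compositional field with first-order upwinding on a periodic 1-D grid; because each update is a convex combination of old values, no new extrema are created, although the discontinuity diffuses:

```python
def advect_upwind(field, velocity, dx, dt, steps):
    """Advect a (possibly discontinuous) compositional field with a
    first-order upwind finite-volume scheme on a periodic 1-D grid.
    Assumes velocity > 0 and the CFL condition velocity * dt / dx <= 1,
    under which each new cell value is a convex combination of old
    values, so the field stays within its initial min/max bounds."""
    c = velocity * dt / dx  # Courant number
    u = list(field)
    n = len(u)
    for _ in range(steps):
        # u[i - 1] wraps to u[-1] at i = 0, giving periodic boundaries
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return u
```

Higher-order methods such as LDG sharpen the transported boundary dramatically, but without limiting or post-processing they can overshoot it, which is exactly the trade-off the benchmarks above are designed to measure.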

  17. Evaluation Methodology between Globalization and Localization Features Approaches for Skin Cancer Lesions Classification

    NASA Astrophysics Data System (ADS)

    Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.

    2018-05-01

    Huge efforts have been put into developing diagnostic methods for skin cancer. In this paper, two different approaches to detecting skin cancer in dermoscopy images are addressed. The first approach is a global method that uses global features for classifying skin lesions, whereas the second is a local method that uses local features. The aim of this paper is to select the best approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The achieved results are a sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97% for the globalization approach, versus a sensitivity of about 100%, specificity of about 100%, precision of about 100%, and accuracy of about 100% for the localization approach. These results show that the localization approach achieves acceptable accuracy and outperforms the globalization approach for skin cancer lesion classification.
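The four reported figures follow from standard confusion-matrix definitions; as a reminder, they can be computed as below (illustrative code, and the counts in the usage example are hypothetical, not the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision and accuracy from the four
    confusion-matrix counts (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)            # recall on positive cases
    specificity = tn / (tn + fp)            # recall on negative cases
    precision = tp / (tp + fp)              # reliability of positive calls
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy
```

For example, hypothetical counts of tp=48, fp=0, tn=150, fn=2 give sensitivity 0.96 with perfect specificity and precision, the same pattern as the globalization-approach results quoted above.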

  18. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer

    PubMed Central

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms in general get trapped in local optima and suffer premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case for wind speed prediction to expound the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463
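A population initialization from a logistic chaotic sequence, as named above, can be sketched as follows; the seed value and function name are illustrative assumptions:

```python
def logistic_chaotic_population(pop_size, dim, lower, upper, seed=0.7):
    """Initialize a population by iterating the logistic map
    x <- 4 * x * (1 - x), which is fully chaotic at control parameter 4,
    and mapping each iterate from (0, 1) onto the search bounds.
    `seed` must lie in (0, 1) and avoid the map's fixed/periodic points
    (0.25, 0.5, 0.75) so the sequence does not collapse."""
    x = seed
    population = []
    for _ in range(pop_size):
        individual = []
        for d in range(dim):
            x = 4.0 * x * (1.0 - x)  # logistic map iteration
            individual.append(lower[d] + x * (upper[d] - lower[d]))
        population.append(individual)
    return population
```

Compared with uniform random sampling, the chaotic sequence is deterministic yet non-repeating and spreads points across the whole domain, which is the diversity-enhancing property the abstract attributes to this scheme.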

  19. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms in general get trapped in local optima and suffer premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case for wind speed prediction to expound the superiority of the proposed RBF training algorithm in terms of prediction accuracy.

  20. An Adaptive Scheme for Robot Localization and Mapping with Dynamically Configurable Inter-Beacon Range Measurements

    PubMed Central

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-01-01

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938

  1. An adaptive scheme for robot localization and mapping with dynamically configurable inter-beacon range measurements.

    PubMed

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-04-25

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption.

  2. Correlative Light-Electron Microscopy of Lipid-Encapsulated Fluorescent Nanodiamonds for Nanometric Localization of Cell Surface Antigens.

    PubMed

    Hsieh, Feng-Jen; Chen, Yen-Wei; Huang, Yao-Kuan; Lee, Hsien-Ming; Lin, Chun-Hung; Chang, Huan-Cheng

    2018-02-06

    Containing an ensemble of nitrogen-vacancy centers in crystal matrices, fluorescent nanodiamonds (FNDs) are a new type of photostable markers that have found wide applications in light microscopy. The nanomaterial also has a dense carbon core, making it visible to electron microscopy. Here, we show that FNDs encapsulated in biotinylated lipids (bLs) are useful for subdiffraction imaging of antigens on cell surface with correlative light-electron microscopy (CLEM). The lipid encapsulation enables not only good dispersion of the particles in biological buffers but also high specific labeling of live cells. By employing the bL-encapsulated FNDs to target CD44 on HeLa cell surface through biotin-mediated immunostaining, we obtained the spatial distribution of these antigens by CLEM with a localization accuracy of ∼50 nm in routine operations. A comparative study with dual-color imaging, in which CD44 was labeled with FND and MICA/MICB was labeled with Alexa Fluor 488, demonstrated the superior performance of FNDs as fluorescent fiducial markers for CLEM of cell surface antigens.

  3. Simulation of enhanced deposition due to magnetic field alignment of ellipsoidal particles in a lung bifurcation.

    PubMed

    Martinez, R C; Roshchenko, A; Minev, P; Finlay, W H

    2013-02-01

    Aerosolized chemotherapy has been recognized as a potential treatment for lung cancer. The challenge of providing sufficient therapeutic effects without reaching dose-limiting toxicity levels hinders the development of aerosolized chemotherapy. This could be mitigated by increasing drug-delivery efficiency with a noninvasive drug-targeting delivery method. The purpose of this study is to use direct numerical simulations to study the resulting local enhancement of deposition due to magnetic field alignment of high aspect ratio particles. High aspect ratio particles were approximated by a rigid ellipsoid with a minor diameter of 0.5 μm and fluid particle density ratio of 1,000. Particle trajectories were calculated by solving the coupled fluid particle equations using an in-house micro-macro grid finite element algorithm based on a previously developed fictitious domain approach. Particle trajectories were simulated in a morphologically realistic geometry modeling a symmetrical terminal bronchiole bifurcation. Flow conditions were steady inspiratory air flow due to typical breathing at 18 L/min. Deposition efficiency was estimated for two different cases: [1] particles aligned with the streamlines and [2] particles with fixed angular orientation simulating the magnetic field alignment of our previous in vitro study. The local enhancement factor defined as the ratio between deposition efficiency of Case [1] and Case [2] was found to be 1.43 and 3.46 for particles with an aspect ratio of 6 and 20, respectively. Results indicate that externally forcing local alignment of high aspect ratio particles can increase local deposition considerably.

  4. Dynamics of the one-dimensional Anderson insulator coupled to various bosonic baths

    NASA Astrophysics Data System (ADS)

    Bonča, Janez; Trugman, Stuart A.; Mierzejewski, Marcin

    2018-05-01

    We study a particle which propagates in a one-dimensional strong random potential and is coupled to a bosonic bath. We independently test various properties of the bosons (hopping term, hard-core effects, and generic boson-boson interaction) and show that bosonic itinerancy is the essential ingredient governing the dynamics of the particle. Coupling of the particle to itinerant phonons or hard-core bosons alike leads to delocalization of the particle by virtue of a subdiffusive (or diffusive) spread from the initially localized state. Delocalization remains in effect even when the boson frequency and the bandwidth of itinerant bosons are an order of magnitude smaller than the magnitude of the random potential. When the particle is coupled to localized bosons, its spread remains logarithmic or even sublogarithmic. The latter result, together with the survival probability, shows that the particle remains localized despite being coupled to bosons.

  5. Determination of localization accuracy based on experimentally acquired image sets: applications to single molecule microscopy

    PubMed Central

    Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.

    2015-01-01

    Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
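The quantity at the heart of this record, the Cramér-Rao lower bound obtained from the Fisher information of an image model, can be sketched numerically. The Gaussian profile below is only an illustrative stand-in for the experimentally measured point spread functions the paper works with, and all names are assumptions:

```python
import math

def fisher_localization_bound(mu, dmu_dx):
    """Cramér-Rao lower bound (a standard deviation) on the location of
    a point source, from a sampled image model: mu[k] is the expected
    photon count in pixel k and dmu_dx[k] its derivative with respect to
    the source position. Assumes independent Poisson pixel counts, for
    which the Fisher information is sum((dmu/dx)**2 / mu)."""
    info = sum(d * d / m for m, d in zip(mu, dmu_dx) if m > 0.0)
    return 1.0 / math.sqrt(info)

def gaussian_psf_model(x0, sigma, n_photons, pixels, dx):
    """Expected counts and their position derivative for a normalized
    1-D Gaussian PSF centred at x0, with pixel width dx (an illustrative
    stand-in for an experimentally measured PSF)."""
    norm = n_photons * dx / (sigma * math.sqrt(2.0 * math.pi))
    mu, dmu = [], []
    for x in pixels:
        g = norm * math.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2))
        mu.append(g)
        dmu.append(g * (x - x0) / sigma ** 2)
    return mu, dmu
```

For a finely sampled Gaussian with N expected photons this numerical bound reproduces the textbook limit sigma/sqrt(N); substituting a spline fitted to an experimentally collected image set for the Gaussian gives a bound of the kind the paper computes for general objects.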

  6. Investigations of interference between electromagnetic transponders and wireless MOSFET dosimeters: a phantom study.

    PubMed

    Su, Zhong; Zhang, Lisha; Ramakrishnan, V; Hagan, Michael; Anscher, Mitchell

    2011-05-01

    To evaluate both the Calypso System's (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of a dose verification system (DVS; Sicel Technologies, Inc., Morrisville, NC) and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters and were arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with and without dosimeters present (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with and without transponders present (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which was substantially smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. For both orthogonal and parallel configurations, polynomial-fit doses agreed with measured doses to within 1.75%. The phantom study indicated that the Calypso System's localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters, and that the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by combined use of the two systems.

  7. Localization of insulinomas to regions of the pancreas by intraarterial calcium stimulation: the NIH experience.

    PubMed

    Guettier, Jean-Marc; Kam, Anthony; Chang, Richard; Skarulis, Monica C; Cochran, Craig; Alexander, H Richard; Libutti, Steven K; Pingpank, James F; Gorden, Phillip

    2009-04-01

    Selective intraarterial calcium injection of the major pancreatic arteries with hepatic venous sampling [calcium arterial stimulation (CaStim)] has been used as a localizing tool for insulinomas at the National Institutes of Health (NIH) since 1989. The accuracy of this technique for localizing insulinomas was reported for all cases until 1996. The aim of the study was to assess the accuracy and track record of the CaStim over time and in the context of evolving technology and to review issues related to result interpretation and procedure complications. CaStim was the only invasive preoperative localization modality used at our center. Endoscopic ultrasound (US) was not studied. We conducted a retrospective case review at a referral center. Twenty-nine women and 16 men (mean age, 47 yr; range, 13-78) were diagnosed with an insulinoma from 1996-2008. A supervised fast was conducted to confirm the diagnosis of insulinoma. US, computed tomography (CT), magnetic resonance imaging (MRI), and CaStim were used as preoperative localization studies. Localization predicted by each preoperative test was compared to surgical localization for accuracy. We measured the accuracy of US, CT, MRI, and CaStim for localization of insulinomas preoperatively. All 45 patients had surgically proven insulinomas. Thirty-eight of 45 (84%) localized to the correct anatomical region by CaStim. In five of 45 (11%) patients, the CaStim was falsely negative. Two of 45 (4%) had false-positive localizations. The CaStim has remained vastly superior to abdominal US, CT, or MRI over time as a preoperative localizing tool for insulinomas. The utility of the CaStim for this purpose and in this setting is thus validated.

  8. The effect of using genealogy-based haplotypes for genomic prediction

    PubMed Central

    2013-01-01

    Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971

  9. The effect of using genealogy-based haplotypes for genomic prediction.

    PubMed

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
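    Prediction model (1), which assumes a common effect distribution for all haplotype covariates, is equivalent to ridge regression on the covariate matrix (the SNP-BLUP/GBLUP equivalence). A minimal sketch with simulated data; the dimensions, variances, and ridge parameter are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50                      # individuals, haplotype covariates
Z = rng.integers(0, 3, size=(n, p)).astype(float)  # covariate counts 0/1/2
u_true = rng.normal(0.0, 0.3, size=p)              # true covariate effects
y = Z @ u_true + rng.normal(0.0, 1.0, size=n)      # phenotypes with noise

# GBLUP-equivalent ridge regression: (Z'Z + lam*I) u_hat = Z'y,
# where lam plays the role of sigma_e^2 / sigma_u^2
lam = 10.0
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# Accuracy of prediction: correlation of true and predicted genetic values
g_true, g_hat = Z @ u_true, Z @ u_hat
acc = float(np.corrcoef(g_true, g_hat)[0, 1])
```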

  10. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.

    2016-05-15

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, and 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame-based radiosurgery systems.

  11. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  12. Localization accuracy of sphere fiducials in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias

    2014-03-01

    In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
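    Of the localization methods compared, template matching based on cross correlation gave the lowest FLE. A minimal 2-D sketch of that idea, assuming a synthetic soft-edged disc as a stand-in for a sphere fiducial's cross-section (real use would operate on CT volumes with subvoxel interpolation):

```python
import numpy as np

def template_match(image, template):
    """Locate a template in an image by normalized cross-correlation and
    return the (row, col) centre of the best-matching window."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw] - image[i:i + th, j:j + tw].mean()
            denom = np.sqrt((w * w).sum()) * tn
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos[0] + (th - 1) / 2.0, best_pos[1] + (tw - 1) / 2.0

# Synthetic soft-edged disc at a known centre, plus a little noise
yy, xx = np.mgrid[0:64, 0:64]
def disc(cy, cx, r=5.0):
    return 1.0 / (1.0 + np.exp(np.hypot(yy - cy, xx - cx) - r))

img = disc(30, 41) + 0.01 * np.random.default_rng(1).normal(size=(64, 64))
tmpl = disc(32, 32)[24:41, 24:41]   # 17x17 template centred on the disc
cy, cx = template_match(img, tmpl)
```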

  13. Anderson localization and Mott insulator phase in the time domain

    PubMed Central

    Sacha, Krzysztof

    2015-01-01

    Particles in space-periodic potentials constitute standard models for the investigation of crystalline phenomena in solid-state physics. Time periodicity of periodically driven systems is a close analogue of the space periodicity of solid-state crystals. An intriguing question is whether solid-state phenomena can be observed in the time domain. Here we show that wave-packets localized on resonant classical trajectories of periodically driven systems are ideal elements to realize Anderson localization or the Mott insulator phase in the time domain. Uniform superpositions of the wave-packets form stationary states of a periodically driven particle. However, an additional perturbation that fluctuates in time results in disorder in time, and Anderson localization effects emerge. Switching to many-particle systems, we observe that, depending on how strong particle interactions are, stationary states can be Bose-Einstein condensates or single Fock states where definite numbers of particles occupy the periodically evolving wave-packets. Our study shows that non-trivial crystal-like phenomena can be observed in the time domain. PMID:26074169

  14. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

    PubMed

    Hadwin, Paul J; Peterson, Sean D

    2017-04-01

    The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
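    A minimal sketch of the EKF idea for a scalar, slowly varying parameter observed through a nonlinear map. The random-walk state model, the cubic observation function, and all noise levels below are illustrative assumptions, not the reduced-order vocal fold model itself:

```python
import numpy as np

def ekf_step(x, P, y, h, h_jac, Q, R):
    """One extended Kalman filter predict/update for a scalar random-walk
    state x observed through the nonlinear function h."""
    x_pred, P_pred = x, P + Q          # predict: random walk, covariance grows
    H = h_jac(x_pred)                  # linearize observation about prediction
    S = H * P_pred * H + R             # innovation covariance (scalar case)
    K = P_pred * H / S                 # Kalman gain
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a fixed parameter theta = 1.5 through the nonlinear map h(x) = x**3
rng = np.random.default_rng(2)
theta, R = 1.5, 0.05
x, P = 0.8, 1.0                        # deliberately poor initial guess
for _ in range(300):
    y = theta**3 + rng.normal(0.0, np.sqrt(R))
    x, P = ekf_step(x, P, y, lambda v: v**3, lambda v: 3.0 * v * v, 1e-5, R)
```

    After a few hundred updates the state estimate settles near the true parameter with a small posterior variance, at the cost of a single linearization per step rather than propagating thousands of particles.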

  15. Colloids exposed to random potential energy landscapes: From particle number density to particle-potential and particle-particle interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bewerunge, Jörg; Capellmann, Ronja F.; Platten, Florian

    2016-07-28

    Colloidal particles were exposed to a random potential energy landscape that has been created optically via a speckle pattern. The mean particle density as well as the potential roughness, i.e., the disorder strength, were varied. The local probability density of the particles as well as its main characteristics were determined. For the first time, the disorder-averaged pair density correlation function g^(1)(r) and an analogue of the Edwards-Anderson order parameter g^(2)(r), which quantifies the correlation of the mean local density among disorder realisations, were measured experimentally and shown to be consistent with replica liquid state theory results.

  16. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf (Canis lupus) is naturally recolonizing its former habitats in Europe, where it was extirpated during the previous two centuries. The management of this protected species is often controversial, and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with acoustic properties similar to howls. This sound was broadcast at several sites, localization estimates and their accuracy were calculated, and linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, demonstrating the potential of this tool. In addition, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error associated with the estimated coordinates, some unreliable values were excluded and the mean error decreased to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of acoustic methods for localizing wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and will thus contribute to enhancing conservation and management plans.
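    Localization with a microphone array of this kind is typically based on time differences of arrival (TDOA). A minimal sketch, assuming a known speed of sound, noise-free arrival-time differences, and a simple grid search in place of the authors' actual estimator:

```python
import numpy as np

C = 340.0  # assumed speed of sound in air, m/s

def tdoa_localize(mics, tdoas, extent=1000.0, step=10.0):
    """Grid-search the source position that minimizes the squared mismatch
    between predicted and measured TDOAs (microphone 0 is the reference)."""
    grid = np.arange(-extent, extent + step, step)
    best, best_xy = np.inf, None
    for x in grid:
        for y in grid:
            d = np.hypot(mics[:, 0] - x, mics[:, 1] - y)
            err = np.sum(((d[1:] - d[0]) / C - tdoas) ** 2)
            if err < best:
                best, best_xy = err, (float(x), float(y))
    return best_xy

# Four recorders on an 800 m square and a source outside the array
mics = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0], [800.0, 800.0]])
src = np.array([310.0, -120.0])
d = np.hypot(mics[:, 0] - src[0], mics[:, 1] - src[1])
tdoas = (d[1:] - d[0]) / C             # noise-free arrival-time differences
est = tdoa_localize(mics, tdoas)
```

    In the field, timing jitter, clock drift between autonomous recorders, and propagation effects inflate the error well beyond the grid resolution, which is consistent with the hundreds-of-metres accuracies reported above.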

  17. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they prevent contact explicitly and thereby eliminate a catastrophic loss of accuracy. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of discretization points required for stability.

  18. Accurate Quasiparticle Spectra from the T-Matrix Self-Energy and the Particle-Particle Random Phase Approximation.

    PubMed

    Zhang, Du; Su, Neil Qiang; Yang, Weitao

    2017-07-20

    The GW self-energy, especially G0W0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.

  19. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel-cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.
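    The size-frequency comparisons above reduce to standard grain-size statistics over the set of measured diameters. A minimal sketch with simulated log-normal diameters standing in for digitized photographic measurements; none of the numbers are the study's data:

```python
import numpy as np

# Simulated intermediate-axis diameters (mm); a log-normal sample stands in
# for the ~989 particles measured per site from digitized photographs
rng = np.random.default_rng(5)
diameters = np.exp(rng.normal(np.log(30.0), 0.8, size=989))

# Standard grain-size percentiles summarizing the size-frequency distribution
d16, d50, d84 = np.percentile(diameters, [16, 50, 84])

# Surface fines: fraction of measured particles at or below 2 mm
fines_fraction = float(np.mean(diameters <= 2.0))
```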

  20. Design of a device for simultaneous particle size and electrostatic charge measurement of inhalation drugs.

    PubMed

    Zhu, Kewu; Ng, Wai Kiong; Shen, Shoucang; Tan, Reginald B H; Heng, Paul W S

    2008-11-01

    To develop a device for simultaneous measurement of particle aerodynamic diameter and electrostatic charge of inhalation aerosols. An integrated system consisting of an add-on charge measurement device and a liquid impinger was developed to simultaneously determine particle aerodynamic diameter and electrostatic charge. The accuracy in charge measurement and fine particle fraction characterization of the new system was evaluated. The integrated system was then applied to analyze the electrostatic charges of a DPI formulation composed of salbutamol sulphate-Inhalac 230 dispersed using a Rotahaler. The charge measurement accuracy was comparable with the Faraday cage method, and incorporation of the charge measurement module had no effect on the performance of the liquid impinger. Salbutamol sulphate carried negative charges, while the net charge of Inhalac 230 and undispersed salbutamol sulphate was found to be positive after being aerosolized from the inhaler. The instantaneous current signal was strong with a small noise-to-signal ratio, and good reproducibility of the charge-to-mass ratio was obtained for the DPI system investigated. A system for simultaneously measuring particle aerodynamic diameter and aerosol electrostatic charges has been developed, and the system provides a non-intrusive and reliable electrostatic charge characterization method for inhalation dosage forms.

  1. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for a fluid flow or for particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need for filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
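    The principle of the multi-time-step approach can be sketched as follows: position noise contributes a term proportional to 1/dt² to the measured velocity variance, so fitting variances measured at several time steps against 1/dt² and extrapolating to the intercept cancels the noise. A toy 1-D illustration with synthetic data (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_v, sigma_pos = 0.5, 0.02        # true rms velocity, position noise (arb. units)
n = 200_000
true_v = rng.normal(0.0, sigma_v, size=n)

dts = np.array([0.01, 0.02, 0.04, 0.08])
meas_var = []
for dt in dts:
    # displacement over dt, with independent position noise at both frames
    dx = true_v * dt + rng.normal(0.0, sigma_pos, n) - rng.normal(0.0, sigma_pos, n)
    meas_var.append(np.var(dx / dt))  # = sigma_v^2 + 2*sigma_pos^2/dt^2

# Fit measured variance against 1/dt^2: the intercept is the noise-free
# velocity variance and the slope recovers 2*sigma_pos^2
slope, intercept = np.polyfit(1.0 / dts**2, np.array(meas_var), 1)
```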

  2. Local lubrication model for spherical particles within incompressible Navier-Stokes flows.

    PubMed

    Lambert, B; Weynans, L; Bergmann, M

    2018-03-01

    The lubrication forces are short-range hydrodynamic interactions that are essential for describing particle suspensions. They are usually underestimated in direct numerical simulations of particle-laden flows. In this paper, we propose a lubrication model for a coupled volume penalization method and discrete element method solver that estimates the unresolved hydrodynamic forces and torques in an incompressible Navier-Stokes flow. Corrections are made locally on the surfaces of the interacting particles without any assumption on the global particle shape. The numerical model has been validated against experimental data and performs as well as existing numerical models that are limited to spherical particles.

  3. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    PubMed

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

    We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data were processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.

  4. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leimkuhler, Benedict, E-mail: b.leimkuhler@ed.ac.uk; Shang, Xiaocheng, E-mail: x.shang@brown.edu

    2016-11-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé–Hoover–Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees–Edwards boundary conditions to induce shear flow.

  5. Underwater sonar image detection: A combination of non-local spatial information and quantum-inspired shuffled frog leaping algorithm.

    PubMed

    Wang, Xingmei; Liu, Shu; Liu, Zhipeng

    2017-01-01

This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs with non-local spatial information and seriously degrades denoising performance in sonar images, is solved by introducing a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect objects in sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve searching ability and detection accuracy while further reducing time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets and benchmark functions demonstrate the effectiveness and adaptability of the proposed method.
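The intra-/inter-class fitness idea can be sketched as an Otsu-style criterion for a candidate segmentation threshold. Note this is a plausible surrogate for illustration, not the paper's exact fitness definition.

```python
# Otsu-style surrogate for the fitness the abstract describes: reward
# large inter-class difference and small intra-class difference for a
# candidate threshold t.  Not the paper's exact formulation.
def threshold_fitness(pixels, t):
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    m_lo, m_hi = sum(lo) / len(lo), sum(hi) / len(hi)
    w_lo = len(lo) / len(pixels)
    w_hi = 1.0 - w_lo
    inter = w_lo * w_hi * (m_hi - m_lo) ** 2            # between-class spread
    intra = (w_lo * sum((p - m_lo) ** 2 for p in lo) / len(lo)
             + w_hi * sum((p - m_hi) ** 2 for p in hi) / len(hi))
    return inter / (intra + 1e-12)                      # higher is better

# A frog 'position' encodes the threshold as a real number, so an
# optimizer such as QSFLA-NSM simply searches t for maximal fitness.
pixels = [10, 12, 11, 13, 200, 205, 198, 202]
best_t = max(range(256), key=lambda t: threshold_fitness(pixels, t))
print(best_t)
```

Real-number encoding of positions (as the abstract notes) lets any continuous optimizer search this fitness landscape directly, without binary decoding.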

  6. Underwater sonar image detection: A combination of non-local spatial information and quantum-inspired shuffled frog leaping algorithm

    PubMed Central

    Liu, Zhipeng

    2017-01-01

This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs with non-local spatial information and seriously degrades denoising performance in sonar images, is solved by introducing a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect objects in sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve searching ability and detection accuracy while further reducing time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets and benchmark functions demonstrate the effectiveness and adaptability of the proposed method. PMID:28542266

  7. Distribution of Particles, Small Molecules and Polymeric Formulation Excipients in the Suprachoroidal Space after Microneedle Injection

    PubMed Central

    Chiang, Bryce; Venugopal, Nitin; Edelhauser, Henry F.; Prausnitz, Mark R.

    2016-01-01

    The purpose of this work was to determine the effect of injection volume, formulation composition, and time on circumferential spread of particles, small molecules and polymeric formulation excipients in the suprachoroidal space (SCS) after microneedle injection into New Zealand White rabbit eyes ex vivo and in vivo. Microneedle injections of 25–150 μL Hank’s Balanced Salt Solution (HBSS) containing 0.2 μm red-fluorescent particles and a model small molecule (fluorescein) were performed in rabbit eyes ex vivo, and visualized via flat mount. Particles with diameters of 0.02 – 2 μm were co-injected into SCS in vivo with fluorescein or a polymeric formulation excipient: fluorescein isothiocyanate (FITC)-labeled Discovisc or FITC-labeled carboxymethyl cellulose (CMC). Fluorescent fundus images were acquired over time to determine area of particle, fluorescein and polymeric formulation excipient spread, as well as their co-localization. We found that fluorescein covered a significantly larger area than co-injected particles when suspended in HBSS, and that this difference was present from 3 min post-injection onwards. We further showed that there was no difference in initial area covered by FITC-Discovisc and particles; the transport time (i.e., the time until the FITC-Discovisc and particle area began dissociating) was 2 d. There was also no difference in initial area covered by FITC-CMC and particles; the transport time in FITC-CMC was 4 d. We also found that particle size (20 nm – 2 μm) had no effect on spreading area when delivered in HBSS or Discovisc. 
We conclude that (i) the area of particle spread in SCS during injection generally increased with increasing injection volume, was unaffected by particle size and was significantly less than the area of fluorescein spread, (ii) particles suspended in low-viscosity HBSS formulation were entrapped in the SCS after injection, whereas fluorescein was not and (iii) particles co-injected with viscous polymeric formulation excipients co-localized near the site of injection in the SCS, continued to co-localize while spreading over larger areas for 2 – 4 days, and then no longer co-localized as the polymeric formulation excipients were cleared within 1 – 3 weeks and the particles remained largely in place. These data suggest that particles encounter greater barriers to flow in SCS compared to molecules and that co-localization of particles and polymeric formulation excipients allows spreading over larger areas of the SCS until the particles and excipients dissociate. PMID:27742547

  8. Distribution of particles, small molecules and polymeric formulation excipients in the suprachoroidal space after microneedle injection.

    PubMed

    Chiang, Bryce; Venugopal, Nitin; Edelhauser, Henry F; Prausnitz, Mark R

    2016-12-01

    The purpose of this work was to determine the effect of injection volume, formulation composition, and time on circumferential spread of particles, small molecules, and polymeric formulation excipients in the suprachoroidal space (SCS) after microneedle injection into New Zealand White rabbit eyes ex vivo and in vivo. Microneedle injections of 25-150 μL Hank's Balanced Salt Solution (HBSS) containing 0.2 μm red-fluorescent particles and a model small molecule (fluorescein) were performed in rabbit eyes ex vivo, and visualized via flat mount. Particles with diameters of 0.02-2 μm were co-injected into SCS in vivo with fluorescein or a polymeric formulation excipient: fluorescein isothiocyanate (FITC)-labeled Discovisc or FITC-labeled carboxymethyl cellulose (CMC). Fluorescent fundus images were acquired over time to determine area of particle, fluorescein, and polymeric formulation excipient spread, as well as their co-localization. We found that fluorescein covered a significantly larger area than co-injected particles when suspended in HBSS, and that this difference was present from 3 min post-injection onwards. We further showed that there was no difference in initial area covered by FITC-Discovisc and particles; the transport time (i.e., the time until the FITC-Discovisc and particle area began dissociating) was 2 d. There was also no difference in initial area covered by FITC-CMC and particles; the transport time in FITC-CMC was 4 d. We also found that particle size (20 nm-2 μm) had no effect on spreading area when delivered in HBSS or Discovisc. 
We conclude that (i) the area of particle spread in SCS during injection generally increased with increasing injection volume, was unaffected by particle size, and was significantly less than the area of fluorescein spread, (ii) particles suspended in low-viscosity HBSS formulation were entrapped in the SCS after injection, whereas fluorescein was not and (iii) particles co-injected with viscous polymeric formulation excipients co-localized near the site of injection in the SCS, continued to co-localize while spreading over larger areas for 2-4 days, and then no longer co-localized as the polymeric formulation excipients were cleared within 1-3 weeks and the particles remained largely in place. These data suggest that particles encounter greater barriers to flow in SCS compared to molecules and that co-localization of particles and polymeric formulation excipients allows spreading over larger areas of the SCS until the particles and excipients dissociate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Hybrid particle-continuum simulations coupling Brownian dynamics and local dynamic density functional theory.

    PubMed

    Qi, Shuanhu; Schmid, Friederike

    2017-11-08

    We present a multiscale hybrid particle-field scheme for the simulation of relaxation and diffusion behavior of soft condensed matter systems. It combines particle-based Brownian dynamics and field-based local dynamics in an adaptive sense such that particles can switch their level of resolution on the fly. The switching of resolution is controlled by a tuning function which can be chosen at will according to the geometry of the system. As an application, the hybrid scheme is used to study the kinetics of interfacial broadening of a polymer blend, and is validated by comparing the results to the predictions from pure Brownian dynamics and pure local dynamics calculations.
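The role of the tuning function in the adaptive resolution switching can be illustrated with a simple sketch. The cosine ramp below is an assumed functional form chosen only for illustration, since the paper states that the function can be chosen at will according to the geometry of the system.

```python
import math

# Sketch of a spatial tuning function lam(x) in [0, 1]: lam = 1 inside a
# particle-resolution region (e.g. near an interface) and lam = 0 in the
# field region, with a smooth cosine ramp between them (assumed form).
def tuning(x, x_lo, x_hi, width):
    """1 inside [x_lo, x_hi], 0 far away, cosine ramp of given width."""
    if x_lo <= x <= x_hi:
        return 1.0
    d = (x_lo - x) if x < x_lo else (x - x_hi)
    if d >= width:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * d / width))

# Particles where lam(x) is above a threshold are treated with Brownian
# dynamics; the rest contribute to the density field and evolve by local
# dynamics, switching resolution on the fly as they cross the ramp.
for x in (0.0, 4.5, 5.0, 6.0):
    print(x, round(tuning(x, 5.0, 7.0, 1.0), 3))
```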

  10. Scaling theory of tunneling diffusion of a heavy particle interacting with phonons

    NASA Astrophysics Data System (ADS)

    Itai, K.

    1988-05-01

The author discusses the motion of a heavy particle in a d-dimensional lattice interacting with phonons through different couplings. The models discussed are characterized by the dimension d and a pair of indices (λ, ν) which specify the momentum dependence of the phonon dispersion (ω ~ k^ν) and of the particle-phonon coupling (~ k^λ). Scaling equations are derived by eliminating the short-time behavior in a renormalization-group scheme using Feynman's path-integral method and the technique developed by Anderson, Yuval, and Hamann for the Kondo problem. The scaling equations show that the particle is localized in the strict sense when (2λ+d+2)/ν < 2 and is not localized when (2λ+d+2)/ν > 2. In the marginal case, i.e., (2λ+d+2)/ν = 2, localization occurs for couplings larger than a critical value. This marginal case shows Ohmic dissipation and is a close analogy to the Caldeira-Leggett model for macroscopic quantum tunneling and the hopping models of Schmid's type. For large enough (2λ+d+2)/ν, the particle is considered practically localized, but the origin of the localization is quite different from that for (2λ+d+2)/ν ≤ 2.

  11. Trust index based fault tolerant multiple event localization algorithm for WSNs.

    PubMed

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates inversely with distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, thereby improving both localization accuracy and fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The experiment results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
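The trust-update rule named by the algorithm (subtract on negative, add on positive) can be sketched as follows. The step sizes and bounds are illustrative assumptions, not values from the paper.

```python
# Minimal form of the TISNAP-style trust update: a node's trust index
# grows when its binary report agrees with the sink's fused decision and
# shrinks when it does not.  Step sizes and bounds are assumed values.
ADD, SUB, T_MIN, T_MAX = 0.05, 0.20, 0.0, 1.0

def update_trust(trust, reported, decided):
    """One reporting cycle: 'reported' is the node's binary alarm,
    'decided' is the sink's fused decision for that neighborhood."""
    if reported == decided:
        return min(T_MAX, trust + ADD)   # add on positive (consistent report)
    return max(T_MIN, trust - SUB)       # subtract on negative (faulty report)

# A node that is right, wrong, right, wrong ends up below its start:
t = 0.5
for rep, dec in [(1, 1), (0, 1), (1, 1), (1, 0)]:
    t = update_trust(t, rep, dec)
print(round(t, 2))
```

Making the subtraction step larger than the addition step (as here) means trust is slow to earn and quick to lose, which is one common way to suppress nodes that are faulty roughly half the time.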

  12. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    PubMed Central

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates inversely with distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, thereby improving both localization accuracy and fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The experiment results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972

  13. Developing a denoising filter for electron microscopy and tomography data in the cloud.

    PubMed

    Starosolski, Zbigniew; Szczepanski, Marek; Wahle, Manuel; Rusu, Mirabela; Wriggers, Willy

    2012-09-01

    The low radiation conditions and the predominantly phase-object image formation of cryo-electron microscopy (cryo-EM) result in extremely high noise levels and low contrast in the recorded micrographs. The process of single particle or tomographic 3D reconstruction does not completely eliminate this noise and is even capable of introducing new sources of noise during alignment or when correcting for instrument parameters. The recently developed Digital Paths Supervised Variance (DPSV) denoising filter uses local variance information to control regional noise in a robust and adaptive manner. The performance of the DPSV filter was evaluated in this review qualitatively and quantitatively using simulated and experimental data from cryo-EM and tomography in two and three dimensions. We also assessed the benefit of filtering experimental reconstructions for visualization purposes and for enhancing the accuracy of feature detection. The DPSV filter eliminates high-frequency noise artifacts (density gaps), which would normally preclude the accurate segmentation of tomography reconstructions or the detection of alpha-helices in single-particle reconstructions. This collaborative software development project was carried out entirely by virtual interactions among the authors using publicly available development and file sharing tools.
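The variance-controlled smoothing idea can be illustrated on a 1-D signal. The sketch below shows only the adaptive-weighting principle; the actual DPSV filter additionally evaluates supervised digital paths in two and three dimensions, which is not reproduced here.

```python
# Local-variance-adaptive smoothing: average over a window, but trust the
# raw value more where local variance is high (likely structure/edges)
# and smooth harder where it is low (likely noise).  This is a simplified
# illustration of the control idea, not the published DPSV algorithm.
def variance_adaptive_smooth(sig, half=2, v_ref=1.0):
    out = []
    for i in range(len(sig)):
        w = sig[max(0, i - half): i + half + 1]
        m = sum(w) / len(w)
        v = sum((s - m) ** 2 for s in w) / len(w)
        alpha = v / (v + v_ref)            # 0 = flat region, 1 = strong edge
        out.append(alpha * sig[i] + (1.0 - alpha) * m)
    return out

# A step edge survives far better than under a plain moving average:
noisy_edge = [0.0] * 5 + [10.0] * 5
print([round(v, 2) for v in variance_adaptive_smooth(noisy_edge)])
```

The reference variance `v_ref` plays the role of the noise-level estimate: windows whose variance is small relative to it are treated as noise and flattened, which is how such filters close high-frequency density gaps without erasing genuine features.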

  14. Gctf: Real-time CTF determination and correction

    PubMed Central

    Zhang, Kai

    2016-01-01

Accurate estimation of the contrast transfer function (CTF) is critical for a near-atomic resolution cryo electron microscopy (cryoEM) reconstruction. Here, a GPU-accelerated computer program, Gctf, for accurate, robust, real-time CTF determination is presented. The main target of Gctf is to maximize the cross-correlation of a simulated CTF with the logarithmic amplitude spectra (LAS) of observed micrographs after background subtraction. Novel approaches in Gctf improve both speed and accuracy. In addition to GPU acceleration (e.g. 10–50×), a fast ‘1-dimensional search plus 2-dimensional refinement (1S2R)’ procedure further speeds up Gctf. Based on the global CTF determination, the local defocus for each particle and for single frames of movies is accurately refined, which improves the CTF parameters of all particles for subsequent image processing. A novel diagnosis method using equiphase averaging (EPA), together with self-consistency verification procedures, has also been implemented in the program for practical use, especially for near-atomic resolution reconstructions. Gctf is an independent program and the outputs can be easily imported into other cryoEM software such as Relion (Scheres, 2012) and Frealign (Grigorieff, 2007). The results from several representative datasets are shown and discussed in this paper. PMID:26592709
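The '1-dimensional search' stage of the 1S2R procedure can be sketched as a defocus grid search maximizing the normalized cross-correlation between a simulated 1-D CTF and an observed radial spectrum. The simplified CTF model below ignores spherical aberration, astigmatism, and amplitude contrast, and all numerical values are assumptions for illustration only.

```python
import math

# Toy version of the 1-D search in Gctf's 1S2R procedure: grid-search
# defocus so that a simulated 1-D CTF best matches a background-
# subtracted radial amplitude spectrum (Cs and astigmatism ignored).
WAVELENGTH = 0.0197  # electron wavelength at 300 kV, in Angstroms

def ctf_1d(defocus, s):
    """Phase-contrast CTF at spatial frequency s (1/Angstrom)."""
    chi = math.pi * WAVELENGTH * defocus * s * s
    return -math.sin(chi)

def search_defocus(spectrum, freqs, lo=5000.0, hi=30000.0, step=100.0):
    """Return the defocus (Angstroms) maximizing normalized correlation."""
    best, best_cc = lo, -float("inf")
    z = lo
    while z <= hi:
        sim = [abs(ctf_1d(z, s)) for s in freqs]
        norm = math.sqrt(sum(a * a for a in sim))
        cc = sum(a * b for a, b in zip(sim, spectrum)) / norm
        if cc > best_cc:
            best, best_cc = z, cc
        z += step
    return best

# Recover the defocus of a synthetic 'observed' spectrum:
freqs = [i * 0.002 for i in range(1, 120)]
obs = [abs(ctf_1d(15000.0, s)) for s in freqs]
z_hat = search_defocus(obs, freqs)
print(z_hat)
```

In the real program this coarse 1-D estimate seeds a 2-D refinement that recovers astigmatism, which is what makes the combined procedure both fast and accurate.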

  15. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
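The core operation behind such a three-dimensional domain decomposition is mapping each atom to the process that owns its spatial cell. The sketch below uses an assumed row-major rank layout and illustrative box/grid sizes, not necessarily BOPfox's actual scheme; a real implementation also adds halo (ghost-atom) exchange for the finite BOP interaction environment.

```python
# Assign each atom to the MPI rank owning the spatial cell containing it,
# for a (Px, Py, Pz) process grid over an orthorhombic box (assumed
# row-major rank numbering; illustrative sizes).
def owner_rank(pos, box, grid):
    """pos: (x, y, z); box: (Lx, Ly, Lz); grid: (Px, Py, Pz)."""
    ix = [min(int(p / (l / n)), n - 1) for p, l, n in zip(pos, box, grid)]
    return (ix[0] * grid[1] + ix[1]) * grid[2] + ix[2]

box = (10.0, 10.0, 10.0)
grid = (2, 2, 2)                               # 8 ranks
print(owner_rank((1.0, 1.0, 1.0), box, grid))  # cell (0, 0, 0)
print(owner_rank((9.0, 9.0, 9.0), box, grid))  # cell (1, 1, 1)
```

Because BOP evaluation is local to a finite environment, each rank only needs its own cell's atoms plus a halo shell of neighbors, which is what makes the parallel scaling reported in the paper possible.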

  16. Adaptive Kalman filter for indoor localization using Bluetooth Low Energy and inertial measurement unit.

    PubMed

    Yoon, Paul K; Zihajehzadeh, Shaghayegh; Bong-Soo Kang; Park, Edward J

    2015-08-01

This paper proposes a novel indoor localization method using Bluetooth Low Energy (BLE) and an inertial measurement unit (IMU). The multipath and non-line-of-sight errors from low-power wireless localization systems commonly result in outliers, affecting the positioning accuracy. We address this problem by adaptively weighting the estimates from the IMU and BLE in our proposed cascaded Kalman filter (KF). The positioning accuracy is further improved with the Rauch-Tung-Striebel smoother. The performance of the proposed algorithm is experimentally compared against that of the standard KF. The results show that the proposed algorithm maintains high accuracy when tracking the sensor's position in the presence of outliers.
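The adaptive weighting idea can be sketched with a scalar Kalman update that inflates the measurement variance whenever the innovation fails an outlier gate. The gate threshold and inflation factor below are illustrative assumptions, not the paper's tuned values.

```python
# Scalar Kalman update with an innovation-based outlier gate: when a BLE
# fix is improbably far from the prediction (multipath/NLOS outlier), its
# variance R is inflated so the IMU-driven prediction dominates.
def adaptive_update(x, P, z, R, gate=3.0, inflate=100.0):
    innov = z - x
    S = P + R
    if innov * innov > gate * gate * S:    # chi-square-style outlier test
        R = R * inflate                    # trust the BLE fix much less
        S = P + R
    K = P / S                              # Kalman gain
    return x + K * innov, (1.0 - K) * P

# Prediction x=10 (e.g. from IMU dead reckoning), then a plausible BLE
# fix followed by a multipath outlier:
x, P = 10.0, 1.0
x, P = adaptive_update(x, P, 10.5, 1.0)    # accepted: moves halfway
x_after_outlier, _ = adaptive_update(x, P, 25.0, 1.0)
print(x, round(x_after_outlier, 3))
```

Without the gate, the 25.0 m outlier would drag the estimate several meters; with it, the state barely moves, which is the behavior the cascaded filter exploits.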

  17. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  18. Local morphologic scale: application to segmenting tumor infiltrating lymphocytes in ovarian cancer TMAs

    NASA Astrophysics Data System (ADS)

    Janowczyk, Andrew; Chandran, Sharat; Feldman, Michael; Madabhushi, Anant

    2011-03-01

    In this paper we present the concept and associated methodological framework for a novel locally adaptive scale notion called local morphological scale (LMS). Broadly speaking, the LMS at every spatial location is defined as the set of spatial locations, with associated morphological descriptors, which characterize the local structure or heterogeneity for the location under consideration. More specifically, the LMS is obtained as the union of all pixels in the polygon obtained by linking the final location of trajectories of particles emanating from the location under consideration, where the path traveled by originating particles is a function of the local gradients and heterogeneity that they encounter along the way. As these particles proceed on their trajectory away from the location under consideration, the velocity of each particle (i.e. do the particles stop, slow down, or simply continue around the object) is modeled using a physics based system. At some time point the particle velocity goes to zero (potentially on account of encountering (a) repeated obstructions, (b) an insurmountable image gradient, or (c) timing out) and comes to a halt. By using a Monte-Carlo sampling technique, LMS is efficiently determined through parallelized computations. LMS is different from previous local scale related formulations in that it is (a) not a locally connected sets of pixels satisfying some pre-defined intensity homogeneity criterion (generalized-scale), nor is it (b) constrained by any prior shape criterion (ball-scale, tensor-scale). Shape descriptors quantifying the morphology of the particle paths are used to define a tensor LMS signature associated with every spatial image location. These features include the number of object collisions per particle, average velocity of a particle, and the length of the individual particle paths. 
These features can be used in conjunction with a supervised classifier to correctly differentiate between two different object classes based on local structural properties. In this paper, we apply LMS to the specific problem of classifying regions of interest in Ovarian Cancer (OCa) histology images as either tumor or stroma. This approach is used to classify lymphocytes as either tumor infiltrating lymphocytes (TILs) or non-TILs; the presence of TILs having been identified as an important prognostic indicator for disease outcome in patients with OCa. We present preliminary results on the tumor/stroma classification of 11,000 randomly selected locations of interest, across 11 images obtained from 6 patient studies. Using a Probabilistic Boosting Tree (PBT), our supervised classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.8341 ± 0.0059 over 5 runs of randomized cross validation. The average LMS computation time at every spatial location for an image patch comprising 2000 pixels with 24 particles at every location was only 18 s.

  19. [An Extraction and Recognition Method of the Distributed Optical Fiber Vibration Signal Based on EMD-AWPP and HOSA-SVM Algorithm].

    PubMed

    Zhang, Yanjun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong

    2016-02-01

Given that traditional signal processing methods cannot effectively distinguish different vibration intrusion signals, a feature extraction and recognition method for vibration information is proposed based on EMD-AWPP and HOSA-SVM, for high-precision signal recognition in distributed fiber optic intrusion detection systems. When dealing with different types of vibration, the method first utilizes an adaptive wavelet processing algorithm based on empirical mode decomposition to reduce the influence of abnormal values in the sensing signal and improve the accuracy of signal feature extraction. Through time-frequency localization, not only is the low-frequency part of the signal decomposed, but the details in the high-frequency part are also better handled. Secondly, it uses the bispectrum and bicoherence spectrum to accurately extract feature vectors that characterize the different types of intrusion vibration. Finally, based on a BPNN reference model, SVM recognition parameters tuned by particle swarm optimization can distinguish the signals of different intrusion vibrations, endowing the identification model with stronger adaptive and self-learning ability and overcoming shortcomings such as the tendency to fall into local optima. The simulation results showed that this new method can effectively extract the feature vector of the sensing information, eliminate the influence of random noise and reduce the effects of outliers for different types of intrusion sources. The predicted category matches the output category, and the vibration identification accuracy can reach above 95%, outperforming the BPNN recognition algorithm and effectively improving the accuracy of the information analysis.

  20. Interacting Bosons in a Double-Well Potential: Localization Regime

    NASA Astrophysics Data System (ADS)

    Rougerie, Nicolas; Spehner, Dominique

    2018-06-01

    We study the ground state of a large bosonic system trapped in a symmetric double-well potential, letting the distance between the two wells increase to infinity with the number of particles. In this context, one should expect an interaction-driven transition between a delocalized state (particles are independent and all live in both wells) and a localized state (particles are correlated, half of them live in each well). We start from the full many-body Schrödinger Hamiltonian in a large-filling situation where the on-site interaction and kinetic energies are comparable. When tunneling is negligible against interaction energy, we prove a localization estimate showing that the particle number fluctuations in each well are strongly suppressed. The modes in which the particles condense are minimizers of nonlinear Schrödinger-type functionals.

  1. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg

    2015-11-01

Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to establish a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.

  2. Efficient Schmidt number scaling in dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Krafnick, Ryan C.; García, Angel E.

    2015-12-01

    Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.

  3. Localization in quantum field theory

    NASA Astrophysics Data System (ADS)

    Balachandran, A. P.

In non-relativistic quantum mechanics, Born’s principle of localization is as follows: For a single particle, if a wave function ψK vanishes outside a spatial region K, it is said to be localized in K. In particular, if a spatial region K′ is disjoint from K, a wave function ψK′ localized in K′ is orthogonal to ψK. Such a principle of localization does not exist compatibly with relativity and causality in quantum field theory (QFT) (Newton and Wigner) or interacting point particles (Currie, Jordan and Sudarshan). It is replaced by symplectic localization of observables as shown by Brunetti, Guido and Longo, Schroer and others. This localization gives a simple derivation of the spin-statistics theorem and the Unruh effect, and shows how to construct quantum fields for anyons and for massless particles with “continuous” spin. This review outlines the basic principles underlying symplectic localization and shows or mentions its deep implications. In particular, it has the potential to affect relativistic quantum information theory and black hole physics.

  4. A review of tephra transport and dispersal models: Evolution, current status, and future perspectives

    NASA Astrophysics Data System (ADS)

    Folch, A.

    2012-08-01

    Tephra transport models try to predict atmospheric dispersion and sedimentation of tephra depending on meteorology, particle properties, and eruption characteristics, defined by eruption column height, mass eruption rate, and vertical distribution of mass. Models are used for different purposes, from operational forecast of volcanic ash clouds to hazard assessment of tephra dispersion and fallout. The size of the erupted particles, a key parameter controlling the dynamics of particle sedimentation in the atmosphere, varies within a wide range. Largest centimetric to millimetric particles fallout at proximal to medial distances from the volcano and sediment by gravitational settling. On the other extreme, smallest micrometric to sub-micrometric particles can be transported at continental or even at global scales and are affected by other deposition and aggregation mechanisms. Different scientific communities had traditionally modeled the dispersion of these two end members. Volcanologists developed families of models suitable for lapilli and coarse ash and aimed at computing fallout deposits and for hazard assessment. In contrast, meteorologists and atmospheric scientists have traditionally used other atmospheric transport models, dealing with finer particles, for tracking motion of volcanic ash clouds and, eventually, for computing airborne ash concentrations. During the last decade, the increasing demand for model accuracy and forecast reliability has pushed on two fronts. First, the original gap between these different families of models has been filled with the emergence of multi-scale and multi-purpose models. Second, new modeling strategies including, for example, ensemble and probabilistic forecast or model data assimilation are being investigated for future implementation in models and or modeling strategies. 
This paper reviews the evolution of tephra transport and dispersal models during the last two decades, presents the status and limitations of current modeling strategies, and discusses some emerging perspectives expected to be implemented at the operational level during the next few years. Improvements in both real-time forecasting and long-term hazard assessment are necessary for loss prevention programs at the local, regional, national, and international levels.

  5. Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, T. N.; Christensen, S.

    2009-12-01

    In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as comparative analyses between simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as “true” values). As expected, for all test cases the application of local grid refinement produced more accurate results than using the globally coarse model. A significant advantage of MODFLOW-LGR was that it allows an increased number of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand, because the difference in geological resolution between the coarse parent model and the refined child model conflicts with the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model was much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results contradict those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes.
For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that applying MODFLOW-LGR requires thorough, case-specific consideration of the choice of correction method. References: Mehl, S. and M. C. Hill (2005). "MODFLOW-2005, the U.S. Geological Survey modular ground-water model: Documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package." U.S. Geological Survey Techniques and Methods 6-A12.

  6. Alzheimer's disease can spare local metacognition despite global anosognosia: revisiting the confidence-accuracy relationship in episodic memory.

    PubMed

    Gallo, David A; Cramer, Stefanie J; Wong, Jessica T; Bennett, David A

    2012-07-01

    Alzheimer's disease (AD) can impair metacognition in addition to more basic cognitive functions like memory. However, while global metacognitive inaccuracies are well documented (i.e., low deficit awareness, or anosognosia), the evidence is mixed regarding the effects of AD on local or task-based metacognitive judgments. Here we investigated local metacognition with respect to the confidence-accuracy relationship in episodic memory (i.e., metamemory). AD and control participants studied pictures of common objects and their verbal labels, and then took forced-choice picture recollection tests using the verbal labels as retrieval cues. We found that item-based confidence judgments discriminated between accurate and inaccurate recollection responses in both groups, implicating relatively spared metamemory in AD. By contrast, there was evidence for global metacognitive deficiencies, as AD participants underestimated the severity of their everyday problems compared to an informant's assessment. Within the AD group, individual differences in global metacognition were related to recollection accuracy, and global metacognition for everyday memory problems was related to task-based metacognitive accuracy. These findings suggest that AD can spare the confidence-accuracy relationship in recollection tasks, and that global and local metacognition measures tap overlapping neuropsychological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. The Accuracy of Perceptions of Education Finance Information: How Well Local Leaders Understand Local Communities

    ERIC Educational Resources Information Center

    De Luca, Barbara M.; Hinshaw, Steven A.; Ziswiler, Korrin

    2013-01-01

    The purpose for this research was to determine the accuracy of the perceptions of school administrators and community leaders regarding education finance information. School administrators and community leaders in this research project included members of three groups: public school administrators, other public school leaders, and leaders in the…

  8. The Improvement of Particle Swarm Optimization: a Case Study of Optimal Operation in Goupitan Reservoir

    NASA Astrophysics Data System (ADS)

    Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui

    2018-02-01

    Because of its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability at a small computational cost.
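    The abstract does not specify the hybrid in detail; as a sketch only, the downhill simplex (Nelder-Mead) refinement that might polish PSO's rough global best could look like the following, where the 2-D quadratic objective and all step constants are illustrative assumptions, not the Goupitan operation model.

```python
def objective(x):
    # Hypothetical stand-in for the reservoir operation cost function.
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def nelder_mead(f, start, step=0.5, iters=100):
    """Minimal 2-D downhill simplex: reflection, expansion, contraction, shrink."""
    pts = [list(start),
           [start[0] + step, start[1]],
           [start[0], start[1] + step]]
    for _ in range(iters):
        pts.sort(key=f)
        best, good, worst = pts
        # Centroid of all vertices except the worst.
        cx = (best[0] + good[0]) / 2.0
        cy = (best[1] + good[1]) / 2.0
        refl = [2 * cx - worst[0], 2 * cy - worst[1]]          # reflection
        if f(refl) < f(best):
            exp = [3 * cx - 2 * worst[0], 3 * cy - 2 * worst[1]]  # expansion
            pts[2] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(good):
            pts[2] = refl
        else:
            contr = [(cx + worst[0]) / 2.0, (cy + worst[1]) / 2.0]  # contraction
            if f(contr) < f(worst):
                pts[2] = contr
            else:                                               # shrink toward best
                pts[1] = [(best[0] + good[0]) / 2.0, (best[1] + good[1]) / 2.0]
                pts[2] = [(best[0] + worst[0]) / 2.0, (best[1] + worst[1]) / 2.0]
    return min(pts, key=f)

# Pretend PSO returned this rough global best; the simplex refines it locally.
rough = [2.7, -0.6]
polished = nelder_mead(objective, rough)
print(polished)  # close to the optimum (3, -1)
```

    The design rationale of such hybrids is that PSO explores globally while the simplex, which needs no gradients, converges quickly once a good basin is found.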

  9. Trilateration-based localization algorithm for ADS-B radar systems

    NASA Astrophysics Data System (ADS)

    Huang, Ming-Shih

    Rapidly increasing growth and demand in various unmanned aerial vehicles (UAVs) have pushed governmental regulation development and numerous technology research advances toward integrating unmanned and manned aircraft into the same civil airspace. Safety of other airspace users is the primary concern; thus, with the introduction of UAVs into the National Airspace System (NAS), a key issue to overcome is the risk of a collision with manned aircraft. The challenge of UAV integration is global. As the automatic dependent surveillance-broadcast (ADS-B) system has gained wide acceptance, additional uses of the broadcast satellite-based information are topics of current interest. One such opportunity includes the augmentation of the communication ADS-B signal with a random bi-phase modulation for concurrent use as a radar signal for detecting other aircraft in the vicinity. This dissertation provides a detailed discussion of the ADS-B radar system, as well as the formulation and analysis of a suitable non-cooperative multi-target tracking method for the ADS-B radar system using radar ranging techniques and particle filter algorithms. In order to deal with specific challenges faced by the ADS-B radar system, several estimation algorithms are studied. Trilateration-based localization algorithms are proposed due to their easy implementation and their ability to work with coherent signal sources. The centroid of the three most closely spaced intersections of constant-range loci is conventionally used as the trilateration estimate without rigorous justification. In this dissertation, we address the quality of trilateration intersections through range scaling factors. A number of well-known triangle centers, including the centroid, incenter, Lemoine point (LP), and Fermat point (FP), are discussed in detail. To the author's best knowledge, the LP had never previously been associated with trilateration techniques.
According to our study, the LP is proposed as the best trilateration estimator thanks to the desirable property that the sum of squared distances from it to the three triangle edges is minimized. It is demonstrated through simulation that the LP outperforms centroid localization without additional computational load. In addition, severe trilateration scenarios such as two-intersection cases are considered in this dissertation, and enhanced trilateration algorithms are proposed. The particle filter (PF) is also discussed, and a simplified resampling mechanism is proposed. In addition, the low-update-rate measurements mandated by the ADS-B system specification are addressed in order to provide acceptable estimation results. A supplementary particle filter (SPF) is proposed that takes advantage of the waiting time before the next measurement is available, improving the estimation convergence rate and accuracy. While the PF suffers from sample impoverishment, especially when the number of particles is not sufficiently large, the SPF allows the particles to redistribute to high-likelihood areas over iterations using the same measurement information, thereby improving the estimation performance.
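    As an illustration of the triangle centers compared above (not code from the dissertation), the centroid, incenter, and Lemoine point can all be computed as barycentric combinations of the three constant-range-loci intersections, with weights 1:1:1, a:b:c, and a^2:b^2:c^2 respectively, where a, b, c are the side lengths opposite vertices A, B, C; the three intersection coordinates below are made up.

```python
import math

def barycentric(A, B, C, wa, wb, wc):
    """Point with barycentric weights wa:wb:wc relative to vertices A, B, C."""
    s = wa + wb + wc
    return ((wa * A[0] + wb * B[0] + wc * C[0]) / s,
            (wa * A[1] + wb * B[1] + wc * C[1]) / s)

def triangle_centers(A, B, C):
    a = math.dist(B, C)   # side length opposite vertex A
    b = math.dist(A, C)   # side length opposite vertex B
    c = math.dist(A, B)   # side length opposite vertex C
    return {
        "centroid": barycentric(A, B, C, 1, 1, 1),
        "incenter": barycentric(A, B, C, a, b, c),
        "lemoine":  barycentric(A, B, C, a * a, b * b, c * c),
    }

# Three hypothetical intersections of constant-range loci:
centers = triangle_centers((0.0, 0.0), (4.0, 0.0), (1.0, 3.0))
print(centers["lemoine"])
```

    For an equilateral triangle all three centers coincide, which makes a convenient sanity check; they separate as the intersection triangle becomes more skewed.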

  10. Wear particles of single-crystal silicon carbide in vacuum

    NASA Technical Reports Server (NTRS)

    Miyoshi, K.; Buckley, D. H.

    1980-01-01

    Sliding friction experiments, conducted in vacuum with a silicon carbide {0001} surface in contact with iron-based binary alloys, are described. Multiangular and spherical wear particles of silicon carbide are observed as a result of multipass sliding. The multiangular particles are produced by primary and secondary cracking of the {0001}, {10-10}, and {11-20} cleavage planes under the Hertzian stress field or local inelastic deformation zone. The spherical particles may be produced by two mechanisms: (1) a penny-shaped fracture along the circular stress trajectories under the local inelastic deformation zone, and (2) attrition of wear particles.

  11. Oscillating microbubbles for selective particle sorting in acoustic microfluidic devices

    NASA Astrophysics Data System (ADS)

    Rogers, Priscilla; Xu, Lin; Neild, Adrian

    2012-05-01

    In this study, acoustic waves were used to excite a microbubble for selective particle trapping and sorting. Excitation of the bubble at its volume resonance, as necessary to drive strong fluid microstreaming, resulted in the particles being either selectively attracted to the bubble or continuing to follow the local microstreamlines. The operating principle exploited two acoustic phenomena acting on the particle suspension: the drag force arising from the acoustic microstreaming and the secondary Bjerknes force, i.e. the attractive radiation force produced between an oscillating bubble and a non-buoyant particle. It was also found that standing wave fields within the fluid chamber could be used to globally align bubbles and particles for local particle sorting by the bubble.

  12. Effective stochastic generator with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Khamehchi, Masoumeh; Jafarpour, Farhad H.

    2017-11-01

    It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.

  13. Design and development of a smart aerial platform for surface hydrological measurements

    NASA Astrophysics Data System (ADS)

    Tauro, F.; Pagano, C.; Porfiri, M.; Grimaldi, S.

    2013-12-01

    Currently available experimental methodologies for surface hydrological monitoring rely on the use of intrusive sensing technologies, which tend to provide local rather than distributed information on the flow physics. In this context, drawbacks deriving from the use of invasive instrumentation are partially alleviated by Large Scale Particle Image Velocimetry (LSPIV). LSPIV is based on the use of cameras mounted on masts along river banks, which capture images of artificial tracers or naturally occurring objects floating on water surfaces. Images are then georeferenced, and the displacement of groups of floating tracers is statistically analyzed to reconstruct flow velocity maps at specific river cross-sections. In this work, we mitigate LSPIV spatial limitations and inaccuracies due to image calibration by designing and developing a smart platform that integrates a digital acquisition system and laser calibration units on board a custom-built quadricopter. The quadricopter is designed to be lightweight, low cost as compared to kits available on the market, highly customizable, and stable, to guarantee minimal vibrations during image acquisition. The onboard digital system includes an encased GoPro Hero 3 camera whose axis is constantly kept orthogonal to the water surface by means of an in-house developed gimbal. The gimbal is connected to the quadricopter through a shock-absorbing damping device which further reduces residual vibrations. Image calibration is performed through laser units mounted at known distances on the quadricopter landing apparatus. The vehicle can be remotely controlled by the open-source Ardupilot microcontroller. Calibration tests and field experiments are conducted in outdoor environments to assess the feasibility of using the smart platform for acquisition of high quality images of natural streams. Captured images are processed by LSPIV algorithms and average flow velocities are compared to independently acquired flow estimates.
Further, videos are presented in which the smart platform captures the motion of environmentally friendly buoyant fluorescent particle tracers floating on the surface of water bodies. Such fluorescent particles are synthesized in-house, and their visibility and accuracy in tracing complex flows have been previously tested in laboratory and outdoor settings. Experimental results demonstrate the potential of the methodology for monitoring difficult-to-access and spatially extended environments. Improved accuracy in flow monitoring is accomplished by minimizing image orthorectification and introducing highly visible particle tracers. Future developments will aim at the autonomy of the vehicle through machine learning procedures for unmanned monitoring in the environment.

  14. Deformation mechanisms of idealised cermets under multi-axial loading

    NASA Astrophysics Data System (ADS)

    Bele, E.; Goel, A.; Pickering, E. G.; Borstnar, G.; Katsamenis, O. L.; Pierron, F.; Danas, K.; Deshpande, V. S.

    2017-05-01

    The response of idealised cermets comprising approximately 60% by volume steel spheres in a Sn/Pb solder matrix is investigated under a range of axisymmetric compressive stress states. Digital volume correlation (DVC) analysis of X-ray micro-computed tomography scans (μ-CT), and the measured macroscopic stress-strain curves of the specimens, revealed two deformation mechanisms. At low triaxialities the deformation is granular in nature, with dilation occurring within shear bands. Under higher imposed hydrostatic pressures, the deformation mechanism transitions to a more homogeneous incompressible mode. However, DVC analyses revealed that under all triaxialities there are regions with local dilatory and compaction responses, with the magnitude of dilation and the number of zones wherein dilation occurs decreasing with increasing triaxiality. Two numerical models are presented in order to clarify these mechanisms: (i) a periodic unit cell model comprising nearly rigid spherical particles in a porous metal matrix and (ii) a discrete element model comprising a large random aggregate of spheres connected by non-linear normal and tangential "springs". The periodic unit cell model captured the measured stress-strain response with reasonable accuracy but under-predicted the observed dilation at the lower triaxialities, because the kinematic constraints imposed by the skeleton of rigid particles were not accurately accounted for in this model. By contrast, the discrete element model captured the kinematics and predicted both the overall levels of dilation and the simultaneous presence of both local compaction and dilatory regions within the specimens. However, the levels of dilation in this model are dependent on the assumed contact law between the spheres. Moreover, since the matrix is not explicitly included in the analysis, this model cannot be used to predict the stress-strain responses.
These analyses have revealed that the complete constitutive response of cermets depends both on the kinematic constraints imposed by the particle aggregate skeleton, and the constraints imposed by the metal matrix filling the interstitial spaces in that skeleton.

  15. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    Particle swarm optimization can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, however, the search efficiency is low and the algorithm easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. Particle swarm optimization is applied to solving least-squares problems: particles are ranked by fitness so that the optimization problem is considered as a whole, and a BP neural network is trained by error back-propagation gradient descent. Each particle updates its velocity and position according to its individual optimum and the global optimum; making particles learn more from the social (global) optimum and less from their individual optima helps them avoid local optima, while the gradient information accelerates the local search ability of the PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and then remains close to it; for the same running time it has faster convergence speed and better search performance, improving in particular the later-stage search efficiency.
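    The update rule paraphrased above is the canonical PSO velocity/position update with a larger social coefficient plus a gradient-descent step; a minimal sketch on a toy 1-D quadratic follows, with all constants (w, c1, c2, learning rate) assumed, since the paper does not list its settings.

```python
import random

def pso_gd(f, grad, n=20, iters=200, w=0.7, c1=1.0, c2=2.0, lr=0.01):
    """PSO with c1 < c2 (more social than cognitive learning) and a
    gradient-descent correction on each position update."""
    random.seed(1)
    xs = [random.uniform(-10, 10) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                      # each particle's best-seen position
    gbest = min(xs, key=f)             # swarm's best-seen position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])    # cognitive term
                     + c2 * r2 * (gbest - xs[i]))      # social term
            xs[i] += vs[i] - lr * grad(xs[i])          # gradient-assisted move
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso_gd(lambda x: (x - 2.0) ** 2, lambda x: 2.0 * (x - 2.0))
print(best)  # near the minimizer 2.0
```

    In neural network training the objective would be the network's squared error and the gradient term would come from back-propagation; the quadratic here only keeps the sketch self-contained.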

  16. RSS Fingerprint Based Indoor Localization Using Sparse Representation with Spatio-Temporal Constraint

    PubMed Central

    Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun

    2016-01-01

    The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
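    As a simplified baseline only (the paper's sparse representation model with spatio-temporal constraints is not reproduced here), RSS fingerprint localization can be sketched as weighted nearest-neighbour matching against a fingerprint database, followed by exponential smoothing as a crude stand-in for the temporal-continuity constraint; all coordinates and RSS values below are made up.

```python
def knn_locate(fingerprints, rss, k=2):
    """fingerprints: list of ((x, y), [rss per access point]); rss: measured vector."""
    def dist(db_rss):
        return sum((a - b) ** 2 for a, b in zip(db_rss, rss)) ** 0.5
    nearest = sorted(fingerprints, key=lambda fp: dist(fp[1]))[:k]
    w = [1.0 / (dist(fp[1]) + 1e-9) for fp in nearest]   # inverse-distance weights
    s = sum(w)
    x = sum(wi * fp[0][0] for wi, fp in zip(w, nearest)) / s
    y = sum(wi * fp[0][1] for wi, fp in zip(w, nearest)) / s
    return (x, y)

def smooth(track, alpha=0.5):
    """Exponential smoothing: a stand-in for the temporal-continuity constraint."""
    out = [track[0]]
    for p in track[1:]:
        out.append((alpha * p[0] + (1 - alpha) * out[-1][0],
                    alpha * p[1] + (1 - alpha) * out[-1][1]))
    return out

# Four fingerprint locations on a 5 m grid, two access points:
db = [((0.0, 0.0), [-40.0, -70.0]), ((5.0, 0.0), [-70.0, -40.0]),
      ((0.0, 5.0), [-55.0, -75.0]), ((5.0, 5.0), [-75.0, -55.0])]
raw = [knn_locate(db, m) for m in ([-45.0, -68.0], [-50.0, -65.0], [-60.0, -55.0])]
print(smooth(raw))
```

    The sparse representation approach replaces the matching step: the measurement is expressed as a sparse combination of fingerprint atoms, with the spatial correlation of the atoms and the temporal continuity of successive measurements entering as constraints on that decomposition.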

  17. Tunneling effects in electromagnetic wave scattering by nonspherical particles: A comparison of the Debye series and physical-geometric optics approximations

    NASA Astrophysics Data System (ADS)

    Bi, Lei; Yang, Ping

    2016-07-01

    The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles.

  18. Dense colloidal mixtures in an external sinusoidal potential

    NASA Astrophysics Data System (ADS)

    Capellmann, R. F.; Khisameeva, A.; Platten, F.; Egelhaaf, S. U.

    2018-03-01

    Concentrated binary colloidal mixtures containing particles with a size ratio 1:2.4 were exposed to a periodic potential that was realized using a light field, namely, two crossed laser beams creating a fringe pattern. The arrangement of the particles was recorded using optical microscopy and characterized in terms of the pair distribution function along the minima, the occupation probability perpendicular to the minima, the angular bond distribution, and the average potential energy per particle. The particle arrangement was investigated in dependence of the importance of particle-potential and particle-particle interactions by changing the potential amplitude and particle concentration, respectively. An increase in the potential amplitude leads to a stronger localization, especially of the large particles, but also results in an increasing fraction of small particles being located closer to the potential maxima, which also occurs upon increasing the particle density. Furthermore, increasing the potential amplitude induces a local demixing of the two particle species, whereas an increase in the total packing fraction favors a more homogeneous arrangement.

  19. Effect of modulation of the particle size distributions in the direct solid analysis by total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.

    2018-02-01

    The main goal of this work was to investigate, in a systematic way, the influence of the controlled modulation of the particle size distribution of a representative solid sample on the most relevant analytical parameters of the quantitative Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) method. In particular, accuracy, uncertainty, linearity, and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. In all cases strong correlations were found. The main conclusion of this work can be summarized as follows: modulating the particles toward smaller average sizes, together with minimizing the width of the particle size distributions, produces a strong increase in accuracy and a minimization of uncertainties and detection limits for the DSA-TXRF methodology. These achievements enable the future use of the DSA-TXRF analytical methodology for the development of ISO norms and standardized protocols for the direct analysis of solids by means of TXRF.

  20. Computationally efficient approach for solving time dependent diffusion equation with discrete temporal convolution applied to granular particles of battery electrodes

    NASA Astrophysics Data System (ADS)

    Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž

    2015-03-01

    The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
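    The convolution structure of the DTC method can be sketched as follows, where a toy first-order step response h(t) = 1 - exp(-t/tau) stands in for the paper's analytical solution of the spherical diffusion problem, so only the superposition of flux increments (not the battery physics) is illustrated; the flux history and tau are arbitrary.

```python
import math

def step_response(t, tau=10.0):
    """Analytical response to a unit step flux applied at t = 0 (toy model)."""
    return 0.0 if t < 0 else 1.0 - math.exp(-t / tau)

def dtc(flux, dt):
    """Discrete temporal convolution: flux[k] is the piecewise-constant
    exchange flux over step k, which need not be known a priori."""
    out = []
    for i in range(len(flux)):
        t = (i + 1) * dt
        c = 0.0
        prev = 0.0
        for k, j in enumerate(flux):       # superpose step-flux increments
            dj = j - prev                  # change in flux at t = k * dt
            prev = j
            c += dj * step_response(t - k * dt)
        out.append(c)
    return out

print(dtc([1.0, 1.0, 0.0, 0.0], dt=1.0))
```

    Because each step response is analytical, no spatial grid inside the granule has to be advanced in time, which is the source of the speed-up over finite volume/difference schemes reported in the abstract.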

  1. Improving the accuracy of sediment-associated constituent concentrations in whole storm water samples by wet-sieving

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.; Bowman, G.

    2007-01-01

    Sand-sized particles (>63 µm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision both during sample splitting and laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous samples (water suspensions of particles less than 63 µm) were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample rather than an aliquot. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.

  2. The Mini-SPT (Space Particle Telescope) for dual use: Precision flux measurement of low energy proton electron and heavy ion with tracking capability and A compact, low-cost realtime local radiation hazard/alarm detector to be used on board a satellite

    NASA Astrophysics Data System (ADS)

    Alpat, Behcet; Ergin, Tulun; Kalemci, Emrah

    2016-07-01

    The Mini-SPT project is the first, and most important, step towards the ambitious goal of creating a low-cost, compact, radiation hardened and high performance space particle telescope that can be mounted, in the near future, as a standard particle detector on any satellite. Mini-SPT will be capable of providing high quality physics data on the local space environment. In particular, high precision flux measurement and tracking of low energy protons and electrons on different orbits with the same instrumentation is of paramount importance for studies such as geomagnetically trapped fluxes and space weather dynamics, dark matter search, low energy proton anisotropy and its effects on ICs, as well as solar proton studies. In addition, it will provide real-time "differentiable warnings" about the local space radiation hazard to other electronics systems on board the hosting satellite, including different criticality levels and alarm signals to activate mitigation techniques whenever this is strictly necessary to protect them from temporary/permanent failures. A real-time warning system will help satellite subsystems to save a significant amount of power and memory with respect to other conventional techniques where the "mitigation" solutions are required to be active during the entire mission life. The Mini-SPT will combine the use of technologies developed in cutting-edge high energy physics experiments (including technology from the CMS experiment at CERN) and the development of new charged particle detecting systems for their use for the first time in space. The essential objective of the Mini-SPT is the production of high quality data with good time, position, and energy resolution, using SiPM (Silicon Photomultiplier) technology for TOF and energy measurements for the first time in space.
The mini-SPT will consist of three main sub-units: (a) a tracking and dE/dx measuring sub-detector based on silicon pixel detectors (SPDs) coupled to the rad-hard ROC-DIG (Read-Out Chip, Digital version) chip, developed and bump-bonded to the high-accuracy, radiation-hardened barrel pixel detector of the CMS (Compact Muon Solenoid) experiment at the LHC (Large Hadron Collider) at CERN, Geneva; (b) a calorimeter (CCAL) system consisting of a scintillating crystal optically coupled to an array of silicon photomultipliers (SiPMs) to read out the photons created in the crystal by impinging charged particles; and (c) the TOF and associated trigger, consisting basically of two small (~2 cm diameter) plastic scintillator layers. The challenge is to develop a high-performing scientific payload that fits in a 6U CubeSat format with very good separation of electrons, protons, and heavier particles, as well as direct energy spectrum measurement for protons up to almost 1 GeV and for electrons up to a few tens of MeV. The angular acceptance of the full mini-SPT payload is 6-5 degrees. If only the tracking elements (SPDs) are considered, the opening angle increases to 15 degrees.

  3. Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures

    NASA Astrophysics Data System (ADS)

    Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar

    2017-02-01

    A common method to assess the performance of (super-resolution) microscopes is to use the localization precision of emitters as an estimate of the achieved resolution. Naturally, this is widely used in super-resolution methods based on single-molecule stochastic switching. The concept suffers from the fact that it is hard to calibrate measurements against a real sample (a phantom), because the true absolute positions of emitters are almost always unknown. For this reason, resolution estimates in an image are potentially biased, since one is blind to the true position accuracy, i.e. the deviation of the position measurement from the true position. We have solved this issue by imaging nanorods fabricated with DNA origami. The nanorods used are designed to have emitters attached at each end at a well-defined and highly conserved distance. Such structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super-resolution microscopy methods, STED and STORM.
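    The distinction the authors draw between precision and accuracy can be sketched numerically. In the toy example below (all values invented), repeated localizations of two emitters at known ground-truth positions, as on a DNA-origami nanorod, carry both a random spread and a fixed systematic offset; the spread alone yields the precision, while only the known ground truth exposes the accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two emitters on a nanorod at known ground-truth positions (values invented)
true_positions = np.array([0.0, 70.0])   # nm
n_localizations = 1000
sigma_loc = 10.0    # random localization spread, nm
bias = 3.0          # systematic offset, nm (e.g. linker length or drift)

# Repeated localizations: scattered by sigma_loc, shifted by the fixed bias
locs = true_positions + bias + rng.normal(0.0, sigma_loc, (n_localizations, 2))

# Precision: spread of repeated localizations (blind to the bias)
precision = float(locs.std(axis=0).mean())

# Accuracy: deviation of the mean localization from the known true positions,
# measurable only because the phantom fixes the ground truth
accuracy = float(np.abs(locs.mean(axis=0) - true_positions).mean())

print(f"precision = {precision:.1f} nm, accuracy (bias) = {accuracy:.1f} nm")
```

    The precision comes out near the simulated 10 nm spread, while the accuracy reveals the roughly 3 nm bias that precision alone cannot see.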

  4. The accuracy of tomographic particle image velocimetry for measurements of a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio

    2011-04-01

    To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic-image-based simulations and practical experiments are performed on a high-Reynolds-number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined, using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a 10-fold increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements, and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume, and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduce the effective spatial resolution. A thinner volume is shown to provide higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and stereo PIV.
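    The MART technique at the heart of tomographic reconstruction is a multiplicative, row-by-row update of the voxel intensity field. A minimal sketch on a synthetic 1-D voxel field follows; the random weighting matrix stands in for the real camera line-of-sight weights, and all sizes and parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n_vox, n_pix = 20, 40
# Sparse "particle" field: a few bright voxels over a small positive background
E_true = np.full(n_vox, 0.01)
E_true[rng.choice(n_vox, 3, replace=False)] = 1.0

# Random nonnegative weighting matrix standing in for camera line-of-sight weights
W = rng.random((n_pix, n_vox))
I = W @ E_true   # noise-free synthetic pixel intensities

# MART: multiplicative update, one pixel (one equation) at a time
E = np.ones(n_vox)   # positive initial guess
mu = 0.5             # relaxation factor
for _ in range(50):
    for i in range(n_pix):
        proj = W[i] @ E
        E *= (I[i] / proj) ** (mu * W[i])

residual = float(np.linalg.norm(W @ E - I) / np.linalg.norm(I))
print(f"relative projection residual: {residual:.2e}")
```

    With a consistent, noise-free system the projection residual drops steadily with each sweep; in practice, ghost particles arise because many different sparse fields can reproduce the same finite set of camera projections.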

  5. Potential accuracy of methods of laser Doppler anemometry in the single-particle scattering mode

    NASA Astrophysics Data System (ADS)

    Sobolev, V. S.; Kashcheeva, G. A.

    2017-05-01

    The potential accuracy of laser Doppler anemometry methods is determined for the single-particle scattering mode, where the only disturbing factor is the shot noise generated by the optical signal itself. The problem is solved by means of computer simulations with the maximum likelihood method. The initial parameters of the simulations are chosen to be the number of real or virtual interference fringes in the measurement volume of the anemometer, the signal discretization frequency, and some typical values of the signal-to-shot-noise ratio. The parameters to be estimated are the Doppler frequency, as the basic parameter carrying information about the flow velocity; the signal amplitude, containing information about the size and concentration of scattering particles; and the instant when a particle arrives at the center of the measurement volume, which is needed for reconstructing the examined flow velocity as a function of time. The estimates obtained in this study show that shot noise has a minor effect (0.004-0.04%) on the frequency determination accuracy over the entire range of chosen initial parameters. For the signal amplitude and the arrival instant, the errors induced by shot noise lie in the interval 0.2-3.5%; if the number of interference fringes is sufficiently large (more than 20), the errors do not exceed 0.2% regardless of the shot noise level.
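    The simulation setup can be sketched as follows. For brevity, a bin-limited FFT-peak estimate stands in for the paper's maximum-likelihood fit, so its error is dominated by the frequency-bin width rather than by shot noise; the burst parameters below are assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 100e6     # sampling rate, Hz (assumed)
f_dop = 5e6    # true Doppler frequency, Hz (assumed)
n = 2048
t = np.arange(n) / fs
t0, tau = t[n // 2], 2e-6   # burst center and width (sets the fringe count)

# Ideal LDV burst: Gaussian pedestal modulated by the Doppler fringe pattern
envelope = np.exp(-((t - t0) ** 2) / (2 * tau ** 2))
signal = 200.0 * envelope * (1.0 + np.cos(2 * np.pi * f_dop * t))

# Shot noise: photon counts are Poisson-distributed around the ideal signal
counts = rng.poisson(signal).astype(float)

# Estimate the frequency from the spectral peak of the mean-removed burst
spec = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[freqs < 1e6] = 0.0   # suppress the low-frequency pedestal
f_est = float(freqs[np.argmax(spec)])

err_pct = abs(f_est - f_dop) / f_dop * 100
print(f"estimated {f_est / 1e6:.3f} MHz, error {err_pct:.3f} %")
```

    A full maximum-likelihood fit of frequency, amplitude, and arrival time, as used in the paper, removes the bin-quantization error that limits this simple spectral-peak estimate.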

  6. A technique to measure the size of particles in laser Doppler velocimetry applications

    NASA Technical Reports Server (NTRS)

    Hess, C. F.

    1985-01-01

    A method to measure the size of particles in laser Doppler velocimeter (LDV) applications is discussed. Since in LDV the velocity of the flow is associated with the velocity of particles, to establish how well they follow the flow the present method surrounds the interferometric probe volume with a larger beam of different polarization or wavelength. The particle size is then measured from the absolute intensity scattered from the large beam by particles crossing the fringes. Experiments using polystyrene particles between 1.1 and 3.3 microns and larger glass beads are reported. It is shown that the method has excellent size resolution and that its accuracy is better than 10% for the particle sizes studied.

  7. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper, a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve positioning accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, target localization is formulated as a sparse signal recovery problem, and grids with recovery-vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, each candidate grid is partitioned, and the target position within a grid is iteratively refined using the minimum-residual-error rule and the least-squares technique. When all the candidate target grids have been iteratively partitioned and the measurement matrix updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.

  8. Accuracy of colonoscopy in localizing colonic cancer.

    PubMed

    Stanciu, C; Trifan, Anca; Khder, Saad Alla

    2007-01-01

    It is important to establish the precise localization of colonic cancer preoperatively; while colonoscopy is regarded as the diagnostic gold standard for colorectal cancer, its ability to localize the tumor is less reliable. The aim of this study was to define the accuracy of colonoscopy in identifying the location of colonic cancer. All patients who had colorectal cancer diagnosed by colonoscopy at the Institute of Gastroenterology and Hepatology, Iaşi, and who subsequently underwent surgery at three teaching hospitals in Iaşi between January 2001 and December 2005 were included in this study. Endoscopic records and operative notes were carefully reviewed, and tumor localization was recorded. There were 161 patients (89 men, 72 women, aged 61.3 +/- 12.8 years) who underwent conventional surgery for colon cancer detected by colonoscopy during the study period. Twenty-two patients (13.66%) had erroneous colonoscopic localization of the tumors. The overall accuracy of preoperative colonoscopic localization was 87.58%. Colonoscopy is an accurate, reliable method for locating colon cancer, although additional techniques (i.e., endoscopic tattooing) should be performed at least for small lesions.

  9. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations, developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14). The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and to quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective of ranking the parameters from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except coniferous forest, for which it is the second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, the parameter rankings vary by particle size and LUC for a given parameterization. 
Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations and has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently the best suited for incorporation into atmospheric transport models.
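    The two headline metrics can be sketched in a few lines. The normalized mean bias factor below follows its usual symmetric definition, and the Monte Carlo step propagates an assumed 1-sigma imprecision in friction velocity through a deliberately simple linear Vd relation; none of the numbers are from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def nmbf(model, obs):
    """Normalized mean bias factor: a symmetric bias indicator for ranking models."""
    m, o = np.mean(model), np.mean(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m

# Hypothetical observed and modeled deposition velocities (cm/s) for one LUC
obs_vd = np.array([0.12, 0.30, 0.25, 0.18])
mod_vd = np.array([0.10, 0.36, 0.22, 0.20])
bias = nmbf(mod_vd, obs_vd)
print(f"NMBF = {bias:+.3f}")

# Monte Carlo uncertainty: propagate imprecision in friction velocity u*
# through a toy linear deposition relation vd = a * u* (illustrative only,
# not one of the five parameterizations evaluated in the paper)
a = 0.6
u_star = rng.normal(0.35, 0.05, 10_000)   # u* samples with 1-sigma imprecision
vd_samples = a * u_star
uncertainty = float(vd_samples.std() / vd_samples.mean())
print(f"normalized uncertainty in modeled Vd = {uncertainty:.1%}")
```

    The same machinery extends directly to perturbing several input parameters at once, which is what the paper's Monte Carlo and Sobol' analyses do.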

  10. Local Interactions of Hydrometeors by Diffusion in Mixed-Phase Clouds

    NASA Astrophysics Data System (ADS)

    Baumgartner, Manuel; Spichtinger, Peter

    2017-04-01

    Mixed-phase clouds, containing both ice particles and liquid droplets, are important for the Earth-atmosphere system. They modulate the radiation budget through a combination of the albedo effect and the greenhouse effect. In contrast to liquid water clouds, the radiative impact of clouds containing ice particles is still uncertain. Scattering and absorption depend strongly on the microphysical properties of ice crystals, e.g. size and shape. In addition, most precipitation on Earth forms via the ice phase. Thus, a better understanding of ice processes, as well as their representation in models, is required. A key process determining the shape and size of ice crystals is diffusional growth. Diffusion processes in mixed-phase clouds are highly uncertain; in addition, they are usually highly simplified in cloud models, especially in bulk microphysics parameterizations. The direct interaction between cloud droplets and ice particles due to spatial inhomogeneities is ignored; the particles can only interact via their environmental conditions. Local effects, such as the supply of supersaturation by clusters of droplets around ice particles, are usually not represented, although they form the physical basis of the Wegener-Bergeron-Findeisen process. We present direct numerical simulations of the interaction of single ice particles and droplets, in particular their local competition for the available water vapor. In addition, we show an approach to parameterizing local interactions by diffusion. The suggested parameterization uses local steady-state solutions of the diffusion equation for water vapor for an ice particle as well as for a droplet; the individual solutions are coupled to obtain the desired interaction. We show results of the scheme as implemented in a parcel model.
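    The steady-state diffusional growth underlying the Wegener-Bergeron-Findeisen process can be sketched with textbook formulas (Magnus saturation-pressure approximations and the spherical-capacitance growth law); the particle sizes, temperature, and diffusivity are assumptions, and the local coupling the paper parameterizes is omitted:

```python
import numpy as np

R_v = 461.5    # J kg^-1 K^-1, gas constant of water vapor
D_v = 2.2e-5   # m^2 s^-1, vapor diffusivity (assumed constant)

def e_sat_water(t_c):
    """Saturation vapor pressure over water, Pa (Magnus approximation)."""
    return 611.2 * np.exp(17.62 * t_c / (243.12 + t_c))

def e_sat_ice(t_c):
    """Saturation vapor pressure over ice, Pa (Magnus approximation)."""
    return 611.2 * np.exp(22.46 * t_c / (272.62 + t_c))

def growth_rate(radius, e_ambient, e_surface, T):
    """Steady-state diffusional mass growth rate dm/dt = 4*pi*r*D*(rho_inf - rho_s)
    (capacitance of a sphere; ventilation and kinetic effects neglected)."""
    rho_inf, rho_s = e_ambient / (R_v * T), e_surface / (R_v * T)
    return 4 * np.pi * radius * D_v * (rho_inf - rho_s)

t_c, T = -10.0, 263.15
# Ambient vapor pressure between ice and water saturation: the WBF regime
e_amb = 0.5 * (e_sat_ice(t_c) + e_sat_water(t_c))

dmdt_ice = growth_rate(50e-6, e_amb, e_sat_ice(t_c), T)     # 50 um ice sphere
dmdt_drop = growth_rate(10e-6, e_amb, e_sat_water(t_c), T)  # 10 um droplet
print(f"ice: {dmdt_ice:+.2e} kg/s, droplet: {dmdt_drop:+.2e} kg/s")
```

    Because the saturation vapor pressure over ice is lower than over water, any ambient vapor pressure between the two makes the ice particle grow at the droplet's expense; the paper's contribution is coupling such solutions locally rather than through a single shared environment.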

  11. Experimental test of quantum nonlocality in three-photon Greenberger-Horne-Zeilinger entanglement

    PubMed

    Pan; Bouwmeester; Daniell; Weinfurter; Zeilinger

    2000-02-03

    Bell's theorem states that certain statistical correlations predicted by quantum physics for measurements on two-particle systems cannot be understood within a realistic picture based on local properties of each individual particle, even if the two particles are separated by large distances. Einstein, Podolsky and Rosen first recognized the fundamental significance of these quantum correlations (termed 'entanglement' by Schrödinger), and the two-particle quantum predictions have found ever-increasing experimental support. A more striking conflict between quantum mechanical and local realistic predictions, holding for perfect correlations, has since been discovered; but experimental verification has been difficult, as it requires entanglement between at least three particles. Here we report experimental confirmation of this conflict, using our recently developed method to observe three-photon entanglement, or 'Greenberger-Horne-Zeilinger' (GHZ) states. The results of three specific experiments, involving measurements of polarization correlations between three photons, lead to predictions for a fourth experiment in which the quantum physical predictions are mutually contradictory with expectations based on local realism. We find the results of the fourth experiment to be in agreement with the quantum prediction and in striking conflict with local realism.
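    The GHZ contradiction for perfect correlations can be verified directly with a few lines of linear algebra (a standard textbook computation, not the experiment itself):

```python
import numpy as np

# Pauli operators and the three-photon GHZ state (|000> + |111>) / sqrt(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]], complex)
ghz = np.zeros(8, complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expect(a, b, c):
    """Expectation value of the product observable a (x) b (x) c in the GHZ state."""
    op = np.kron(np.kron(a, b), c)
    return float((ghz.conj() @ op @ ghz).real)

xyy, yxy, yyx = expect(X, Y, Y), expect(Y, X, Y), expect(Y, Y, X)
xxx = expect(X, X, X)
print(xyy, yxy, yyx, xxx)
```

    A local realistic model must assign pre-existing values of ±1 to each local x and y measurement, which forces xxx = (xyy)(yxy)(yyx) = -1; quantum mechanics gives +1, a contradiction for perfect correlations with no statistical inequality needed.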

  12. Investigations of interference between electromagnetic transponders and wireless MOSFET dosimeters: A phantom study

    PubMed Central

    Su, Zhong; Zhang, Lisha; Ramakrishnan, V.; Hagan, Michael; Anscher, Mitchell

    2011-01-01

    Purpose: To evaluate both the Calypso System's (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of the dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC) and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. Methods: A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters, arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without the presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without the presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Results: Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. 
However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated no statistically significant difference between the dosimeter readings with and without the presence of transponders. For both orthogonal and parallel configurations, the differences between polynomial-fit and measured doses were within 1.75%. Conclusions: The phantom study indicated that the Calypso System's localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters and that the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by combined use of the two systems. PMID:21776780

  13. Local classifier weighting by quadratic programming.

    PubMed

    Cevikalp, Hakan; Polikar, Robi

    2008-10-01

    It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
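    The local-accuracy idea can be sketched on a toy one-dimensional problem; for brevity, accuracy-proportional weights computed from the k nearest validation samples replace the paper's quadratic-programming solution, and the data and classifiers below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D problem: classifier A is reliable for x < 0, classifier B for x > 0
# (both the data and the classifiers are invented for illustration)
X_val = rng.uniform(-1, 1, 400)
y_val = (X_val > 0.1).astype(int)
pred_a = (X_val > 0.1 + 0.3 * (X_val > 0)).astype(int)   # errs on 0.1 < x < 0.4
pred_b = (X_val > 0.1 - 0.3 * (X_val < 0)).astype(int)   # errs on -0.2 < x < 0

def local_weights(x_query, k=25):
    """Weight each classifier by its accuracy on the k nearest validation
    samples (accuracy-proportional weights; the paper solves a QP instead)."""
    nn = np.argsort(np.abs(X_val - x_query))[:k]
    acc = np.array([(pred_a[nn] == y_val[nn]).mean(),
                    (pred_b[nn] == y_val[nn]).mean()])
    return acc / acc.sum()

w_pos = local_weights(0.25)    # inside A's error band: B should dominate
w_neg = local_weights(-0.10)   # inside B's error band: A should dominate
print("weights at x=+0.25 (A, B):", w_pos)
print("weights at x=-0.10 (A, B):", w_neg)
```

    The quadratic program in the paper plays the same role but returns the optimal nonnegative weights with respect to a convex objective, rather than this simple proportional rule.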

  14. Plasmonic Resonances for Spectroscopy Applications using 3D Finite-Difference Time-Domain Models

    NASA Astrophysics Data System (ADS)

    Ravi, Aruna

    Tuning plasmonic extinction resonances of sub-wavelength scale structures is essential to achieve maximum sensitivity and accuracy. These resonances can be controlled with careful design of nanoparticle geometries and incident wave attributes. In the first part of this dissertation, plasmonically enhanced effects on hexagonal-arrays of metal nanoparticles, metal-hole arrays (micro-mesh), and linear-arrays of metal nanorings are analyzed using three-dimensional Finite-Difference Time-Domain (3D-FDTD) simulations. The effect of particle size, lattice spacing, and lack of monodispersity of a self-assembled, hexagonal array layer of silver (Ag) nanoparticles on the extinction resonance is investigated to help determine optimal design specifications for efficient organic solar power harvesting. The enhancement of transmission resonances using plasmonic thin metal films with arrays of holes which enable recording of scatter-free infrared (IR) transmission spectra of individual particles is also explored. This method is quantitative, non-destructive and helps in better understanding the interaction of light with sub-wavelength particles. Next, plasmonically enhanced effects on linear arrays of gold (Au) rings are studied. Simulations employing 3D-FDTD can be used to determine the set of geometrical parameters to attain localized surface plasmon resonance (LSPR). The shifts in resonances due to changes in the effective dielectric of the structure are investigated, which is useful in sensing applications. Computational models enrich experimental studies. In the second part of this dissertation, the effect of particle size, shape and orientation on the IR spectra is investigated using 3D-FDTD and Mie-Bruggeman models. This computational analysis is extended to include clusters of particles of mixed composition. 
The prediction of the extinction and absorption spectra of single particles of mixed composition helps in interpreting their physical properties and predicting their chemical composition. The chemical composition of respirable particles is of great interest from health, atmospheric, and environmental perspectives. Different environments may pose different hazards and spectroscopic challenges. Common mineral components of airborne and atmospheric dust samples have strong IR transitions with wavelengths that match particle size, giving rise to interesting lineshape distortions. These models enable the determination of the volume fractions of components in individual particles that are mixtures of many materials, as are the dust particles inhaled into people's lungs.

  15. Accurate Energies and Orbital Description in Semi-Local Kohn-Sham DFT

    NASA Astrophysics Data System (ADS)

    Lindmaa, Alexander; Kuemmel, Stephan; Armiento, Rickard

    2015-03-01

    We present our progress on a scheme in semi-local Kohn-Sham density-functional theory (KS-DFT) for improving the orbital description while retaining the level of accuracy of the usual semi-local exchange-correlation (xc) functionals. DFT is a widely used tool for first-principles calculations of material properties. A given task normally requires a balance of accuracy and computational cost, which is well achieved with semi-local DFT. However, commonly used semi-local xc functionals have important shortcomings, which can often be attributed to features of the corresponding xc potential. One shortcoming is an overly delocalized representation of localized orbitals. Recently, a semi-local GGA-type xc functional was constructed to address these issues; however, it comes at the cost of lower accuracy in the total energy. We discuss the source of this error in terms of a surplus energy contribution in the functional that needs to be accounted for, and offer a remedy that formally stays within KS-DFT and does not significantly increase the computational effort. The end result is a scheme that combines accurate total energies (e.g., relaxed geometries) with an improved orbital description (e.g., improved band structure).

  16. Developing Local Oral Reading Fluency Cut Scores for Predicting High-Stakes Test Performance

    ERIC Educational Resources Information Center

    Grapin, Sally L.; Kranzler, John H.; Waldron, Nancy; Joyce-Beaulieu, Diana; Algina, James

    2017-01-01

    This study evaluated the classification accuracy of a second grade oral reading fluency curriculum-based measure (R-CBM) in predicting third grade state test performance. It also compared the long-term classification accuracy of local and publisher-recommended R-CBM cut scores. Participants were 266 students who were divided into a calibration…

  17. The local strength of individual alumina particles

    NASA Astrophysics Data System (ADS)

    Pejchal, Václav; Fornabaio, Marta; Žagar, Goran; Mortensen, Andreas

    2017-12-01

    We implement the C-shaped sample test method and micro-cantilever beam testing to measure the local strength of microscopic, low-aspect-ratio ceramic particles, namely high-purity vapor grown α-alumina Sumicorundum® particles 15-30 μm in diameter, known to be attractive reinforcing particles for aluminum. Individual particles are shaped by focused ion beam micromachining so as to probe in tension a portion of the particle surface that is left unaffected by ion-milling. Mechanical testing of C-shaped specimens is done ex-situ using a nanoindentation apparatus, and in the SEM using an in-situ nanomechanical testing system for micro-cantilever beams. The strength is evaluated for each individual specimen using bespoke finite element simulation. Results show that, provided the particle surface is free of readily observable defects such as pores, twins or grain boundaries and their associated grooves, the particles can achieve local strength values that approach those of high-perfection single-crystal alumina whiskers, on the order of 10 GPa, outperforming high-strength nanocrystalline alumina fibers and nano-thick alumina platelets used in bio-inspired composites. It is also shown that by far the most harmful defects are grain boundaries, leading to the general conclusion that alumina particles must be single-crystalline or alternatively nanocrystalline to fully develop their potential as a strong reinforcing phase in composite materials.

  18. Plasmonic Library Based on Substrate-Supported Gradiential Plasmonic Arrays

    PubMed Central

    2014-01-01

    We present a versatile approach to produce macroscopic, substrate-supported arrays of plasmonic nanoparticles with well-defined interparticle spacing and a continuous particle size gradient. The arrays thus present a “plasmonic library” of locally noncoupling plasmonic particles of different sizes, which can serve as a platform for future combinatorial screening of size effects. The structures were prepared by substrate assembly of gold-core/poly(N-isopropylacrylamide)-shell particles and subsequent post-modification. Coupling of the localized surface plasmon resonance (LSPR) could be avoided since the polymer shell separates the encapsulated gold cores. To produce a particle array with a broad range of well-defined but laterally distinguishable particle sizes, the substrate was dip-coated in a growth solution, which resulted in an overgrowth of the gold cores controlled by the local exposure time. The kinetics was quantitatively analyzed and found to be diffusion rate controlled, allowing for precise tuning of particle size by adjusting the withdrawal speed. We determined the kinetics of the overgrowth process, investigated the LSPRs along the gradient by UV–vis extinction spectroscopy, and compared the spectroscopic results to the predictions from Mie theory, indicating the absence of local interparticle coupling. We finally discuss potential applications of these substrate-supported plasmonic particle libraries and perspectives toward extending the concept from size to composition variation and screening of plasmonic coupling effects. PMID:25137554

  19. Dosimetry of heavy ions by use of CCD detectors

    NASA Technical Reports Server (NTRS)

    Schott, J. U.

    1994-01-01

    The design and atomic composition of charge-coupled devices (CCDs) make them unique for investigations of single energetic-particle events. As a detector system for ionizing particles, they detect single particles with local resolution and near-real-time particle tracking. In combination with their properties as optical sensors, traversals of single particles can be correlated to any objects attached to the light-sensitive surface of the sensor by simply imaging their shadow and subsequently analyzing both the optical image and the particle effects observed in the affected pixels. With biological objects, it is possible for the first time to investigate the effects of single heavy ions in tissue or excised organs of metabolizing (i.e. moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCDs are presented for low-energy protons and heavy ions.

  20. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain, since they appear as non-resolved images on ground-based observation equipment for space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm addresses the strongly nonlinear character of inverting photometric data for satellite attitude, and combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of UKF-based attitude inversion and mitigates the particle degradation and depletion of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability, and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the problem of particle degradation and depletion in PF-based attitude inversion, as well as the unsuitability of the UKF for strongly nonlinear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, even with large initial attitude errors, the method can invert the attitude with few particles and high precision.
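    The particle-filter machinery the method builds on can be sketched with a generic bootstrap filter on a toy nonlinear model (the model, noise levels, and particle count are assumptions; the UKF-designed importance density that distinguishes the UPF is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 1-D nonlinear state-space model (assumed for illustration)
n_steps, n_particles = 50, 500
x_true, xs, zs = 0.0, [], []
for _ in range(n_steps):
    x_true = 0.9 * x_true + 1.0 + rng.normal(0, 0.3)   # process model
    xs.append(x_true)
    zs.append(x_true ** 2 / 20 + rng.normal(0, 0.1))   # nonlinear measurement

particles = rng.normal(0, 2, n_particles)
estimates = []
for z in zs:
    # Propagate through the process model (bootstrap proposal)
    particles = 0.9 * particles + 1.0 + rng.normal(0, 0.3, n_particles)
    # Weight by the measurement likelihood (small floor guards against underflow)
    w = np.exp(-0.5 * ((z - particles ** 2 / 20) / 0.1) ** 2) + 1e-12
    w /= w.sum()
    # Systematic resampling combats particle degeneracy
    cum = np.cumsum(w)
    cum[-1] = 1.0
    u = (rng.random() + np.arange(n_particles)) / n_particles
    particles = particles[np.searchsorted(cum, u)]
    estimates.append(particles.mean())

rmse = float(np.sqrt(np.mean((np.array(estimates) - np.array(xs)) ** 2)))
print(f"tracking RMSE over {n_steps} steps: {rmse:.3f}")
```

    Resampling concentrates particles in high-likelihood regions, which is the basic defense against the degeneracy and depletion the abstract discusses; the UPF improves on this by shaping the proposal with an unscented Kalman step.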

  1. An investigation of the motion of small particles as related to the formulation of zero gravity experiments. [experimental design using laser doppler velocimetry

    NASA Technical Reports Server (NTRS)

    Sastry, V. S.

    1980-01-01

    The nature of Brownian motion and historical theoretical investigations of the phenomenon are reviewed. The feasibility of using a laser anemometer to perform small-particle experiments in an orbiting space laboratory was investigated using latex particles suspended in water in a plastic container. The optical equipment and the particle Doppler analysis processor are described. The values of the standard deviation obtained for the latex-particle motion experiment were large compared to the corresponding velocities; their accuracy was therefore suspect, and no attempt was made to draw meaningful conclusions from the results.

  2. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

    We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide treatment (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with those reported in the literature using other MC codes. PHITS provides reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry.

  3. New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability

    NASA Astrophysics Data System (ADS)

    Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.

    2009-12-01

    Smoke from wildfires is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages efforts of NOAA's partners at the USFS on wildfire emissions information, and with EPA, in coordinating with state/local air quality forecasters. Required operational deployment criteria, in the categories of objective verification, subjective feedback, and production readiness, were demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA's smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on the location of wildfires with weather (North American mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12 km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight the next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, the intense wildfire activity in AK is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier than average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating required prediction accuracy. 
A new satellite observation product was developed for routine near-real-time verification of these predictions. The footprint of the predicted smoke from identified fires is verified against satellite observations of the spatial extent of smoke aerosols (5 km resolution). These observations, based on geostationary aerosol optical depth measurements, provide good time resolution of the horizontal spatial extent of the plumes but do not yield quantitative concentrations of smoke particles at the surface. Predicted surface smoke concentrations are consistent with the limited number of in situ observations of total fine particle mass from all sources; however, they are much higher than those predicted for most CONUS fires. To assess the uncertainty associated with fire emissions estimates, sensitivity analyses are in progress.

  4. Dynein-Dependent Transport of nanos RNA in Drosophila Sensory Neurons Requires Rumpelstiltskin and the Germ Plasm Organizer Oskar

    PubMed Central

    Xu, Xin; Brechbiel, Jillian L.

    2013-01-01

    Intracellular mRNA localization is a conserved mechanism for spatially regulating protein production in polarized cells, such as neurons. The mRNA encoding the translational repressor Nanos (Nos) forms ribonucleoprotein (RNP) particles that are dendritically localized in Drosophila larval class IV dendritic arborization (da) neurons. In nos mutants, class IV da neurons exhibit reduced dendritic branching complexity, which is rescued by transgenic expression of wild-type nos mRNA but not by a localization-compromised nos derivative. While localization is essential for nos function in dendrite morphogenesis, the mechanism underlying the transport of nos RNP particles was unknown. We investigated the mechanism of dendritic nos mRNA localization by analyzing requirements for nos RNP particle motility in class IV da neuron dendrites through live imaging of fluorescently labeled nos mRNA. We show that dynein motor machinery components mediate transport of nos mRNA in proximal dendrites. Two factors, the RNA-binding protein Rumpelstiltskin and the germ plasm protein Oskar, which are required for diffusion/entrapment-mediated localization of nos during oogenesis, also function in da neurons for formation and transport of nos RNP particles. Additionally, we show that nos regulates neuronal function, most likely independent of its dendritic localization and function in morphogenesis. Our results reveal adaptability of localization factors for regulation of a target transcript in different cellular contexts. PMID:24027279

  5. Dynein-dependent transport of nanos RNA in Drosophila sensory neurons requires Rumpelstiltskin and the germ plasm organizer Oskar.

    PubMed

    Xu, Xin; Brechbiel, Jillian L; Gavis, Elizabeth R

    2013-09-11

    Intracellular mRNA localization is a conserved mechanism for spatially regulating protein production in polarized cells, such as neurons. The mRNA encoding the translational repressor Nanos (Nos) forms ribonucleoprotein (RNP) particles that are dendritically localized in Drosophila larval class IV dendritic arborization (da) neurons. In nos mutants, class IV da neurons exhibit reduced dendritic branching complexity, which is rescued by transgenic expression of wild-type nos mRNA but not by a localization-compromised nos derivative. While localization is essential for nos function in dendrite morphogenesis, the mechanism underlying the transport of nos RNP particles was unknown. We investigated the mechanism of dendritic nos mRNA localization by analyzing requirements for nos RNP particle motility in class IV da neuron dendrites through live imaging of fluorescently labeled nos mRNA. We show that dynein motor machinery components mediate transport of nos mRNA in proximal dendrites. Two factors, the RNA-binding protein Rumpelstiltskin and the germ plasm protein Oskar, which are required for diffusion/entrapment-mediated localization of nos during oogenesis, also function in da neurons for formation and transport of nos RNP particles. Additionally, we show that nos regulates neuronal function, most likely independent of its dendritic localization and function in morphogenesis. Our results reveal adaptability of localization factors for regulation of a target transcript in different cellular contexts.

  6. The dispersion of particles in a separated backward-facing step flow

    NASA Astrophysics Data System (ADS)

    Ruck, B.; Makiola, B.

    1991-05-01

    Flows in technical and natural circuits often involve a particulate phase. To measure the dynamics of suspended, naturally resident or artificially seeded particles in the flow, optical measuring techniques such as laser Doppler anemometry (LDA) can be used advantageously. In this paper the dispersion of particles in a single-sided backward-facing step flow is investigated by LDA. The investigation is of relevance both for two-phase flow problems in separated flows, with the associated particle diameter range of 1-70 μm, and for the accuracy of LDA with tracer particles of different sizes. The latter is of interest for all LDA applications measuring continuous phase properties in which experimental constraints require tracer diameters in the upper micrometer range, e.g., flame-resistant particles for measurements inside reactors, cylinders, etc. For the experiments, a closed-loop wind tunnel with a step expansion was used. Part of this tunnel, the test section, was made of glass. The step had a height H = 25 mm (channel height 25 mm before the step, 50 mm after it, i.e., an expansion ratio of 2). The width of the channel was 500 mm. The length of the glass test section was chosen as 116 step heights. The wind tunnel, driven by a radial fan, allowed flow velocities up to 50 m/s, which is equivalent to Re_H = 10^5. Seeding was performed with particles of well-known size: 1, 15, 30, and 70 μm in diameter. Oil droplets were used as the 1 μm tracers, whereas for the upper micron range starch particles (density 1500 kg/m³) were chosen. Starch particles have a spherical shape and are not soluble in cold water. Particle velocities were measured locally using a conventional 1-D LDA system. The measurements deliver the resultant "flow" field information stemming from the different particle size classes. Thus, the particle behavior in the separated flow field can be resolved. 
The results show that with increasing particle size, the particle velocity field differs increasingly from the flow field of the continuous phase (inferred from the smallest tracers used). The velocity fluctuations successively decrease with increasing particle diameter. In separation zones, bigger particles have a lower mean velocity than smaller ones. The opposite holds for the streamwise portions of the particle velocity field, where bigger particles show a higher velocity. The measurements give detailed insight into the particle dynamics in separated flow regions. LDA-measured dividing streamlines and lines of zero velocity of different particle classes in the recirculation region have been plotted and compared. In LDA the use of tracer particles in the upper micrometer size range leads to erroneous determinations of continuous phase flow characteristics. It turned out that the dimensions of the measured recirculation zones are reduced with increasing particle diameter. The physical reasons for these findings (relaxation time of particles, Stokes numbers, etc.) are explained in detail.
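    The size-dependent slip reported above is conventionally characterized by the Stokes number St = τ_p/τ_f, the ratio of the particle relaxation time τ_p = ρ_p d²/(18 μ) to a flow time scale. A back-of-the-envelope sketch for the tracer sizes used in the experiment; the air viscosity and the bulk velocity defining τ_f are assumed illustrative values, not figures from the paper.

```python
# Stokes-number estimate for the tracer sizes used in the experiment.
# Assumed values: air dynamic viscosity mu, flow time scale tau_f = H / U
# with step height H = 25 mm and an illustrative bulk velocity U = 10 m/s.
mu = 1.8e-5          # Pa*s, air at room temperature
rho_p = 1500.0       # kg/m^3, starch particles
H, U = 0.025, 10.0
tau_f = H / U        # characteristic flow time scale: 2.5 ms

for d_um in (1, 15, 30, 70):
    d = d_um * 1e-6
    tau_p = rho_p * d**2 / (18.0 * mu)   # Stokes relaxation time
    st = tau_p / tau_f
    print(f"{d_um:3d} um: tau_p = {tau_p:.2e} s, St = {st:.3f}")
```

The 1 μm tracers come out with St << 1 (faithful flow followers), while the 70 μm particles give St of order 10, consistent with the reduced velocity fluctuations and recirculation-zone differences observed.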

  7. Lagrangian particles with mixing. I. Simulating scalar transport

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2009-06-01

    The physical similarity and mathematical equivalence of continuous diffusion and particle random walks form one of the cornerstones of modern physics and the theory of stochastic processes. Randomly walking particles do not need to possess any properties other than location in physical space. However, particles used in many models simulating turbulent transport and turbulent combustion do possess a set of scalar properties, and mixing between particle properties is performed to reflect the dissipative nature of the diffusion processes. We show that continuous scalar transport and diffusion can be accurately specified by means of localized mixing between randomly walking Lagrangian particles with scalar properties, and we assess the errors associated with this scheme. Particles with scalar properties and localized mixing represent an alternative formulation for the process that is selected to represent the continuous diffusion. Simulating diffusion by Lagrangian particles with mixing involves three main competing requirements: minimizing stochastic uncertainty, minimizing bias introduced by numerical diffusion, and preserving the independence of particles. These requirements are analyzed for two limiting cases: mixing between two particles and mixing between a large number of particles. The problem of possible dependences between particles is the most complicated. This problem is analyzed using a coupled chain of equations that has similarities with the Bogoliubov-Born-Green-Kirkwood-Yvon chain in statistical physics. Dependences between particles can be significant in close proximity of the particles, resulting in a reduced rate of mixing. This work develops further the ideas introduced in a previously published letter [Phys. Fluids 19, 031702 (2007)]. Paper I of this work is followed by Paper II [Phys. Fluids 21, 065102 (2009)], where modeling of turbulent reacting flows by Lagrangian particles with localized mixing is specifically considered.
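    The random-walk/diffusion equivalence invoked at the start of the abstract can be checked numerically in a few lines; this sketch (all parameters arbitrary) verifies that independent walkers taking Gaussian steps of variance 2·D·dt reproduce the 1-D diffusion spreading law Var[x(t)] = 2·D·t.

```python
import numpy as np

# N independent walkers, each taking Gaussian steps of variance 2*D*dt,
# should spread at the rate Var[x(t)] = 2*D*t of the 1-D diffusion equation.
rng = np.random.default_rng(1)
D, dt, steps, n = 0.5, 0.01, 200, 50_000
x = np.zeros(n)
for _ in range(steps):
    x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n)

t = steps * dt
print(x.var(), 2.0 * D * t)   # sample variance vs analytic 2*D*t = 2.0
```

Adding scalar properties and a localized mixing rule on top of walkers like these is precisely the extra structure whose error sources the paper analyzes.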

  8. Combined Loadings and Cross-Dimensional Loadings Timeliness of Presentation of Financial Statements of Local Government

    NASA Astrophysics Data System (ADS)

    Muda, I.; Dharsuky, A.; Siregar, H. S.; Sadalia, I.

    2017-03-01

    This study examines the dimensional accuracy patterns of the financial statements of local governments in North Sumatra, comparing a routine pattern of two (2) months after the fiscal year ends with a pattern of at least three (3) months after the fiscal year ends. This type of research is an explanatory survey with quantitative methods. The population and sample are local government officials who prepare local government financial reports. Combined loadings and cross-dimensional loadings analysis is used with the WarpPLS statistical tool. The results showed varying patterns of dimensional accuracy in the financial statements of local governments in North Sumatra.

  9. Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo

    2015-01-01

    Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, such particles can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that these numerical issues are ultimately due to a drop in spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited to massive particles, able to solve for their orbits in a fast and precise way in highly dynamical backgrounds. The new refinement criterion enforces the region around each massive particle to remain at the maximum resolution allowed, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation does not alter the physical evolution of the MBHs, while accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.

  10. Dose- and time-dependent gene expression alterations in prostate and colon cancer cells after in vitro exposure to carbon ion and X-irradiation

    PubMed Central

    Suetens, Annelies; Moreels, Marjan; Quintens, Roel; Soors, Els; Buset, Jasmine; Chiriotti, Sabina; Tabury, Kevin; Gregoire, Vincent; Baatout, Sarah

    2015-01-01

    Hadrontherapy is an advanced form of radiotherapy that uses beams of charged particles (such as protons and carbon ions). Compared with conventional radiotherapy, the main advantages of carbon ion therapy are the precise absorbed dose localization, along with an increased relative biological effectiveness (RBE). This high ballistic accuracy of particle beams deposits the maximal dose to the tumor, while damage to the surrounding healthy tissue is limited. Currently, hadrontherapy is being used for the treatment of specific types of cancer. Previous in vitro studies have shown that, under certain circumstances, exposure to charged particles may inhibit cell motility and migration. In the present study, we investigated the expression of four motility-related genes in prostate (PC3) and colon (Caco-2) cancer cell lines after exposure to different radiation types. Cells were irradiated with various absorbed doses (0, 0.5 and 2 Gy) of accelerated 13C-ions at the GANIL facility (Caen, France) or with X-rays. Clonogenic assays were performed to determine the RBE. RT-qPCR analysis showed dose- and time-dependent changes in the expression of CCDC88A, FN1, MYH9 and ROCK1 in both cell lines. However, whereas in PC3 cells the response to carbon ion irradiation was enhanced compared with X-irradiation, the effect was the opposite in Caco-2 cells, indicating cell-type–specific responses to the different radiation types. PMID:25190155

  11. Measurements of Nascent Soot Using a Cavity Attenuated Phase Shift (CAPS)-based Single Scattering Albedo Monitor

    NASA Astrophysics Data System (ADS)

    Freedman, A.; Onasch, T. B.; Renbaum-Wollf, L.; Lambe, A. T.; Davidovits, P.; Kebabian, P. L.

    2015-12-01

    Accurate, as compared to precise, measurement of aerosol absorption has always posed a significant problem for the particle radiative properties community. Filter-based instruments do not actually measure absorption but rather light transmission through the filter; absorption must be derived from these data using multiple corrections. The potential for matrix-induced effects is also great for organic-laden aerosols. The introduction of true in situ measurement instruments using photoacoustic or photothermal interferometric techniques represents a significant advance in the state of the art. However, measurement artifacts caused by changes in humidity still represent a significant hurdle, as does the lack of a good calibration standard at most measurement wavelengths. And, in the absence of any particle-based absorption standard, there is no way to demonstrate any real level of accuracy. We, along with others, have proposed that under the circumstance of low single scattering albedo (SSA), absorption is best determined by difference using measurements of total extinction and scattering. We discuss a robust, compact, field-deployable instrument (the CAPS PMssa) that simultaneously measures airborne particle light extinction and scattering coefficients, and thus the single scattering albedo, on the same sample volume. The extinction measurement is based on cavity attenuated phase shift (CAPS) techniques as employed in the CAPS PMex particle extinction monitor; scattering is measured using integrating nephelometry by incorporating a Lambertian integrating sphere within the sample cell. The scattering measurement is calibrated using the extinction measurement of non-absorbing particles. For small particles and low SSA, absorption can be measured with an accuracy of 6-8% at absorption levels as low as a few Mm⁻¹. We present new results of the measurement of the mass absorption coefficient (MAC) of soot generated by an inverted methane diffusion flame at 630 nm. 
A value of 6.60 ± 0.2 m² g⁻¹ was determined, where the uncertainty refers to the precision of the measurement. The overall accuracy of the measurement, traceable to the properties of polystyrene latex particles, is estimated to be better than ±10%.
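    The absorption-by-difference idea above fits in a few lines. The coefficients used here are illustrative values, not measurements from the paper, and the error-propagation step assumes independent 1% uncertainties on each channel; it makes explicit why the method works best at low SSA.

```python
import math

# Absorption by difference: with extinction and scattering measured on the
# same sample, absorption is the difference and SSA the ratio (all in Mm^-1).
b_ext = 120.0            # extinction coefficient (illustrative)
b_sca = 30.0             # scattering coefficient (illustrative)
b_abs = b_ext - b_sca    # absorption: 90 Mm^-1
ssa = b_sca / b_ext      # single scattering albedo: 0.25

# Error propagation for independent 1 % channel uncertainties: the relative
# error on the difference grows without bound as SSA approaches 1.
rel = 0.01
err_abs = math.hypot(rel * b_ext, rel * b_sca) / b_abs
print(f"b_abs = {b_abs} Mm^-1, SSA = {ssa:.2f}, rel. error = {err_abs:.1%}")
```

At SSA = 0.25 the difference carries only a ~1.4% relative error; repeating the calculation with SSA near 0.9 would show the error inflating several-fold, which is why the authors restrict the claim to low-SSA aerosol.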

  12. An optical flow-based method for velocity field of fluid flow estimation

    NASA Astrophysics Data System (ADS)

    Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz

    2017-06-01

    The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on computing partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low, because an exact estimation of spatial derivatives is very difficult in the presence of the rapid intensity changes in PIV images caused by particles of small diameter. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis proves that the method is able to estimate a separate vector for each particle with a 5×5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
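    At its core, the Lucas-Kanade step is a linear least-squares fit of the optical-flow constraint Ix·u + Iy·v = -It over a window. A minimal single-window sketch on a synthetic image pair, using plain finite differences rather than the Gaussian-RBF interpolation the paper proposes (the image and shift are fabricated for the test):

```python
import numpy as np

def lucas_kanade(im0, im1):
    # Single-window Lucas-Kanade: solve [Ix Iy][u v]^T = -It in the
    # least-squares sense over all pixels of the window.
    iy, ix = np.gradient(im0.astype(float))   # gradients along rows, cols
    it = im1.astype(float) - im0.astype(float)
    A = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic smooth "particle image" shifted by (1, 0) pixels
x, y = np.meshgrid(np.arange(64), np.arange(64))
im0 = np.exp(-((x - 30.0)**2 + (y - 32.0)**2) / 50.0)
im1 = np.exp(-((x - 31.0)**2 + (y - 32.0)**2) / 50.0)
print(lucas_kanade(im0, im1))   # close to (1.0, 0.0)
```

On sharper intensity profiles (small particles) the finite-difference gradients degrade quickly, which is the problem the RBF interpolation is meant to fix.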

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.

  14. Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

    PubMed Central

    Moccia, Antonio

    2014-01-01

    Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for the current positions and speeds. Since conventional filtering methodologies can lose accuracy due to nonlinearities, advanced techniques such as particle filters can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
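    For constant-velocity own-aircraft and intruder trajectories, the Distance at Closest Point of Approach has a closed form: minimize |Δp + t·Δv| over t ≥ 0. A short sketch; the head-on geometry below is illustrative, not flight-test data from the paper.

```python
import numpy as np

def dcpa(p_own, v_own, p_int, v_int):
    # Distance at Closest Point of Approach for two constant-velocity
    # trajectories: minimize |dp + t*dv| over future times t >= 0.
    dp = np.asarray(p_int, float) - np.asarray(p_own, float)
    dv = np.asarray(v_int, float) - np.asarray(v_own, float)
    denom = dv @ dv
    t = 0.0 if denom == 0.0 else max(0.0, -(dp @ dv) / denom)
    return np.linalg.norm(dp + t * dv), t

# Head-on geometry: intruder 2000 m ahead with a 100 m lateral offset,
# closing at 100 m/s -> DCPA is the 100 m offset, reached after 20 s.
d, t = dcpa([0, 0], [50, 0], [2000, 100], [-50, 0])
print(d, t)
```

A particle filter enters by feeding many sampled state hypotheses through a function like this one, yielding a distribution of DCPA values rather than a single point estimate.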

  15. Continuous time random walk with local particle-particle interaction

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Jiang, Guancheng

    2018-05-01

    The continuous time random walk (CTRW) is often applied to the study of particle motion in disordered media. Yet most such applications do not allow for particle-particle (walker-walker) interaction. In this paper, we consider a CTRW with particle-particle interaction; for simplicity, we restrict the interaction to be local. The generalized Chapman-Kolmogorov equation is modified by introducing a perturbation function that fluctuates around 1, which models the effect of interaction. Subsequently, a time-fractional nonlinear advection-diffusion equation is derived from this walking system. Under the initial condition of particles condensed at the origin and a free-boundary condition, we numerically solve this equation with both attractive and repulsive particle-particle interactions. Moreover, a Monte Carlo simulation is devised to verify the results of the numerical work. The equation and the simulation unanimously predict that this walking system converges to the conventional one in the long-time limit. However, for systems where the free-boundary condition and the long-time limit are not simultaneously satisfied, this convergence does not hold.
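    The link between heavy-tailed waiting times and the time-fractional equation can be seen in a bare CTRW without the paper's interaction term. This sketch, with arbitrary parameters (α = 0.5, unit ±1 steps, Pareto-tailed waits), shows the mean squared displacement growing roughly as t^α instead of linearly in t.

```python
import numpy as np

# Non-interacting CTRW: each walker waits a Pareto-tailed random time with
# tail exponent alpha = 0.5, then takes a unit step left or right. Heavy
# tails make MSD grow as t^alpha (subdiffusion) rather than as t.
rng = np.random.default_rng(2)

def msd_at(T, n=20000, alpha=0.5):
    x = np.zeros(n)            # walker positions
    t = np.zeros(n)            # walker clocks
    active = np.ones(n, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        wait = rng.pareto(alpha, idx.size) + 1.0   # waiting times >= 1
        t[idx] += wait
        stepped = t[idx] <= T                       # renewals within [0, T]
        x[idx[stepped]] += rng.choice([-1.0, 1.0], stepped.sum())
        active[idx[~stepped]] = False
    return np.mean(x ** 2)

print(msd_at(100.0), msd_at(400.0))   # MSD grows roughly as sqrt(T), not T
```

Quadrupling T roughly doubles the MSD (ratio near 4^0.5 = 2); for normal diffusion the ratio would be 4. The paper's perturbation function would enter as a position-dependent bias on the step direction.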

  16. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
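    For reference, the plain global-best PSO that RCDPSO_DM builds on can be written compactly. This sketch deliberately omits the paper's cooperative decomposition, double mutation, and Kalman-filter coupling; the constriction-style constants and the toy sphere objective are standard illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, dim, iters = 30, 2, 200
w, c1, c2 = 0.72, 1.49, 1.49          # inertia and acceleration constants

def f(x):                              # toy objective: sphere function
    return np.sum(x ** 2, axis=-1)

x = rng.uniform(-5, 5, (n, dim))       # particle positions
v = np.zeros((n, dim))                 # particle velocities
pbest, pval = x.copy(), f(x)           # personal bests
g = pbest[pval.argmin()].copy()        # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    val = f(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[pval.argmin()].copy()

print(f(g))   # near-zero after convergence
```

The drawbacks the abstract lists (trapping in local minima, weak global search) are exactly what the added mutation and multi-swarm cooperation mechanisms target; on this unimodal toy objective the plain version already converges.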

  17. Coordinate alignment of combined measurement systems using a modified common points method

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Zhang, P.; Xiao, W.

    2018-03-01

    Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors accumulate and can significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, provide the possibility to reduce the local measuring uncertainty and thereby enhance the global measuring certainty. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method with laser tracker and indoor iGPS systems. Experimental results show that MCPM can significantly improve the alignment accuracy.
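    The common-points alignment that MCPM modifies is, at its core, a least-squares rigid-body fit between the same points measured in two frames. A standard SVD-based (Kabsch) sketch with synthetic data in place of real survey measurements; this is the baseline technique, not the paper's modified method.

```python
import numpy as np

def rigid_align(src, dst):
    # Least-squares rigid transform (Kabsch): find R, t mapping common
    # points in a local frame (src) onto the global frame (dst).
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Synthetic check: recover a known rotation + translation from 4 common points
rng = np.random.default_rng(3)
pts = rng.random((4, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_align(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noise-free common points the transform is recovered exactly; MCPM's contribution is in how measurement errors on the common points are constrained and propagated.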

  18. Particle trapping and manipulation using hollow beam with tunable size generated by thermal nonlinear optical effect

    NASA Astrophysics Data System (ADS)

    He, Bo; Cheng, Xuemei; Zhang, Hui; Chen, Haowei; Zhang, Qian; Ren, Zhaoyu; Ding, Shan; Bai, Jintao

    2018-05-01

    We report micron-sized particle trapping and manipulation using a hollow beam of tunable size, which was generated by cross-phase modulation via the thermal nonlinear optical effect in an ethanol medium. The results demonstrated that the particle can be trapped stably in air for hours and manipulated in millimeter range with micrometer-level accuracy by modulating the size of the hollow beam. The merits of flexibility in tuning the beam size and simplicity in operation give this method great potential for the in situ study of individual particles in air.

  19. A microwave imaging-based 3D localization algorithm for an in-body RF source as in wireless capsule endoscopes.

    PubMed

    Chandra, Rohit; Balasingham, Ilangko

    2015-01-01

    A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localizing an RF source, as in wireless capsule endoscopes, for positioning any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues, which are required for precise localization. A 2D microwave imaging algorithm is used for determination of the dielectric properties. A calibration method is developed to remove the error introduced by applying the 2D imaging algorithm to imaging data of a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and the localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom. The cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization accuracy was within 1.67 cm, showing the capability of the developed method for 3D localization.

  20. RIP-REMOTE INTERACTIVE PARTICLE-TRACER

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.

    1994-01-01

    Remote Interactive Particle-tracing (RIP) is a distributed-graphics program which computes particle traces for computational fluid dynamics (CFD) solution data sets. A particle trace is a line which shows the path a massless particle in a fluid will take; it is a visual image of where the fluid is going. The program is able to compute and display particle traces at a speed of about one trace per second because it runs on two machines concurrently. The data used by the program is contained in two files. The solution file contains data on density, momentum and energy quantities of a flow field at discrete points in three-dimensional space, while the grid file contains the physical coordinates of each of the discrete points. RIP requires two computers. A local graphics workstation interfaces with the user for program control and graphics manipulation, and a remote machine interfaces with the solution data set and performs time-intensive computations. The program utilizes two machines in a distributed mode for two reasons. First, the data to be used by the program is usually generated on the supercomputer. RIP avoids having to convert and transfer the data, eliminating any memory limitations of the local machine. Second, as computing the particle traces can be computationally expensive, RIP utilizes the power of the supercomputer for this task. Although the remote site code was developed on a CRAY, it is possible to port this to any supercomputer class machine with a UNIX-like operating system. Integration of a velocity field from a starting physical location produces the particle trace. The remote machine computes the particle traces using the particle-tracing subroutines from PLOT3D/AMES, a CFD post-processing graphics program available from COSMIC (ARC-12779). These routines use a second-order predictor-corrector method to integrate the velocity field. Then the remote program sends graphics tokens to the local machine via a remote-graphics library. 
The local machine interprets the graphics tokens and draws the particle traces. The program is menu driven. RIP is implemented on the Silicon Graphics IRIS 3000 (local workstation) with an IRIX operating system and on the CRAY2 (remote station) with a UNICOS 1.0 or 2.0 operating system. The IRIS 4D can be used in place of the IRIS 3000. The program is written in C (67%) and FORTRAN 77 (33%) and has an IRIS memory requirement of 4 MB. The remote and local stations must use the same user ID. PLOT3D/AMES unformatted data sets are required for the remote machine. The program was developed in 1988.
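    The second-order predictor-corrector integration that RIP borrows from the PLOT3D/AMES tracing routines can be sketched as follows. This is a Python illustration, not the original C/FORTRAN code: the analytic vortex field and the step size are illustrative stand-ins for the velocity field interpolated from the solution and grid files.

```python
import numpy as np

def velocity(p):
    """Example analytic velocity field (a simple vortex with axial drift);
    the real program interpolates velocities from a PLOT3D solution file."""
    x, y, z = p
    return np.array([-y, x, 0.1])

def trace(seed, dt=0.01, steps=100):
    """Integrate a particle trace with a second-order
    predictor-corrector (Heun) scheme."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        v0 = velocity(p)
        pred = p + dt * v0                     # predictor: explicit Euler step
        v1 = velocity(pred)
        path.append(p + dt * 0.5 * (v0 + v1))  # corrector: trapezoidal average
    return np.array(path)

pts = trace([1.0, 0.0, 0.0])
```

    Because the corrector averages the velocity at the start and predicted end of each step, the scheme is second-order accurate; for the vortex above it holds the trace on its circular streamline far better than plain Euler would.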

  1. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources must be investigated so that they can be realistically quantified and considered. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus assess the accuracy of the space-geodetic techniques. 
Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  2. Linear response approach to active Brownian particles in time-varying activity fields

    NASA Astrophysics Data System (ADS)

    Merlitz, Holger; Vuijk, Hidde D.; Brader, Joseph; Sharma, Abhinav; Sommer, Jens-Uwe

    2018-05-01

    In a theoretical and simulation study, active Brownian particles (ABPs) in three-dimensional bulk systems are exposed to time-varying sinusoidal activity waves that are running through the system. A linear response (Green-Kubo) formalism is applied to derive fully analytical expressions for the torque-free polarization profiles of non-interacting particles. The activity waves induce fluxes that strongly depend on the particle size and may be employed to de-mix mixtures of ABPs or to drive the particles into selected areas of the system. Three-dimensional Langevin dynamics simulations are carried out to verify the accuracy of the linear response formalism, which is shown to work best when the particles are small (i.e., highly Brownian) or operating at low activity levels.
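    For a single non-interacting ABP, the Langevin dynamics the authors use to verify the linear response formalism amounts to an overdamped update of position and orientation. A minimal sketch, with all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def abp_step(r, n, v0, D, Dr, dt):
    """One Euler-Maruyama step of an overdamped active Brownian particle:
    self-propulsion at speed v0 along orientation n, translational
    diffusion D, rotational diffusion Dr."""
    r = r + v0 * n * dt + np.sqrt(2 * D * dt) * rng.standard_normal(3)
    # rotational noise projected tangent to the unit sphere, then renormalized
    xi = rng.standard_normal(3)
    xi -= n * (n @ xi)
    n = n + np.sqrt(2 * Dr * dt) * xi
    return r, n / np.linalg.norm(n)

r, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    r, n = abp_step(r, n, v0=1.0, D=0.1, Dr=1.0, dt=1e-3)
```

    A time- and space-dependent activity v0(x, t), as in the sinusoidal activity waves of the abstract, would simply replace the constant v0 in each step.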

  3. Ab initio approach to the ion stopping power at the plasma-solid interface

    NASA Astrophysics Data System (ADS)

    Bonitz, Michael; Schlünzen, Niclas; Wulff, Lasse; Joost, Jan-Philip; Balzer, Karsten

    2016-10-01

    The energy loss of ions in solids is of key relevance for many applications of plasmas, ranging from plasma technology to fusion. Standard approaches are based on density functional theory or SRIM simulations; however, the applicability range and accuracy of these results are difficult to assess, in particular for low energies. Here we present an independent approach based on ab initio nonequilibrium Green functions theory, which allows one to incorporate electronic correlation effects of the solid. We present the first application of this method to low-temperature plasmas, concentrating on proton and alpha-particle stopping in a graphene layer. In addition to the stopping power, we present time-dependent results for the local electron density, the spectral function and the photoemission spectrum that is directly accessible in optical, UV or x-ray diagnostics. http://www.itap.uni-kiel.de/theo-physik/bonitz/.

  4. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    PubMed Central

    Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing

    2017-01-01

    Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely, the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is superior to the other algorithms. At the same time, the proposed algorithm can be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers achieve higher accuracy and better stability to some extent. PMID:29085425

  5. Fermi-level effects in semiconductor processing: A modeling scheme for atomistic kinetic Monte Carlo simulators

    NASA Astrophysics Data System (ADS)

    Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.

    2005-09-01

    Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
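    The drift of charged particles in an electric field, point (iii) above, can be pictured with a generic rate-biased kinetic Monte Carlo walk. The barrier height, field strength, and the symmetric half-barrier bias rule below are common illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(6)
KT = 0.025  # eV, roughly room temperature

def hop_rate(nu0, e_mig, de):
    """Transition rate for a hop with migration barrier e_mig (eV) and
    electrostatic energy change de; half of de is added to the barrier
    (a common symmetric bias assumption)."""
    return nu0 * np.exp(-(e_mig + 0.5 * de) / KT)

def drift_walk(n_steps=10000, e_field=0.02, a=1.0, q=1.0):
    """1D lattice walk of a charge q in a field e_field (eV per site):
    hops down-field lower the energy and are therefore more likely,
    producing a net drift displacement."""
    r_down = hop_rate(1.0, 0.5, -q * e_field * a)
    r_up = hop_rate(1.0, 0.5, +q * e_field * a)
    p_down = r_down / (r_down + r_up)
    steps = np.where(rng.random(n_steps) < p_down, 1, -1)
    return steps.sum() * a

x = drift_walk()
```

    Because the prefactor and barrier cancel in the ratio, the drift bias depends only on the electrostatic tilt relative to kT, which is why such schemes remain efficient inside a kMC event loop.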

  6. Studies of turbulence models in a computational fluid dynamics model of a blood pump.

    PubMed

    Song, Xinwei; Wood, Houston G; Day, Steven W; Olsen, Don B

    2003-10-01

    Computational fluid dynamics (CFD) is used widely in the design of rotary blood pumps. The choice of turbulence model is not obvious and plays an important role in the accuracy of CFD predictions. TASCflow (ANSYS Inc., Canonsburg, PA, U.S.A.) has been used to perform CFD simulations of blood flow in a centrifugal left ventricular assist device; a k-epsilon model with near-wall functions was used in the initial numerical calculation. To improve the simulation, local grids with a special distribution suited to the k-omega model were used. Iterations were performed to optimize the grid distribution and turbulence modeling and to predict flow performance more accurately with respect to experimental data. A comparison with experimental measurements of the flow field obtained by particle image velocimetry shows that the k-omega model achieves better agreement than the k-epsilon model, especially in the near-wall regions.

  7. Quantum oscillations in the kinetic energy density: Gradient corrections from the Airy gas

    NASA Astrophysics Data System (ADS)

    Lindmaa, Alexander; Mattsson, Ann E.; Armiento, Rickard

    2014-03-01

    We show how one can systematically derive exact quantum corrections to the kinetic energy density (KED) in the Thomas-Fermi (TF) limit of the Airy gas (AG). The resulting expression is of second order in the density variation and we demonstrate how it applies universally to a certain class of model systems in the slowly varying regime, for which the accuracy of the gradient corrections of the extended Thomas-Fermi (ETF) model is limited. In particular we study two kinds of related electronic edges, the Hermite gas (HG) and the Mathieu gas (MG), which are both relevant for discussing periodic systems. We also consider two systems with finite integer particle number, namely non-interacting electrons subject to harmonic confinement as well as the hydrogenic potential. Finally we discuss possible implications of our findings mainly related to the field of functional development of the local kinetic energy contribution.
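    For reference, the conventional second-order gradient expansion of the KED that the ETF model builds on reads (in Hartree atomic units; this is the standard textbook form, quoted here for context rather than taken from the paper):

```latex
\tau^{(2)}[n] \;=\;
\underbrace{\frac{3}{10}\left(3\pi^{2}\right)^{2/3} n^{5/3}(\mathbf r)}_{\tau_{\mathrm{TF}}[n]}
\;+\; \frac{1}{72}\,\frac{|\nabla n(\mathbf r)|^{2}}{n(\mathbf r)}
```

    The abstract's Airy-gas correction replaces the universal 1/72 gradient term with an edge-aware expression, which is where the quantum oscillations enter.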

  8. Enhancing Localized Evaporation through Separated Light Absorbing Centers and Scattering Centers

    PubMed Central

    Zhao, Dengwu; Duan, Haoze; Yu, Shengtao; Zhang, Yao; He, Jiaqing; Quan, Xiaojun; Tao, Peng; Shang, Wen; Wu, Jianbo; Song, Chengyi; Deng, Tao

    2015-01-01

    This report investigates the enhancement of localized evaporation via separated light absorbing particles (plasmonic absorbers) and scattering particles (polystyrene nanoparticles). Evaporation is one of the most important phase-change processes in modern industries. To improve the efficiency of evaporation, one of the most feasible methods is to localize heat at the top water layer rather than heating the bulk water. In this work, a mixture of purely absorptive plasmonic nanostructures, such as gold nanoparticles, and purely scattering particles (polystyrene nanoparticles) is employed to confine the incident light at the top of the solution and convert light to heat. Different concentrations of both the light absorbing centers and the light scattering centers were evaluated, and the evaporation performance can be largely enhanced by balancing absorbing centers against scattering centers. The findings in this study not only provide a new way to improve evaporation efficiency in plasmonic particle-based solutions, but also shed light on the design of new solar-driven localized evaporation systems. PMID:26606898

  9. Local and regional components of aerosol in a heavily trafficked street canyon in central London derived from PMF and cluster analysis of single-particle ATOFMS spectra.

    PubMed

    Giorio, Chiara; Tapparo, Andrea; Dall'Osto, Manuel; Beddows, David C S; Esser-Gietl, Johanna K; Healy, Robert M; Harrison, Roy M

    2015-03-17

    Positive matrix factorization (PMF) has been applied to single particle ATOFMS spectra collected on a six lane heavily trafficked road in central London (Marylebone Road), which well represents an urban street canyon. PMF analysis successfully extracted 11 factors from mass spectra of about 700,000 particles as a complement to information on particle types (from K-means cluster analysis). The factors were associated with specific sources and represent the contribution of different traffic related components (i.e., lubricating oils, fresh elemental carbon, organonitrogen and aromatic compounds), secondary aerosol locally produced (i.e., nitrate, oxidized organic aerosol and oxidized organonitrogen compounds), urban background together with regional transport (aged elemental carbon and ammonium) and fresh sea spray. An important result from this study is the evidence that rapid chemical processes occur in the street canyon with production of secondary particles from road traffic emissions. These locally generated particles, together with aging processes, dramatically affected aerosol composition producing internally mixed particles. These processes may become important with stagnant air conditions and in countries where gasoline vehicles are predominant and need to be considered when quantifying the impact of traffic emissions.
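    PMF approximates the non-negative spectra matrix as a product of factor contributions and factor profiles. A minimal sketch of the idea using unweighted Lee-Seung multiplicative NMF updates (true PMF additionally weights each matrix element by its measurement uncertainty; the random data and rank below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(X, k, iters=500, eps=1e-9):
    """Factor a non-negative matrix X (particles x m/z channels) as
    X ~ G @ F with G (contributions) and F (profiles) non-negative,
    via multiplicative updates that monotonically reduce ||X - GF||."""
    n, m = X.shape
    G = rng.random((n, k))
    F = rng.random((k, m))
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

X = rng.random((40, 12))   # toy stand-in for the ATOFMS spectra matrix
G, F = nmf(X, k=3)
```

    Each row of F then plays the role of a factor mass spectrum (e.g. lubricating oil or nitrate), and each row of G gives that factor's contribution to one particle.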

  10. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method

    PubMed Central

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-01-01

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
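    The IPSO tuning step can be pictured with a plain particle swarm optimizer minimizing a stand-in objective; in the paper the objective would be the LSSVM cross-validation error over its hyperparameters, and the constants below (inertia w, accelerations c1, c2) are conventional illustrative choices rather than the authors' improved variant:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its
    personal best, and the swarm shares a global best g."""
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy objective with a known minimum at (1, 2), standing in for the
# LSSVM validation error over its two hyperparameters
best, err = pso(lambda p: (p[0] - 1)**2 + (p[1] - 2)**2,
                bounds=[(-5, 5), (-5, 5)])
```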

  11. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method.

    PubMed

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-06-08

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds.

  12. Rapid high performance liquid chromatography method development with high prediction accuracy, using 5 cm long narrow-bore columns packed with sub-2 μm particles and Design Space computer modeling.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin

    2009-11-06

    Many different strategies of reversed phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm × 2.1 mm columns packed with sub-2 μm particles and computer simulation (DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported, and an acceptable accuracy of the computer models' predictions is demonstrated. This work illustrates a method development strategy focusing on a 3-5-fold time reduction compared to conventional HPLC method development, and exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8(R1). Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.
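    Retention modeling of this kind typically rests on the linear solvent strength (LSS) relation log10 k = log10 kw - S·φ, fitted from a couple of scouting runs and then used to predict retention at any composition. A minimal isocratic sketch with hypothetical scouting data (the software named in the abstract does considerably more, including gradient and temperature modeling):

```python
import numpy as np

def fit_lss(phi, k):
    """Fit log10 k = log10 kw - S*phi from scouting runs;
    phi is the organic fraction, k the retention factor."""
    slope, logkw = np.polyfit(phi, np.log10(k), 1)
    return -slope, logkw   # S, log10 kw

def predict_tr(phi, S, logkw, t0=0.5):
    """Isocratic retention time t_R = t0 * (1 + k) at composition phi,
    with t0 the column dead time (minutes, illustrative)."""
    k = 10 ** (logkw - S * phi)
    return t0 * (1 + k)

# two hypothetical scouting runs: k = 8.0 at phi = 0.4, k = 2.0 at phi = 0.6
S, logkw = fit_lss(np.array([0.4, 0.6]), np.array([8.0, 2.0]))
tr = predict_tr(0.5, S, logkw)
```

    The two-run fit interpolates cleanly between the scouting compositions; extrapolating far outside them is where prediction accuracy has to be verified, which is essentially what the paper does under ultrahigh-pressure conditions.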

  13. A novel artificial immune clonal selection classification and rule mining with swarm learning model

    NASA Astrophysics Data System (ADS)

    Al-Sheshtawi, Khaled A.; Abdul-Kader, Hatem M.; Elsisi, Ashraf B.

    2013-06-01

    Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the Artificial Immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of Clonal Selection and the speed and self-organisation merits of Particle Swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employed the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm requires less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem with predictive accuracy as one objective. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy and comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed algorithm CS2 with five commonly used CSAs, namely AIRS1, AIRS2, AIRS-Parallel, CLONALG, and CSCA, using eight benchmark datasets. We also compared the classification accuracy of CS2 with five other methods, namely Naïve Bayes, SVM, MLP, CART, and RBF. The results show that the proposed algorithm is comparable to the 10 studied algorithms. As a result, the hybridisation of CSA and PSO draws on the merits of each, compensates for their individual weaknesses, and improves both search quality and speed.

  14. Do all pure entangled states violate Bell's inequalities for correlation functions?

    PubMed

    Zukowski, Marek; Brukner, Caslav; Laskowski, Wiesław; Wieśniak, Marcin

    2002-05-27

    Any pure entangled state of two particles violates a Bell inequality for two-particle correlation functions (Gisin's theorem). We show that there exist pure entangled N>2 qubit states that do not violate any Bell inequality for N particle correlation functions for experiments involving two dichotomic observables per local measuring station. We also find that Mermin-Ardehali-Belinskii-Klyshko inequalities may not always be optimal for refutation of local realistic description.
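    The two-particle case underlying Gisin's theorem is easy to check numerically. For the singlet state, the quantum correlation of spin measurements along unit vectors a and b is E(a, b) = -a·b, and the standard in-plane CHSH settings reach the Tsirelson bound 2√2, violating the local-realistic bound of 2:

```python
import numpy as np

def E(a, b):
    """Two-qubit singlet correlation for measurements along
    unit vectors a and b: E(a, b) = -a . b."""
    return -np.dot(a, b)

def chsh(a, a2, b, b2):
    """CHSH combination of the four correlation functions."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

deg = np.pi / 180
u = lambda t: np.array([np.cos(t), np.sin(t), 0.0])

# optimal settings: a = 0°, a' = 90°, b = 45°, b' = 135°
S = chsh(u(0), u(90 * deg), u(45 * deg), u(135 * deg))
```

    The abstract's point is that this familiar two-particle picture does not extend to all pure N > 2 qubit states when only two dichotomic observables per station are allowed.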

  15. Proceedings of the Tri-Service Conference on Corrosion (1987)

    DTIC Science & Technology

    1987-05-01

    designated S1 and S2 exhibited preferential local attack. The corrosion in these alloys occurs between tungsten particles where matrix alloy precipitated ...surrounded by a matrix alloy of Fe-Ni-W. The EDAX examination of the precipitated matrix alloy between the tungsten particles in sample K1 showed the...the precipitated matrix alloy between the tungsten particles. For alloy S1, the corrosion was observed at preferential local sites. The SEM

  16. Non-locality of non-Abelian anyons

    NASA Astrophysics Data System (ADS)

    Brennen, G. K.; Iblisdir, S.; Pachos, J. K.; Slingerland, J. K.

    2009-10-01

    Entangled states of quantum systems can give rise to measurement correlations of separated observers that cannot be described by local hidden variable theories. Usually, it is assumed that entanglement between particles is generated due to some distance-dependent interaction. Yet anyonic particles in two dimensions have a nontrivial interaction that is purely topological in nature. In other words, it does not depend on the distance between two particles, but rather on their exchange history. The information encoded in anyons is inherently non-local even at the single-subsystem level, making the treatment of anyons unconventional. We describe a protocol to reveal the non-locality of anyons in terms of correlations in the outcomes of measurements in two separated regions. This gives a clear operational measure of non-locality for anyonic states and opens up the possibility to test Bell inequalities in quantum Hall liquids or spin lattices.

  17. Comparison of pelvic phased-array versus endorectal coil magnetic resonance imaging at 3 Tesla for local staging of prostate cancer.

    PubMed

    Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun; Yoo, Eun Sang

    2012-05-01

    Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were no statistically significant differences in specificity, sensitivity or accuracy between the two groups. Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI.
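    The staging metrics reported here come straight from a 2×2 confusion of MRI stage against pathologic stage. A minimal sketch with purely illustrative counts (not the study's data), showing the "high specificity, low sensitivity" pattern the abstract describes:

```python
def staging_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy for a binary staging call
    (e.g. extracapsular extension present/absent on MRI vs pathology)."""
    sens = tp / (tp + fn)                    # detected / truly present
    spec = tn / (tn + fp)                    # cleared / truly absent
    acc = (tp + tn) / (tp + fp + fn + tn)    # overall agreement
    return sens, spec, acc

# hypothetical counts for one coil group
sens, spec, acc = staging_metrics(tp=10, fp=4, fn=12, tn=62)
```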

  18. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

    This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method, along with a procedure for a large GPS network such as the state-level HARN project. The research also developed an efficient and accurate methodology for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations was pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations and a 0.5 ppm average relative accuracy at all stations. A total of 167 GPS stations in and around Iowa were included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model was established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter 10-20 cm level to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
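    The Kalman filtering behind the kinematic geoid model can be illustrated in its simplest scalar form: a slowly varying geoid height modeled as a random walk, observed with noisy measurements along a profile. All noise levels below are illustrative, not the thesis values:

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state (geoid height along a
    profile) with process variance q, observed with noise variance r."""
    x, p, out = x0, p0, []
    for zi in z:
        p += q                  # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (zi - x)       # update toward the measurement
        p *= (1 - k)            # posterior uncertainty shrinks
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
truth = 0.5                      # constant geoid height (meters), illustrative
est = kalman_1d(truth + 0.1 * rng.standard_normal(200))
```

    With q small relative to r, the filter behaves like a smoothly adapting average, which is how centimeter-level estimates emerge from decimeter-level individual observations.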

  19. The mathematical modeling of the experiment on the determination of correlation coefficients in neutron beta-decay

    NASA Astrophysics Data System (ADS)

    Serebrov, A. P.; Zherebtsov, O. M.; Klyushnikov, G. N.

    2018-05-01

    An experiment on the measurement of the ratio λ of the axial coupling constant to the vector one is under development. The main idea of the experiment is to measure the values of A and B in the same setup; an additional measurement of the polarization is not necessary. The accuracy achieved to date in measuring λ is 2 × 10⁻³; in the experiment an accuracy of the order of 10⁻⁴ is expected. Some particular problems of the mathematical modeling of this experiment are considered. The force lines for the given tabular field of a magnetic trap are studied, and the dependences of the longitudinal and transverse field non-uniformity coefficients on the coordinates are examined. A special computational algorithm, based on the motion of a charged particle along a local magnetic force line, is used to calculate the motion time of electrons and protons and to evaluate the total number of electrons colliding with the detector surface. The average values of the cosines of the angles entering the coefficients a, A and B have been estimated.

  20. Comparison of particle tracking algorithms in commercial CFD packages: sedimentation and diffusion.

    PubMed

    Robinson, Risa J; Snyder, Pam; Oldham, Michael J

    2007-05-01

    Computational fluid dynamic modeling software has enabled microdosimetry patterns of inhaled toxins and toxicants to be predicted and visualized, and is being used in inhalation toxicology and risk assessment. These predicted microdosimetry patterns in airway structures are derived from predicted airflow patterns within these airways and from the particle tracking algorithms used in computational fluid dynamics (CFD) software packages. Although these commercial CFD codes have been tested for accuracy under various conditions, they have not been well tested for respiratory flows in general, nor has the accuracy of their particle tracking algorithms been well studied. In this study, three software packages, Fluent Discrete Phase Model (DPM), Fluent Fine Particle Model (FPM), and ANSYS CFX, were evaluated. Sedimentation and diffusion were each isolated in a straight-tube geometry and tested for accuracy. A range of flow rates corresponding to adult low activity (minute ventilation = 10 L/min) through heavy exertion (minute ventilation = 60 L/min) was tested by varying the dimensionless diffusion and sedimentation parameters over the range found in the Weibel symmetric 23-generation lung morphology. Numerical results for fully developed parabolic and uniform (slip) profiles were compared, respectively, to the analytical sedimentation solutions of Pich (1972) and Yu (1977); the Schum and Yeh (1980) equations for sedimentation were also compared. Numerical results for diffusional deposition were compared to the analytical solutions of Ingham (1975) for parabolic and uniform profiles. Significant differences were found among the various CFD software packages and between numerical and analytical solutions. It is therefore prudent to validate CFD predictions against analytical solutions in an idealized geometry before tackling the complex geometries of the respiratory tract.
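    The kind of Lagrangian sedimentation tracking being validated here can be sketched in a 2D toy version: particles settling at a fixed velocity while advected by a parabolic profile in a horizontal channel, with the deposited fraction counted at the floor. This is a stand-in for the tube geometries that have analytical solutions (Pich 1972; Yu 1977); all parameters are illustrative:

```python
import numpy as np

def deposition_fraction(vs=0.01, umax=1.0, L=10.0, H=1.0, n=200, dt=2e-3):
    """Track n particles seeded uniformly across the inlet of a
    horizontal channel (height H, length L): each is advected by the
    parabolic profile u(y) and settles at speed vs; returns the
    fraction deposited on the floor before exiting at x = L."""
    y0 = np.linspace(0.0, H, n, endpoint=False) + H / (2 * n)
    deposited = 0
    for y in y0:
        x = 0.0
        while x < L:
            u = 4.0 * umax * (y / H) * (1.0 - y / H)  # parabolic velocity
            x += u * dt                                # advect downstream
            y -= vs * dt                               # gravitational settling
            if y <= 0.0:                               # reached the floor
                deposited += 1
                break
    return deposited / n

frac = deposition_fraction()
```

    A particle deposits when its settling time is shorter than its transit time, so the low-velocity region near the floor deposits preferentially; comparing such a tracked fraction against the closed-form solution is exactly the style of check the paper recommends before trusting complex airway geometries.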

  1. Effect of Particle Size and Impact Velocity on Collision Behaviors Between Nano-Scale TiN Particles: MD Simulation.

    PubMed

    Yao, Hai-Long; Hu, Xiao-Zhen; Yang, Guan-Jun

    2018-06-01

    Inter-particle bonding formation, which determines the quality of nano-scale ceramic coatings, is influenced by particle collision behaviors during high velocity collision processes. In this study, collision behaviors between nano-scale TiN particles with different diameters were elucidated using molecular dynamics simulations over a range of impact velocities. Results show that nano-scale TiN particles exhibit three states depending on particle size and impact velocity, i.e., bonding, bonding with localized fracturing, and rebounding. These TiN particle states are summarized in a parameter selection map providing an overview of the conditions in terms of particle sizes and velocities. Microstructure results show that localized atomic displacement and partial fracture around the impact region are the main reasons for bonding formation of nano-scale ceramic particles, which differs from the grain refining and amorphization seen for conventional particles. A relationship between the adhesion energy and the rebound energy is established to understand the bonding formation mechanism for nano-scale TiN particle collisions. The results show that this energy relationship depends on the particle sizes and impact velocities, and that nano-scale ceramic particles can be bonded together when the adhesion energy exceeds the rebound energy.

  2. A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization

    PubMed Central

    Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar

    2015-01-01

    Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfactory solutions that consider both accuracy and system complexity have not been found. From the perspective of lightweight mobile devices, both characteristics are extremely important, because processor power and energy availability are limited; an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it offers high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points with device orientation information from the digital compass built into the mobile device, so that extra sensors are unnecessary. Experimental results indicate that the proposed system leads to substantial improvements in computational complexity over the widely used traditional fingerprinting methods, together with better accuracy. PMID:26110413
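    Boosting an ensemble of weighted trees can be sketched with the classic AdaBoost recipe over one-level trees (decision stumps): misclassified samples gain weight, and each stump earns a vote alpha. The toy 1D dataset below is an illustrative stand-in for the paper's fused signal-strength/compass fingerprints:

```python
import numpy as np

def stump_train(X, y, w):
    """Best single-feature threshold classifier under sample weights w
    (labels in {-1, +1}); returns (feature, threshold, polarity, error)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def stump_predict(stump, X):
    j, t, pol, _ = stump
    return np.where(X[:, j] <= t, pol, -pol)

def adaboost(X, y, rounds=10):
    """AdaBoost: reweight samples toward the mistakes of each stump."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        s = stump_train(X, y, w)
        err = max(s[3], 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # vote of this stump
        w *= np.exp(-alpha * y * stump_predict(s, X))
        w /= w.sum()
        ensemble.append((alpha, s))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * stump_predict(s, X) for a, s in ensemble))

# toy 1D data: +1 inside an interval, -1 outside (no single stump fits it)
X = np.arange(10.0).reshape(-1, 1)
y = np.array([-1, -1, -1, 1, 1, 1, 1, -1, -1, -1])
model = adaboost(X, y, rounds=20)
```

    Prediction is just a weighted vote over shallow trees, which is why inference stays cheap enough for the battery-constrained devices the abstract is concerned with.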

  3. Enhanced THz extinction in arrays of resonant semiconductor particles.

    PubMed

    Schaafsma, Martijn C; Georgiou, Giorgos; Rivas, Jaime Gómez

    2015-09-21

    We demonstrate experimentally the enhanced THz extinction by periodic arrays of resonant semiconductor particles. This phenomenon is explained in terms of the radiative coupling of localized resonances with diffractive orders in the plane of the array (Rayleigh anomalies). The experimental results are described by numerical calculations using a coupled dipole model and by Finite-Difference in Time-Domain simulations. An optimum particle size for enhancing the extinction efficiency of the array is found. This optimum is determined by the frequency detuning between the localized resonances in the individual particles and the Rayleigh anomaly. The extinction calculations and measurements are also compared to near-field simulations illustrating the optimum particle size for the enhancement of the near-field.

  4. Duality in Power-Law Localization in Disordered One-Dimensional Systems

    NASA Astrophysics Data System (ADS)

    Deng, X.; Kravtsov, V. E.; Shlyapnikov, G. V.; Santos, L.

    2018-03-01

    The transport of excitations between pinned particles in many physical systems may be mapped to single-particle models with power-law hopping, 1/r^a. For randomly spaced particles, these models present an effective peculiar disorder that leads to surprising localization properties. We show that in one-dimensional systems almost all eigenstates (except for a few states close to the ground state) are power-law localized for any value of a > 0. Moreover, we show that our model is an example of a new universality class of models with power-law hopping, characterized by a duality between systems with long-range hops (a < 1) and short-range hops (a > 1), in which the wave function amplitude falls off algebraically with the same power γ from the localization center.

  5. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly assumed head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
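
    The "linear inverse weight" family examined above can be illustrated with a minimum-norm estimate, j = Gᵀ(GGᵀ + λI)⁻¹v. Below is a toy sketch with 2 sensors and 3 sources; the gain matrix and regularization value are invented for illustration and are far smaller than any realistic head model.

```python
def minimum_norm(G, v, lam):
    """Minimum-norm inverse for a 2-sensor gain matrix G (2 x n_sources):
    j_hat = G^T (G G^T + lam I)^{-1} v, with the 2x2 solve done by Cramer's rule."""
    n_src = len(G[0])
    # M = G G^T + lam I  (2 x 2)
    M = [[sum(G[i][k] * G[j][k] for k in range(n_src)) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    a0 = (v[0] * M[1][1] - M[0][1] * v[1]) / det
    a1 = (M[0][0] * v[1] - v[0] * M[1][0]) / det
    # project the sensor-space solution back to source space
    return [G[0][k] * a0 + G[1][k] * a1 for k in range(n_src)]

G = [[1.0, 0.2, 0.1],   # sensor 0 gains for 3 candidate sources
     [0.1, 0.2, 1.0]]   # sensor 1 gains
v = [1.0, 0.1]          # potentials produced by unit activity at source 0
j_hat = minimum_norm(G, v, lam=0.01)
```

    With denser (and more inferior) sensor coverage, G gains rows and the estimate sharpens, which is the effect the study quantifies.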

  6. Mathematical Analysis of a Coarsening Model with Local Interactions

    NASA Astrophysics Data System (ADS)

    Helmers, Michael; Niethammer, Barbara; Velázquez, Juan J. L.

    2016-10-01

    We consider particles on a one-dimensional lattice whose evolution is governed by nearest-neighbor interactions where particles that have reached size zero are removed from the system. Concentrating on configurations with infinitely many particles, we prove existence of solutions under a reasonable density assumption on the initial data and show that the vanishing of particles and the localized interactions can lead to non-uniqueness. Moreover, we provide a rigorous upper coarsening estimate and discuss generic statistical properties as well as some non-generic behavior of the evolution by means of heuristic arguments and numerical observations.
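
    A toy caricature of such dynamics (not the paper's exact evolution law, which is derived rigorously): the smallest particle repeatedly sheds mass to its lattice neighbors and is removed once it reaches size zero, so the particle count drops while the mean size grows.

```python
def coarsen(sizes, steps, d=0.1):
    """Toy coarsening on a 1-D lattice: each step the smallest particle gives
    mass d to its nearest neighbors; particles reaching size zero are removed."""
    sizes = list(sizes)
    for _ in range(steps):
        if len(sizes) < 2:
            break
        i = min(range(len(sizes)), key=sizes.__getitem__)
        give = min(d, sizes[i])
        sizes[i] -= give
        # split the lost mass between nearest neighbors (one neighbor at the ends)
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(sizes)]
        for j in nbrs:
            sizes[j] += give / len(nbrs)
        if sizes[i] <= 1e-12:
            sizes.pop(i)   # vanished particles leave the lattice
    return sizes

init = [0.5, 1.0, 0.2, 1.2, 0.8, 0.3, 1.1]
final = coarsen(init, steps=200)
```

    Mass is conserved while the number of particles decreases, so the mean particle size coarsens upward over time.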

  7. Measurements of the properties of solar wind plasma relevant to studies of its coronal sources

    NASA Technical Reports Server (NTRS)

    Neugebauer, M.

    1982-01-01

    Interplanetary measurements of the speeds, densities, abundances, and charge states of solar wind ions are diagnostic of conditions in the source region of the solar wind. The absolute values of the mass, momentum, and energy fluxes in the solar wind are not known to an accuracy of 20%. The principal limitations on the absolute accuracies of observations of solar wind protons and alpha particles arise from uncertain instrument calibrations, from the methods used to reduce the data, and from sampling biases. Sampling biases are very important in studies of alpha particles. Instrumental resolution and measurement ambiguities are additional major problems for the observation of ions heavier than helium. Progress in overcoming some of these measurement inadequacies is reviewed.

  8. Phonon-particle coupling effects in odd-even mass differences of semi-magic nuclei

    NASA Astrophysics Data System (ADS)

    Saperstein, E. E.; Baldo, M.; Pankratov, S. S.; Tolokonnikov, S. V.

    2017-11-01

    A method to evaluate the particle-phonon coupling (PC) corrections to the single-particle energies in semi-magic nuclei, based on directly solving the Dyson equation with a PC-corrected mass operator, is used to find the odd-even mass difference between 18 even Pb isotopes and their odd-proton neighbors. The Fayans energy density functional (EDF) DF3-a is used, which gives rather high accuracy of the predictions for these mass differences already on the mean-field level, with the average deviation from the existing experimental data equal to 0.389 MeV. This is only a bit worse than the corresponding value of 0.333 MeV for the Skyrme EDF HFB-17, which belongs to a family of Skyrme EDFs with the highest overall accuracy in describing nuclear masses. Accounting for the PC corrections induced by the low-lying phonons 2_1^+ and 3_1^- significantly diminishes the deviation of the theory from the data, to 0.218 MeV.

  9. Morphologies and elemental compositions of local biomass burning particles at urban and glacier sites in southeastern Tibetan Plateau: Results from an expedition in 2010.

    PubMed

    Hu, Tafeng; Cao, Junji; Zhu, Chongshu; Zhao, Zhuzi; Liu, Suixin; Zhang, Daizhou

    2018-07-01

    Many studies indicate that the atmospheric environment over the southern part of the Tibetan Plateau is influenced by aged biomass burning particles that are transported over long distances from South Asia. However, our knowledge of the particles emitted locally (within the plateau region) is poor. We collected aerosol particles at four urban sites and one remote glacier site during a scientific expedition to the southeastern Tibetan Plateau in spring 2010. Weather and backward trajectory analyses indicated that the particles we collected were more likely dominated by particles emitted within the plateau. The particles were examined using an electron microscope and identified according to their sizes, shapes and elemental compositions. At three urban sites where the anthropogenic particles were produced mainly by the burning of firewood, soot aggregates were in the majority and made up >40% of the particles by number. At Lhasa, the largest city on the Tibetan Plateau, tar balls and mineral particles were also frequently observed because of the use of coal and natural gas, in addition to biofuel. In contrast, at the glacier site, large numbers of chain-like soot aggregates (~25% by number) were noted. The morphologies of these aggregates were similar to those of freshly emitted ones at the urban sites; moreover, signs of physical or chemical ageing were rarely observed. These limited observations suggest that the biomass burning particles age slowly in the cold, dry plateau air. Anthropogenic particles emitted locally within the elevated plateau region may thus affect the environment within glaciated areas in Tibet differently than anthropogenic particles transported from South Asia. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
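
    The PSO step that seeds the network's initial weights can be sketched in plain Python. This is a generic PSO minimizing a stand-in objective (the sphere function) rather than an actual BP-network error, and all parameter values are illustrative assumptions; the paper's contribution is parallelizing this over MapReduce, which is not shown here.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso(sphere, dim=4)
```

    In the hybrid scheme, `gbest` would become the BP network's initial weight vector, after which gradient-based training refines it.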

  11. Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.

    PubMed

    Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele

    2015-10-01

    Data about localization reproducibility as well as spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters based on phantom experiments. A realistic heart phantom was generated with a 3D printer. A CT scan was performed on the phantom. The phantom itself served as ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometry and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between virtual MediGuide® catheter visualization and catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to the magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.
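
    The "mean ± SD" point localization accuracy reported above is just the statistics of 3-D distances between repeated catheter tags and the known phantom position; a minimal sketch (the coordinates below are invented for illustration):

```python
import math

def localization_stats(tags, truth):
    """Mean and standard deviation of 3-D distances (mm) from repeated
    catheter tag positions to the ground-truth phantom position."""
    d = [math.dist(t, truth) for t in tags]
    mean = sum(d) / len(d)
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return mean, sd

truth = (10.0, 20.0, 30.0)                       # known phantom landmark (mm)
tags = [(10.3, 20.1, 29.9), (9.8, 20.2, 30.1),   # repeated tags of that landmark
        (10.1, 19.7, 30.2), (9.9, 20.0, 29.8)]
mean_err, sd_err = localization_stats(tags, truth)
```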

  12. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.

    2004-02-01

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of <1 μm the real part of the complex refractive index was retrieved to an accuracy of +/-0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.
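
    Tikhonov's method replaces the ill-posed inversion with the regularized normal equations (AᵀA + λI)x = Aᵀb, trading a small bias for stability. A minimal two-unknown sketch; the matrix and λ are illustrative stand-ins, not actual lidar kernels.

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 for 2 unknowns via the
    normal equations (A^T A + lam I) x = A^T b, using Cramer's rule."""
    m = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    return [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
            (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det]

# nearly collinear columns: the unregularized inversion is unstable,
# but a small lam stabilizes the retrieval
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.0001, 1.9999]          # generated by x_true = [1, 1]
x = tikhonov_2x2(A, b, lam=0.01)
```

    In the lidar problem the unknowns are the particle size distribution weights and λ is chosen by a regularization criterion rather than fixed by hand.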

  13. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution.

    PubMed

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N

    2004-02-10

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to approximately 10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of approximately 50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of < 1 μm the real part of the complex refractive index was retrieved to an accuracy of ±0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.

  14. Time-Resolved Intrafraction Target Translations and Rotations During Stereotactic Liver Radiation Therapy: Implications for Marker-based Localization Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertholet, Jenny, E-mail: jennbe@rm.dk; Worm, Esben S.; Fledelius, Walther

    Purpose: Image guided liver stereotactic body radiation therapy (SBRT) often relies on implanted fiducial markers. The target localization accuracy decreases with increased marker-target distance. This may occur partly because of liver rotations. The aim of this study was to examine time-resolved translations and rotations of liver marker constellations and investigate if time-resolved intrafraction rotational corrections can improve localization accuracy in liver SBRT. Methods and Materials: Twenty-nine patients with 3 implanted markers received SBRT in 3 to 6 fractions. The time-resolved trajectory of each marker was estimated from the projections of 1 to 3 daily cone beam computed tomography scans and used to calculate the translation and rotation of the marker constellation. In all cone beam computed tomography projections, the time-resolved position of each marker was predicted from the position of another surrogate marker by assuming that the marker underwent either (1) the same translation as the surrogate marker; or (2) the same translation as the surrogate marker corrected by the rotation of the marker constellation. The localization accuracy was quantified as the root-mean-square error (RMSE) between the estimated and the actual marker position. For comparison, the RMSE was also calculated when the marker's position was estimated as its mean position for all the projections. Results: The mean translational and rotational range (2nd-98th percentile) was 2.0 mm/3.9° (right-left), 9.2 mm/2.9° (superior-inferior), 4.0 mm/4.0° (anterior-posterior), and 10.5 mm (3-dimensional). Rotational corrections decreased the mean 3-dimensional RMSE from 0.86 mm to 0.54 mm (P<.001) and halved the RMSE increase per millimeter increase in marker distance. Conclusions: Intrafraction rotations during liver SBRT reduce the accuracy of marker-guided target localization.
Rotational correction can improve the localization accuracy by a factor of approximately 2 for large marker-target distances.
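
    The effect of a rotational correction on surrogate-based marker prediction can be illustrated in 2-D (the paper works in 3-D with measured trajectories; the positions, motion, and rotation point below are invented for this noise-free toy):

```python
import math

def rot2(p, theta):
    """Rotate a 2-D point about the origin by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

# surrogate marker m1 (at the origin) and target marker m2 before motion
m1, m2 = (0.0, 0.0), (30.0, 0.0)                 # 30 mm marker-target distance
trans, theta = (2.0, 5.0), math.radians(4.0)     # intrafraction translation + rotation

# actual post-motion positions: rotate the constellation about m1, then translate
m1_new = (m1[0] + trans[0], m1[1] + trans[1])
m2_new = tuple(r + t for r, t in zip(rot2(m2, theta), trans))

# prediction 1: apply the surrogate's translation only
pred_t = (m2[0] + (m1_new[0] - m1[0]), m2[1] + (m1_new[1] - m1[1]))
# prediction 2: translation plus rotational correction of the marker offset
pred_tr = tuple(r + t for r, t in zip(rot2(m2, theta), trans))

err_t = math.dist(pred_t, m2_new)    # ≈ 2 * 30 mm * sin(2°), grows with distance
err_tr = math.dist(pred_tr, m2_new)  # exact in this noise-free toy
```

    The translation-only error scales with the marker-target distance, which is why rotational correction helps most for distant targets.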

  15. Quantitative Description of Crystal Nucleation and Growth from in Situ Liquid Scanning Transmission Electron Microscopy.

    PubMed

    Ievlev, Anton V; Jesse, Stephen; Cochell, Thomas J; Unocic, Raymond R; Protopopescu, Vladimir A; Kalinin, Sergei V

    2015-12-22

    Recent advances in liquid cell (scanning) transmission electron microscopy ((S)TEM) have enabled in situ nanoscale investigations of controlled nanocrystal growth mechanisms. Here, we experimentally and quantitatively investigated the nucleation and growth mechanisms of Pt nanostructures from an aqueous solution of K2PtCl6. Averaged statistical, network, and local approaches have been used for the data analysis and the description of both collective particle dynamics and local growth features. In particular, interaction between neighboring particles has been revealed and attributed to reduction of the platinum concentration in the vicinity of the particle boundary. The local approach for solving the inverse problem showed that particle dynamics can be simulated by a stationary diffusional model. The obtained results are important for understanding nanocrystal formation and growth processes and for optimization of synthesis conditions.

  16. Particle statistics and lossy dynamics of ultracold atoms in optical lattices

    NASA Astrophysics Data System (ADS)

    Yago Malo, J.; van Nieuwenburg, E. P. L.; Fischer, M. H.; Daley, A. J.

    2018-05-01

    Experimental control over ultracold quantum gases has made it possible to investigate low-dimensional systems of both bosonic and fermionic atoms. In closed one-dimensional systems there are many similarities in the dynamics of local quantities for spinless fermions and strongly interacting "hard-core" bosons, which on a lattice can be formalized via a Jordan-Wigner transformation. In this study, we analyze the similarities and differences for spinless fermions and hard-core bosons on a lattice in the presence of particle loss. The removal of a single fermion causes differences in local quantities compared with the bosonic case because of the different particle exchange symmetry in the two cases. We identify deterministic and probabilistic signatures of these dynamics in terms of local particle density, which could be measured in ongoing experiments with quantum gas microscopes.

  17. Magnetic Luminescent Porous Silicon Microparticles for Localized Delivery of Molecular Drug Payloads

    PubMed Central

    Gu, Luo; Park, Ji-Ho; Duong, Kim H.; Ruoslahti, Erkki; Sailor, Michael J.

    2011-01-01

    Magnetic manipulation, fluorescent tracking, and localized delivery of a drug payload to cancer cells in vitro is demonstrated, using nanostructured porous silicon microparticles as a carrier. The multifunctional microparticles are prepared by electrochemical porosification of a silicon wafer in a hydrofluoric acid-containing electrolyte, followed by removal and fracture of the porous layer into particles using ultrasound. The intrinsically luminescent particles are loaded with superparamagnetic iron oxide nanoparticles and the anti-cancer drug doxorubicin. The drug-containing particles are delivered to human cervical cancer (HeLa) cells in vitro, under the guidance of a magnetic field. The high concentration of particles in the proximity of the magnetic field results in a high concentration of drug being released in that region of the Petri dish, and localized cell death is confirmed by cellular viability assay (Calcein AM). PMID:20814923

  18. Magnetic luminescent porous silicon microparticles for localized delivery of molecular drug payloads.

    PubMed

    Gu, Luo; Park, Ji-Ho; Duong, Kim H; Ruoslahti, Erkki; Sailor, Michael J

    2010-11-22

    Magnetic manipulation, fluorescent tracking, and localized delivery of a drug payload to cancer cells in vitro is demonstrated, using nanostructured porous silicon microparticles as a carrier. The multifunctional microparticles are prepared by electrochemical porosification of a silicon wafer in a hydrofluoric acid-containing electrolyte, followed by removal and fracture of the porous layer into particles using ultrasound. The intrinsically luminescent particles are loaded with superparamagnetic iron oxide nanoparticles and the anti-cancer drug doxorubicin. The drug-containing particles are delivered to human cervical cancer (HeLa) cells in vitro, under the guidance of a magnetic field. The high concentration of particles in the proximity of the magnetic field results in a high concentration of drug being released in that region of the Petri dish, and localized cell death is confirmed by cellular viability assay (Calcein AM).

  19. Insertion device and method for accurate and repeatable target insertion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gubeli, III, Joseph F.; Shinn, Michelle D.; Bevins, Michael E.

    The present invention discloses a device and a method for inserting and positioning a target within a free electron laser, particle accelerator, or other such device that generates or utilizes a beam of energy or particles. The system includes a three-point registration mechanism that ensures angular and translational accuracy and repeatability of positioning upon multiple insertions within the same structure.

  20. Swarm intelligence applied to the risk evaluation for congenital heart surgery.

    PubMed

    Zapata-Impata, Brayan S; Ruiz-Fernandez, Daniel; Monsalve-Torra, Ana

    2015-01-01

    Particle Swarm Optimization is an optimization technique based on the positions of several particles created to find the best solution to a problem. In this work we analyze the accuracy of a modification of this algorithm for classifying the risk levels of surgeries used to correct congenital heart malformations in children.

  1. Shock simulations of a single-site coarse-grain RDX model using the dissipative particle dynamics method with reactivity

    NASA Astrophysics Data System (ADS)

    Sellers, Michael S.; Lísal, Martin; Schweigert, Igor; Larentzos, James P.; Brennan, John K.

    2017-01-01

    In discrete particle simulations, when an atomistic model is coarse-grained, a tradeoff is made: a boost in computational speed for a reduction in accuracy. The Dissipative Particle Dynamics (DPD) methods help to recover lost accuracy of the viscous and thermal properties, while giving back a relatively small amount of computational speed. Since its initial development for polymers, one of the most notable extensions of DPD has been the introduction of chemical reactivity, called DPD-RX. In 2007, Maillet, Soulard, and Stoltz introduced implicit chemical reactivity in DPD through the concept of particle reactors and simulated the decomposition of liquid nitromethane. We present an extended and generalized version of the DPD-RX method, and have applied it to solid hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX). Demonstration simulations of reacting RDX are performed under shock conditions using a recently developed single-site coarse-grain model and a reduced RDX decomposition mechanism. A description of the methods used to simulate RDX and its transition to hot product gases within DPD-RX is presented. Additionally, we discuss several examples of the effect of shock speed and microstructure on the corresponding material chemistry.

  2. Effective Field Theory of Surface-mediated Forces in Soft Matter

    NASA Astrophysics Data System (ADS)

    Yolcu, Cem

    We propose a field theoretic formalism for describing soft surfaces modified by the presence of inclusions. Examples include particles trapped at a fluid-fluid interface, proteins attached to (or embedded in) a biological membrane, etc. We derive the energy functional for near-flat surfaces by an effective field theory approach. The two disparate length scales, particle sizes and inter-particle separations, afford the expansion parameters for controlling the accuracy of the effective theory, which can in principle be made arbitrarily high. We consider the following two surface types: (i) one where tension determines the behavior, such as a fluid-fluid interface (referred to as a film), and (ii) one where bending-elasticity dominates (referred to as a membrane). We also restrict ourselves to rigid inclusions with a circular footprint, and discuss generalizations briefly. As a result of the localized constraints imposed on the surface by the inclusions, the free energy of the system depends on their spatial arrangement, i.e. forces arise between them. Such surface-mediated interactions are believed to play an important role in the aggregation behavior of colloidal particles at interfaces and proteins on membranes. The interaction free energy consists of two parts: (i) the ground-state of the surface determined by possible deformations imposed by the particles, and (ii) the fluctuation correction. The former is analogous to classical electrostatics with the height profile of the surface playing the role of the electrostatic potential, while the latter is analogous to the Casimir effect and originates from the mere presence of constraints. We compute both interactions in truncated expansions. The efficiency of the formalism allows us to predict, with remarkable ease, quite a few orders of subleading corrections to existing results which are only valid when the inclusions are infinitely far apart. We also found that the few previous studies on finite distance corrections were incomplete.
In addition to pairwise additive interactions, we compute the leading behavior of several many-body interactions, as well as subleading corrections where the leading contribution was previously calculated.

  3. Locally adapted NeQuick 2 model performance in European middle latitude ionosphere under different solar, geomagnetic and seasonal conditions

    NASA Astrophysics Data System (ADS)

    Vuković, Josip; Kos, Tomislav

    2017-10-01

    The ionosphere introduces positioning error in Global Navigation Satellite Systems (GNSS). There are several approaches for minimizing the error, with various levels of accuracy and different extents of coverage area. To model the state of the ionosphere in a region containing a low number of reference GNSS stations, a locally adapted NeQuick 2 model can be used. Data ingestion updates the model with the local level of ionization, enabling it to follow the observed changes of ionization levels. The NeQuick 2 model was adapted to local reference Total Electron Content (TEC) data using a single station approach and evaluated using calibrated TEC data derived from 41 testing GNSS stations distributed around the data ingestion point. Its performance was observed in European middle latitudes in different ionospheric conditions of the period between 2011 and 2015. The modelling accuracy was evaluated in four azimuthal quadrants, with coverage radii calculated for three error thresholds: 12, 6 and 3 TEC Units (TECU). Diurnal radii change was observed for groups of days within periods of low and high solar activity and different seasons of the year. The statistical analysis was conducted on those groups of days, revealing trends in each of the groups, similarities between days within groups and the 95th percentile radii as a practically applicable measure of model performance. In almost all cases the modelling accuracy was better than 12 TECU, which corresponded to the largest radius around the data ingestion point. Modelling accuracy better than 6 TECU was achieved within a reduced radius in all observed periods, while accuracy better than 3 TECU was reached only in summer. The calculated radii and interpolated error levels were presented on maps. This was especially useful in analyzing the model performance during the strongest geomagnetic storms of the observed period, each of which had a unique development and influence on model accuracy.
Although some of the storms severely degraded the model accuracy, during most of the disturbed periods the model could be used, but with lower accuracy than in the quiet geomagnetic conditions. The comprehensive analysis of locally adapted NeQuick 2 model performance highlighted the challenges of using the single point data ingestion applied to a large region in middle latitudes and determined the achievable radii for different error thresholds in various ionospheric conditions.
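
    One way to compute a coverage radius for a given error threshold, in the spirit of the analysis above (the station distances, errors, and the exact percentile rule here are assumptions, not the paper's procedure):

```python
def coverage_radius(stations, threshold, pct=95):
    """Largest distance r such that the pct-th percentile of model errors
    over all test stations within r stays below the threshold."""
    stations = sorted(stations)                # (distance_km, error_tecu) pairs
    best = 0.0
    for i, (dist, _) in enumerate(stations):
        errs = sorted(e for _, e in stations[:i + 1])
        p = errs[min(len(errs) - 1, int(len(errs) * pct / 100))]
        if p < threshold:
            best = dist
    return best

# hypothetical (distance from ingestion point, |model - reference| TEC error) pairs
stations = [(50, 1.2), (120, 2.1), (200, 2.8), (320, 4.5),
            (450, 5.9), (600, 7.4), (800, 10.8), (1000, 13.5)]
r12 = coverage_radius(stations, 12.0)
r6 = coverage_radius(stations, 6.0)
r3 = coverage_radius(stations, 3.0)
```

    Tighter error thresholds yield smaller usable radii, matching the reported 12 > 6 > 3 TECU ordering.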

  4. The role of size polydispersity in magnetic fluid hyperthermia: average vs. local infra/over-heating effects.

    PubMed

    Munoz-Menendez, Cristina; Conde-Leboran, Ivan; Baldomir, Daniel; Chubykalo-Fesenko, Oksana; Serantes, David

    2015-11-07

    An efficient and safe hyperthermia cancer treatment requires the accurate control of the heating performance of magnetic nanoparticles, which is directly related to their size. However, in any particle system the existence of some size polydispersity is experimentally unavoidable, which results in a different local heating output and consequently a different hyperthermia performance depending on the size of each particle. With the aim to shed some light on this significant issue, we have used a Monte Carlo technique to study the role of size polydispersity in heat dissipation at both the local (single particle) and global (macroscopic average) levels. We have systematically varied size polydispersity, temperature and interparticle dipolar interaction conditions, and evaluated local heating as a function of these parameters. Our results provide a simple guide on how to choose, for a given polydispersity degree, the more adequate average particle size so that the local variation in the released heat is kept within some limits that correspond to safety boundaries for the average-system hyperthermia performance. All together we believe that our results may help in the design of more effective magnetic hyperthermia applications.
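
    How polydispersity widens the spread of local heat output can be illustrated with a crude toy in which per-particle heat scales with particle volume (d³). Real specific absorption rates depend on size in a more complicated, non-monotonic way, so this sketch is purely qualitative and all numbers are assumptions.

```python
import math
import random

def heat_spread(median_d, sigma, n=5000, seed=1):
    """Sample a lognormal particle-size distribution and return the relative
    spread (coefficient of variation) of a toy per-particle heat ~ d^3."""
    rng = random.Random(seed)
    heats = [rng.lognormvariate(math.log(median_d), sigma) ** 3 for _ in range(n)]
    mean = sum(heats) / n
    sd = math.sqrt(sum((h - mean) ** 2 for h in heats) / n)
    return sd / mean

narrow = heat_spread(20.0, 0.05)   # nearly monodisperse sample (nm)
broad = heat_spread(20.0, 0.30)    # realistic polydispersity
```

    Even with the same median size, the broader distribution produces a much wider spread of local heating, i.e. some particles under- or over-heat relative to the macroscopic average.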

  5. Thermal motion of a nonlinear localized pattern in a quasi-one-dimensional system.

    PubMed

    Dessup, Tommy; Coste, Christophe; Saint Jean, Michel

    2016-07-01

    We study the dynamics of localized nonlinear patterns in a quasi-one-dimensional many-particle system near a subcritical pitchfork bifurcation. The normal form at the bifurcation is given, and we show that these patterns can be described as solitary-wave envelopes. They are stable in a large temperature range and can diffuse along the chain of interacting particles. During their displacements the particles are continually redistributed on the envelope. This change of particle location induces a small modulation of the potential energy of the system, with an amplitude that depends on the transverse confinement. At high temperature, this modulation is irrelevant and the thermal motion of the localized patterns displays all the characteristics of free quasiparticle diffusion, with a diffusion coefficient that may be deduced from the normal form. At low temperature, significant physical effects are induced by the modulated potential. In particular, the localized pattern may be trapped at very low temperature. We also exhibit a series of confinement values for which the modulation amplitude vanishes. For these peculiar confinements, the mean-square displacement of the localized patterns also exhibits free-diffusion behavior at low temperature.

  6. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    NASA Astrophysics Data System (ADS)

    Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton

    2016-10-01

    We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with an accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used to test how well microphysical parameters can be derived if the real part of the complex refractive index is known to within 0.05 or 0.1. The other algorithm was used to determine how well microphysical parameters can be derived if this constraint on the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types.
We also tested aerosol scenarios that are considered highly unlikely, e.g. cases in which the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms with respect to their ability to identify aerosol types that have not been measured so far, but cannot be ruled out on the basis of our current knowledge of aerosol physics. We computed the optical data from monomodal logarithmic-normal particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions, which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 µm in our simulations; the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15 % in the simulation studies. We target 50 % uncertainty as a reasonable threshold for our data products, though we attempt to obtain data products with less uncertainty in future work.
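
    Two of the optical quantities the abstract relies on can be computed directly from the 3β+2α data set: the extinction-related Ångström exponent and the extinction-to-backscatter (lidar) ratio. A small sketch with hypothetical numbers (not EARLINET data):

```python
import math

def angstrom_exponent(alpha_355, alpha_532):
    """a = -ln(alpha_355/alpha_532) / ln(355/532), from extinction at two wavelengths."""
    return -math.log(alpha_355 / alpha_532) / math.log(355.0 / 532.0)

def lidar_ratio(alpha, beta):
    """Extinction-to-backscatter ratio, in sr."""
    return alpha / beta

a = angstrom_exponent(2.0e-4, 1.2e-4)   # extinction coefficients in 1/m (hypothetical)
s532 = lidar_ratio(1.2e-4, 2.0e-6)      # 60 sr, a plausible aerosol value
```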

  7. A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime

    NASA Astrophysics Data System (ADS)

    Sciarretta, Antonio

    2018-01-01

    This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime in the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated with the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly-prepared ensembles of particles. The scenarios considered to assess the model comprise the free particle, a constant external force, the harmonic oscillator, a particle in a box, the Delta potential, a particle on a ring, and a particle on a sphere, and include quantization of energy levels and angular momentum, as well as momentum entanglement.

  8. Method of phase space beam dilution utilizing bounded chaos generated by rf phase modulation

    DOE PAGES

    Pham, Alfonse N.; Lee, S. Y.; Ng, K. Y.

    2015-12-10

    This paper explores the physics of chaos in a localized phase-space region produced by rf phase modulation applied to a double rf system. The study can be exploited to produce rapid particle bunch broadening exhibiting longitudinal particle distribution uniformity. Hamiltonian models and particle-tracking simulations are introduced to understand the mechanism and applicability of controlled particle diffusion. When phase modulation is applied to the double rf system, regions of localized chaos are produced through the disruption and overlapping of parametric resonant islands, and are configured to be bounded by well-behaved invariant tori to prevent particle loss. The condition of chaoticity and the degree of particle dilution can be controlled by the rf parameters. As a result, the method has applications in alleviating adverse space-charge effects in high-intensity beams, particle bunch distribution uniformization, and industrial radiation-effects experiments.
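
    A qualitative toy version of such particle tracking (not the paper's Hamiltonian model; every rf parameter below is invented) iterates a kick-drift map for a double rf system whose primary rf phase is slowly modulated:

```python
import math

def track(phi, delta, turns=2000, nus=0.01, r2=0.5, eps=0.05, num=0.021):
    """Toy longitudinal kick-drift map: primary rf plus second harmonic,
    with a slowly modulated rf phase. All parameters are hypothetical."""
    traj = []
    for n in range(turns):
        mod = eps * math.sin(2 * math.pi * num * n)   # rf phase modulation
        # energy kick from the two rf harmonics of the double rf system
        delta += nus * (math.sin(phi + mod) - 0.5 * r2 * math.sin(2 * phi))
        phi += nus * delta                            # synchrotron phase advance
        traj.append((phi, delta))
    return traj

traj = track(0.1, 0.0)
```

Scanning the modulation amplitude and tune in such a map gives a feel for how resonant islands overlap into a bounded chaotic layer, which is the mechanism the abstract exploits.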

  9. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling the inclusion criteria. The study was designed according to the recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method based on frequency analysis and the voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined against the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for concordant results than for discordant ones.
Source localization of rhythmic ictal activity, using a distributed source model (LAURA) for ictal EEG signals selected with a standardized method, is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
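
    All of the reported figures follow from a standard 2×2 confusion matrix against the reference standard. A minimal sketch; the counts below are hypothetical, chosen only so the headline numbers roughly resemble those reported:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    n = tp + fn + fp + tn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical counts for 33 patients with a reference standard.
sens, spec, ppv, npv, kappa = diagnostic_metrics(tp=14, fn=6, fp=3, tn=10)
```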

  10. Influence of a depletion interaction on dynamical heterogeneity in a dense quasi-two-dimensional colloid liquid.

    PubMed

    Ho, Hau My; Cui, Bianxiao; Repel, Stephen; Lin, Binhua; Rice, Stuart A

    2004-11-01

    We report the results of digital video microscopy studies of the large particle displacements in a quasi-two-dimensional binary mixture of large (L) and small (S) colloid particles with diameter ratio σ_L/σ_S = 4.65, as a function of the large and small colloid particle densities. As in the case of the one-component quasi-two-dimensional colloid system, the binary mixtures exhibit structural and dynamical heterogeneity. The distribution of large particle displacements over the time scale examined provides evidence for (at least) two different mechanisms of motion, one associated with particles in locally ordered regions and the other associated with particles in locally disordered regions. When ρ_L* = Nπσ_L²/4A ≤ 0.35, the addition of small colloid particles leads to a monotonic decrease in the large particle diffusion coefficient with increasing small particle volume fraction. When ρ_L* ≥ 0.35 the addition of small colloid particles to a dense system of large colloid particles at first leads to an increase in the large particle diffusion coefficient, which is then followed by the expected decrease of the large particle diffusion coefficient with increasing small colloid particle volume fraction. The mode coupling theory of the ideal glass transition in three-dimensional systems makes a qualitative prediction that agrees with the initial increase in the large particle diffusion coefficient with increasing small particle density. Nevertheless, because the structural and dynamical heterogeneities of the quasi-two-dimensional colloid liquid occur within the field of equilibrium states, and the fluctuations generate locally ordered domains rather than just disordered regions of higher and lower density, it is suggested that mode coupling theory does not account for all classes of relevant fluctuations in a quasi-two-dimensional liquid. (c) 2004 American Institute of Physics.
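
    Diffusion coefficients of this kind are typically extracted from the slope of the mean-square displacement, MSD(t) = 4Dt in two dimensions. A minimal sketch on a synthetic random walk (not the microscopy data; step size and time step are arbitrary):

```python
import random

random.seed(2)

def msd(trajectories, lag):
    """Mean-square displacement at a given lag over an ensemble of 2-D paths."""
    d2 = [(x[lag] - x[0]) ** 2 + (y[lag] - y[0]) ** 2 for x, y in trajectories]
    return sum(d2) / len(d2)

dt, sigma = 1.0, 0.5      # time step and per-axis step size (arbitrary units)
walks = []
for _ in range(400):
    x, y = [0.0], [0.0]
    for _ in range(50):
        x.append(x[-1] + random.gauss(0, sigma))
        y.append(y[-1] + random.gauss(0, sigma))
    walks.append((x, y))

lag = 20
D_est = msd(walks, lag) / (4 * lag * dt)   # expected near sigma**2 / (2*dt) = 0.125
```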

  11. Evaluation of a low-cost optical particle counter (Alphasense OPC-N2) for ambient air monitoring

    NASA Astrophysics Data System (ADS)

    Crilley, Leigh R.; Shaw, Marvin; Pound, Ryan; Kramer, Louisa J.; Price, Robin; Young, Stuart; Lewis, Alastair C.; Pope, Francis D.

    2018-02-01

    A fast-growing area of research is the development of low-cost sensors for measuring air pollutants. The affordability and size of low-cost particle sensors make them an attractive option for use in experiments requiring a number of instruments, such as high-density spatial mapping. However, for these low-cost sensors to be useful for such studies their accuracy and precision need to be quantified. We evaluated the Alphasense OPC-N2, a promising low-cost miniature optical particle counter, for monitoring ambient airborne particles at typical urban background sites in the UK. The precision of the OPC-N2 was assessed by co-locating 14 instruments at a site to investigate the variation in measured concentrations. Comparison to two different reference optical particle counters as well as a TEOM-FDMS enabled the accuracy of the OPC-N2 to be evaluated. Comparison of the OPC-N2 to the reference optical instruments shows some limitations for measuring mass concentrations of PM1, PM2.5 and PM10. The OPC-N2 demonstrated a significant positive artefact in measured particle mass during times of high ambient RH (> 85 %), and a calibration factor was developed based upon κ-Köhler theory, using average bulk particle aerosol hygroscopicity. Application of this RH correction factor resulted in the OPC-N2 measurements being within 33 % of the TEOM-FDMS, comparable to the agreement between a reference optical particle counter and the TEOM-FDMS (20 %). Inter-unit precision of 22 ± 13 % for PM10 mass concentrations was observed across the 14 OPC-N2 sensors. Overall, the OPC-N2 was found to accurately measure ambient airborne particle mass concentration provided the sensors are (i) correctly calibrated and (ii) corrected for ambient RH. The level of precision demonstrated between multiple OPC-N2s suggests that they would be suitable devices for applications where the spatial variability in particle concentration is to be determined.
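
    A correction in the spirit of the κ-Köhler approach divides the measured (wet) particle mass by a bulk hygroscopic mass-growth factor. In this sketch the κ value and the densities are assumed illustrative numbers, not the paper's fitted parameters:

```python
def rh_corrected_mass(pm_measured, rh_percent, kappa=0.4,
                      rho_water=1000.0, rho_dry=1600.0):
    """Divide measured (wet) PM mass by a kappa-Koehler-style mass-growth factor."""
    aw = min(rh_percent / 100.0, 0.99)   # water activity approximated by RH, capped
    growth = 1.0 + (rho_water / rho_dry) * kappa * aw / (1.0 - aw)
    return pm_measured / growth

pm_dry = rh_corrected_mass(30.0, 90.0)   # ug/m3 measured at RH = 90 %
```

As expected, the correction shrinks towards unity at low RH and grows steeply as RH approaches saturation.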

  12. Integration of Heterogenous Digital Surface Models

    NASA Astrophysics Data System (ADS)

    Boesch, R.; Ginzler, C.

    2011-08-01

    The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high-resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images at 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41 000 km²). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m² and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature-extraction applications requiring high-resolution data depend on the local accuracy of the surface model used, so precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM map contains matching codes like high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data were available.
Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point-distribution measure, a constrained triangulation of local points (within an area of 100 m²) has been implemented using the Open Source project CGAL. The area of each triangle is a measure of the spatial distribution of raw points in this local area. Combining the FOM map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1 m. If the local analysis of the FOM map within the 100 m² area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM is used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density, or the variance of triangle areas exceeds a threshold, the investigated location is marked as a NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM map and the local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
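
    The "partial replacement" decision can be sketched as a per-tile rule. The summary inputs (fraction of FOM-flagged pixels, LiDAR point density, variance of triangle areas) follow the description above, but all threshold values here are hypothetical:

```python
def choose_source(fom_bad_fraction, lidar_pts_per_m2, triangle_area_var,
                  fom_limit=0.2, density_limit=0.25, var_limit=50.0):
    """Pick the surface model for one 100 m2 tile ('partial replacement')."""
    if fom_bad_fraction <= fom_limit:
        return "ADS-DSM"       # image-based model locally reliable
    if lidar_pts_per_m2 >= density_limit and triangle_area_var <= var_limit:
        return "LiDAR-DSM"     # enough well-distributed LiDAR points
    return "NODATA"            # neither model locally trustworthy
```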

  13. About the inevitable compromise between spatial resolution and accuracy of strain measurement for bone tissue: a 3D zero-strain study.

    PubMed

    Dall'Ara, E; Barber, D; Viceconti, M

    2014-09-22

    The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT-based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for the prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the region analyzed, following power laws. Of the sources investigated, the intrinsic noise of the images was found to be the main source of error. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
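
    The power-law dependence of the errors on region size, err = a · size**b, can be recovered with a log-log least-squares fit. A sketch on synthetic numbers (the coefficients are invented, not the study's):

```python
import math

def fit_power_law(sizes, errors):
    """Fit err = a * size**b by least squares in log-log space; return (a, b)."""
    lx = [math.log(s) for s in sizes]
    ly = [math.log(e) for e in errors]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

sizes = [10, 20, 30, 50]                    # nodal spacing in voxels (hypothetical)
errors = [2000 * s ** -0.6 for s in sizes]  # synthetic data on an exact power law
a, b = fit_power_law(sizes, errors)
```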

  14. PSOVina: The hybrid particle swarm optimization algorithm for protein-ligand docking.

    PubMed

    Ng, Marcus C K; Fong, Simon; Siu, Shirley W I

    2015-06-01

    Protein-ligand docking is an essential step in the modern drug discovery process. The challenge is to accurately predict and efficiently optimize the position and orientation of ligands in the binding pocket of a target protein. In this paper, we present a new method called PSOVina, which combines the particle swarm optimization (PSO) algorithm with the efficient Broyden-Fletcher-Goldfarb-Shanno (BFGS) local search method adopted in AutoDock Vina to tackle the conformational search problem in docking. Using a diverse data set of 201 protein-ligand complexes from the PDBbind database and a full set of ligands and decoys for four representative targets from the directory of useful decoys (DUD) virtual screening data set, we assessed the docking performance of PSOVina in comparison to the original Vina program. Our results showed that PSOVina achieves a remarkable execution-time reduction of 51-60% without compromising prediction accuracy in the docking and virtual screening experiments. This improvement in time efficiency makes PSOVina a better choice of docking tool in large-scale protein-ligand docking applications. Our work lays the foundation for the future development of swarm-based algorithms in molecular docking programs. PSOVina is freely available to non-commercial users at http://cbbio.cis.umac.mo .
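
    The hybrid idea, a global particle swarm combined with local refinement of candidate solutions, can be sketched on a toy 2-D objective. Here a simple coordinate-descent step stands in for BFGS, and nothing below is PSOVina's actual code or a docking score:

```python
import random

random.seed(0)

def objective(p):
    """Toy convex objective with minimum at (3, -1); stands in for a docking score."""
    x, y = p
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

def local_refine(p, step=0.05, iters=3):
    """Crude coordinate descent; a stand-in for the BFGS local search."""
    best = list(p)
    for _ in range(iters):
        for i in range(len(best)):
            for d in (-step, step):
                cand = list(best)
                cand[i] += d
                if objective(cand) < objective(best):
                    best = cand
    return best

def pso(n_particles=20, iters=60, w=0.7, c1=1.4, c2=1.4):
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            p[:] = local_refine(p)            # the hybrid step
            if objective(p) < objective(pbest[i]):
                pbest[i] = list(p)
        gbest = min(pbest, key=objective)
    return gbest

best = pso()
```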

  15. Classification of holter registers by dynamic clustering using multi-dimensional particle swarm optimization.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef

    2010-01-01

    In this paper, we address dynamic clustering in high-dimensional data or feature spaces as an optimization problem in which multi-dimensional particle swarm optimization (MD PSO) is used to find the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is quite difficult and cumbersome, if not impossible. Therefore, the proposed system helps professionals to quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database in which the beats of each Holter register had been manually labeled by cardiologists. The selection of the right master key-beats is the key factor in achieving a highly accurate classification, and the proposed systematic approach produced results that were consistent with the manual labels with 99.5% average accuracy, which demonstrates the efficiency of the system.

  16. Enhancement of fluorescence intensity by silicon particles and its size effect.

    PubMed

    Saitow, Ken-ichi; Suemori, Hidemi; Tamamitsu, Hironori

    2014-02-04

    Fluorescence-intensity enhancement of dye molecules was investigated using silicon submicron particles as a function of particle size. Silicon particles with a size of 500 nm gave an enhancement factor of up to 180. Measurement of scattering spectra revealed that the localized electric field at the particle enhances the fluorescence intensity.

  17. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks

    PubMed Central

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.

    2016-01-01

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require the location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs, and how to improve localization accuracy is well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300
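
    The flavor of trust-weighted localization can be illustrated with a linearized, trust-weighted least-squares trilateration. This is a generic sketch, not the CSLT algorithm; the anchor geometry and trust values are invented:

```python
import math

def localize(anchors, dists, trust):
    """Trust-weighted linearized trilateration (differencing against anchor 0)."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b, w = [], [], []
    for (xi, yi), di, ti in zip(anchors[1:], dists[1:], trust[1:]):
        # 2x(xi-x0) + 2y(yi-y0) = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
        w.append(ti)
    # trust-weighted normal equations for the 2x2 system
    s11 = sum(wi * a[0] * a[0] for wi, a in zip(w, A))
    s12 = sum(wi * a[0] * a[1] for wi, a in zip(w, A))
    s22 = sum(wi * a[1] * a[1] for wi, a in zip(w, A))
    t1 = sum(wi * a[0] * bi for wi, a, bi in zip(w, A, b))
    t2 = sum(wi * a[1] * bi for wi, a, bi in zip(w, A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 4.0)
dists = [math.dist(truth, a) for a in anchors]
trust = [1.0, 0.9, 1.0, 0.8]    # hypothetical per-anchor trust weights
est = localize(anchors, dists, trust)
```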

  18. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
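
    The blending idea can be sketched with a minmod-limited upwind step for linear advection. This illustrates the generic TVD mechanism (limited second-order reconstruction on a first-order upwind base), not the paper's exact stencil-selection rules:

```python
def minmod(a, b):
    """Return the smaller-magnitude argument if signs agree, else zero."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_step(u, cfl):
    """One limited upwind step for u_t + u_x = 0 (positive speed, periodic)."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # second-order reconstruction of the value just upwind of face i+1/2
    flux = [u[i] + 0.5 * (1 - cfl) * slope[i] for i in range(n)]
    return [u[i] - cfl * (flux[i] - flux[i - 1]) for i in range(n)]

def tv(u):
    """Total variation of a periodic grid function."""
    return sum(abs(u[i] - u[i - 1]) for i in range(len(u)))

u0 = [1.0 if 4 <= i < 12 else 0.0 for i in range(32)]
u1 = advect_step(u0, cfl=0.5)
```

The step is conservative (a flux-difference form), and the total variation never increases, which is the defining TVD property.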

  19. SU-E-T-268: Proton Radiosurgery End-To-End Testing Using Lucy 3D QA Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, D; Gordon, I; Ghebremedhin, A

    2014-06-01

    Purpose: To check the overall accuracy of proton radiosurgery treatment delivery using ready-made circular collimator inserts and fixed-thickness compensating boluses. Methods: A Lucy 3D QA phantom (Standard Imaging Inc., WI, USA) loaded with Gafchromic film was irradiated with laterally scattered and longitudinally spread-out 126.8 MeV proton beams. The tests followed every step in the proton radiosurgery treatment delivery process: CT scan (GE Lightspeed VCT), target contouring, treatment planning (Odyssey 5.0, Optivus, CA), portal calibration, target localization using a robotic couch with image guidance, and dose delivery at planned gantry angles. A 2 cm diameter collimator insert in a 4 cm diameter radiosurgery cone and a 1.2 cm thick compensating flat bolus were used for all beams. Film dosimetry (RIT114 v5.0, Radiological Imaging Technology, CO, USA) was used to evaluate the accuracy of target localization and the relative dose distributions compared with those calculated by the treatment planning system. Results: The localization accuracy was estimated by analyzing the Gafchromic films irradiated at gantry angles of 0, 90 and 270 degrees. We observed a 0.5 mm shift in the lateral direction (patient left), a ±0.9 mm shift in the AP direction and a ±1.0 mm shift in the vertical direction (gantry dependent). The isodose overlays showed good agreement (<2 mm, 50% isodose lines) between measured and calculated doses. Conclusion: Localization accuracy depends on gantry sag, CT resolution and distortion, DRRs from the treatment planning computer, the localization accuracy of the image guidance system, and the fabrication of the ready-made aperture and cone housing. The total deviation from the isocenter was 1.4 mm. Dose distribution uncertainty comes from distal-end error due to bolus and CT density, in addition to localization error. The planned dose distribution was well matched (>90% passing with 2%/2mm criteria) to the measured values.
Our test showed the robustness of our proton radiosurgery treatment delivery system using ready-made collimator inserts and fixed-thickness compensating boluses.

  20. Resolution limits of ultrafast ultrasound localization microscopy

    NASA Astrophysics Data System (ADS)

    Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael

    2015-11-01

    As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as the rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct punctual sources that can be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128-transducer matrix in reception. Higher resolutions are theoretically achievable, and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error on the localization of a bubble, considered as a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, then converting this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of super-localization will allow the optimization of the imaging setup for each organ.
Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
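
    The two-step accuracy model, timing accuracy first and spatial accuracy second, can be illustrated with a parabolic sub-sample peak fit on a synthetic echo envelope. The sampling rate, sound speed and pulse shape below are assumed values, not the paper's model:

```python
import math

FS = 50e6      # sampling rate, Hz (assumed)
C = 1540.0     # speed of sound in tissue, m/s (typical value)

def subsample_peak(samples):
    """Sub-sample peak location via a parabolic fit around the discrete maximum."""
    i = max(range(1, len(samples) - 1), key=lambda k: samples[k])
    y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabola vertex offset
    return i + delta

# Synthetic Gaussian echo envelope peaking between two samples.
true_peak = 100.3
samples = [math.exp(-0.5 * ((k - true_peak) / 4.0) ** 2) for k in range(200)]
est = subsample_peak(samples)

# Step 2: convert the timing error into a depth error (two-way travel).
depth_error = abs(est - true_peak) / FS * C / 2.0
```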

  1. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code with a first-order convergence rate was updated so that the new code can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code along with the new formula for the number of Lagrangian markers was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotating and rotating spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, at both low and moderate particle Reynolds numbers, to compare the simulated results with the literature and to develop new drag force, lift force, and torque formulas. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of the solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulation results to be closely applicable to real processes over the entire range of packing fractions and both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by the rotational Reynolds number and is essentially unchanged as the angle of the rotation axis varies.
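The core of the multi-direct forcing step described above can be sketched in a few lines. The following is a minimal 1D toy of my own, not the authors' code; the 3-point kernel and all parameters are illustrative assumptions. Velocity is interpolated to a Lagrangian marker, a direct-forcing correction toward the boundary velocity is computed, and the correction is spread back to the Eulerian grid, iterating until the no-slip condition holds at the marker.

```python
import numpy as np

def roma_delta(r):
    """3-point discrete delta kernel (Roma et al. form), support |r| <= 1.5."""
    r = np.abs(np.asarray(r, float))
    phi = np.zeros_like(r)
    m = r <= 0.5
    phi[m] = (1.0 + np.sqrt(1.0 - 3.0 * r[m] ** 2)) / 3.0
    m = (r > 0.5) & (r <= 1.5)
    phi[m] = (5.0 - 3.0 * r[m] - np.sqrt(1.0 - 3.0 * (1.0 - r[m]) ** 2)) / 6.0
    return phi

def multi_direct_forcing(u, x_grid, x_marker, u_boundary, n_iter=30):
    """Iteratively correct the Eulerian velocity u so that the velocity
    interpolated at the Lagrangian marker matches the boundary velocity."""
    w = roma_delta(x_grid - x_marker)   # interpolation/spreading weights
    for _ in range(n_iter):
        u_interp = np.dot(u, w)         # velocity seen by the marker
        f = u_boundary - u_interp       # direct-forcing correction
        u = u + f * w                   # spread the correction back to the grid
    return u

x_grid = np.arange(16, dtype=float)     # unit-spaced Eulerian grid
u = np.zeros(16)                        # fluid initially at rest
u = multi_direct_forcing(u, x_grid, x_marker=5.3, u_boundary=1.0)
residual = abs(np.dot(u, roma_delta(x_grid - 5.3)) - 1.0)
```

Because interpolating after spreading recovers only a fraction of the applied correction (the sum of squared kernel weights is below one), a single forcing step leaves a residual slip; iterating, or over-relaxing as the record mentions, drives it to zero geometrically.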

  2. Optical pulling of airborne absorbing particles and smut spores over a meter-scale distance with negative photophoretic force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jinda; Hart, Adam G.; Li, Yong-qing, E-mail: liy@ecu.edu

    2015-04-27

    We demonstrate optical pulling of single light-absorbing particles and smut spores in air over a meter-scale distance using a single collimated laser beam based on negative photophoretic force. The micron-sized particles are pulled towards the light source at a constant speed of 1–10 cm/s in the optical pulling pipeline while undergoing transverse rotation at 0.2–10 kHz. The pulled particles can be manipulated and precisely positioned on the entrance window with an accuracy of ∼20 μm, and their chemical compositions can be characterized with micro-Raman spectroscopy.

  3. Localized and delocalized motion of colloidal particles on a magnetic bubble lattice.

    PubMed

    Tierno, Pietro; Johansen, Tom H; Fischer, Thomas M

    2007-07-20

    We study the motion of paramagnetic colloidal particles placed above magnetic bubble domains of a uniaxial garnet film and driven through the lattice by external magnetic field modulation. An external tunable precessing field propels the particles either in localized orbits around the bubbles or in superdiffusive or ballistic motion through the bubble array. This motion results from the interplay between the driving rotating signal, the viscous drag force and the periodic magnetic energy landscape. We explain the transition in terms of the incommensurability between the transit frequency of the particle through a unit cell and the modulation frequency. Ballistic motion dynamically breaks the symmetry of the array and the phase locked particles follow one of the six crystal directions.

  4. Characterization of airborne and bulk particulate from iron and steel manufacturing facilities.

    PubMed

    Machemer, Steven D

    2004-01-15

    Characterization of airborne and bulk particulate material from iron and steel manufacturing facilities, commonly referred to as kish, indicated that graphite flakes, and graphite flakes associated with spherical iron oxide particles, were unique particle characteristics useful in identifying particle emissions from iron and steel manufacturing. Characterization of airborne particulate material collected in receptor areas was consistent with multiple atmospheric release events of kish particles from the local iron and steel facilities into neighboring residential areas. Kish particles deposited in nearby residential areas included an abundance of graphite flakes, tens of micrometers to millimeters in size, and spherical iron oxide particles, submicrometer to tens of micrometers in size. Bulk kish from the local iron and steel facilities contained an abundance of similar particles. Approximately 60% of blast furnace kish by volume consisted of spherical iron oxide particles in the respirable size range, and basic oxygen furnace kish contained percent-level concentrations of strongly alkaline components such as calcium hydroxide. In addition, concentrations of respirable Mn in airborne particulate in residential areas and at the local iron and steel facilities were approximately 1.6 and 53 times, respectively, the inhalation reference concentration of 0.05 μg/m3 for chronic inhalation exposure to Mn. Thus, airborne release of kish may pose potential respirable-particulate, corrosive, or toxic hazards for human health and/or a corrosive hazard for property and the environment.

  5. Mixing state of ambient aerosols during different fog-haze pollution episodes in the Yangtze River Delta, China

    NASA Astrophysics Data System (ADS)

    Hu, Rui; Wang, Honglei; Yin, Yan; Chen, Kui; Zhu, Bin; Zhang, Zefeng; Kang, Hui; Shen, Lijuan

    2018-04-01

    The mixing state of aerosol particles was investigated using a single particle aerosol mass spectrometer (SPAMS) during a regional fog-haze episode in the Yangtze River Delta (YRD) on 16-28 Dec. 2015. The aerosols were analyzed and clustered into 12 classes: aged elemental carbon (Aged-EC), internally mixed organics and elemental carbon (ECOC), organic carbon (OC), Biomass, Amine, Ammonium, Na-K, V-rich, Pb-rich, Cu-rich, Fe-rich and Dust. Results showed that particles during short-term rainfalls mixed with more nitrate and oxidized organics, while they mixed with more ammonium and sulfate during long-term rainfall. Due to anthropogenic activities, stronger winds and solar radiation, the particle counts increased and the size ranges of particles broadened during haze. Carbonaceous particles and Na-K particles mixed with enhanced secondary species during haze and were markedly more acidic, especially those in the 0.6-1.2 μm size range. For local and long-range transported pollution, OC had distinct size distributions while the changes in ECOC were uniform. Secondary formation of ECOC contributed significantly in local pollution and affected much smaller particles (down to 0.5 μm) in long-range transported pollution, and long-range transported pollution was more conducive to the growth of OC. Particles mixed with more chloride and more nitrate/sulfate in local and long-range transported pollution, respectively.

  6. Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010

    DTIC Science & Technology

    2010-12-14

    …SLAM technique, since this setup, having a LIDAR with long-range high-accuracy measurement capability, allows accurate localization and mapping… achieve the accuracy of 25 cm due to the use of multi-dimensional information. OGM is, similarly to SLAM, carried out by using LIDAR data… as a result of the development and implementation of the hybrid feature-based/scan-matching Simultaneous Localization and Mapping (SLAM) technique…

  7. Stochastically gated local and occupation times of a Brownian particle

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-01-01

    We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.
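The two gated functionals described above can be written compactly. In the notation sketched here (my own shorthand, not taken from the paper: $X(s)$ is the particle position and $n(s)\in\{0,1\}$ the gate state, with $n=1$ when the gate is open), the gated local time at a target point $x_0$ and the occupation time of the positive half-line read

```latex
T_{x_0}(t) = \int_0^t \delta\bigl(X(s) - x_0\bigr)\, n(s)\, ds ,
\qquad
A(t) = \int_0^t \Theta\bigl(X(s)\bigr)\, ds ,
```

where $\Theta$ is the Heaviside step function. Note the asymmetry: in the local-time case the gate enters the integrand directly (the target only counts the particle while open), whereas in the occupation-time case the gate enters through the dynamics of $X(s)$ itself, via reflection at the origin whenever $n(s)=0$.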

  8. Effect of typhoon on atmospheric aerosol particle pollutants accumulation over Xiamen, China.

    PubMed

    Yan, Jinpei; Chen, Liqi; Lin, Qi; Zhao, Shuhui; Zhang, Miming

    2016-09-01

    The great influence of typhoons on air quality has been confirmed; however, little data, especially high-time-resolution aerosol particle data, have been available to establish the behavior of typhoons on air pollution. A single particle aerosol mass spectrometer (SPAMS) was employed to characterize particles, with particle number counts at high time resolution, for two typhoons with similar tracks, Soulik (2013) and Soudelor (2015). Three periods comprising five events were classified over the whole observation time, based on the meteorological parameters and particle pollutant properties: pre-typhoon (events 1 and 2), typhoon (events 3 and 4) and post-typhoon (event 5). The first pollutant group appeared during the pre-typhoon period (event 2), with high relative contributions of V-Ni rich particles; pollution from ship emissions, accumulated by local processes under a stagnant atmosphere, dominated the formation of this group before the typhoon. The second pollutant group was present during the typhoon (event 3), as the typhoon began to change the local wind direction and increase the wind speed, and the particle number count reached its maximum value. High relative contributions of V-Ni rich and dust particles with a low NO3(-)/SO4(2-) ratio were observed during this period, indicating that this pollutant group was governed by the combined effect of local pollutant emissions and long-range transport. This study offers insight into the relationship between air pollution and typhoons.

  9. Physics and biophysics experiments needed for improved risk assessment in space

    NASA Astrophysics Data System (ADS)

    Sihver, L.

    To improve the risk assessment of radiation carcinogenesis, late degenerative tissue effects, acute syndromes, synergistic effects of radiation and microgravity or other spacecraft factors, and hereditary effects on future LEO and interplanetary space missions, the radiobiological effects of cosmic radiation before and after shielding must be well understood. However, cosmic radiation is very complex and includes low- and high-LET components of many different neutral and charged particles. The radiobiology of heavy ions from GCRs and SPEs remains a subject of great concern due to the complicated dependence of their biological effects on ion type and energy, and on the ions' interactions with various targets both outside and within the spacecraft and the human body. In order to estimate the biological effects of cosmic radiation, accurate knowledge of the physics of the interactions of both charged and uncharged high-LET particles is necessary. Since it is practically impossible to measure all primary and secondary particles from all projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy-ion transport codes can be a helpful instrument to overcome these difficulties. These codes have to be carefully validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence and energy distributions within a certain accuracy. Validating the accuracy of the transport codes requires both space-based and ground-based accelerator experiments. In this paper, current and future physics and biophysics experiments needed for improved risk assessment in space are discussed. 
The cyclotron HIRFL (heavy ion research facility in Lanzhou) and the new synchrotron CSR (cooling storage ring), which can be used to provide ion beams for space related experiments at the Institute of Modern Physics, Chinese Academy of Sciences (IMP-CAS), will be presented together with the physical and biomedical research performed at IMP-CAS.

  10. Software-type Wave-Particle Interaction Analyzer on board the Arase satellite

    NASA Astrophysics Data System (ADS)

    Katoh, Yuto; Kojima, Hirotsugu; Hikishima, Mitsuru; Takashima, Takeshi; Asamura, Kazushi; Miyoshi, Yoshizumi; Kasahara, Yoshiya; Kasahara, Satoshi; Mitani, Takefumi; Higashio, Nana; Matsuoka, Ayako; Ozaki, Mitsunori; Yagitani, Satoshi; Yokota, Shoichiro; Matsuda, Shoya; Kitahara, Masahiro; Shinohara, Iku

    2018-01-01

    We describe the principles of the Wave-Particle Interaction Analyzer (WPIA) and the implementation of the Software-type WPIA (S-WPIA) on the Arase satellite. The WPIA is a new type of instrument for the direct and quantitative measurement of wave-particle interactions. The S-WPIA is installed on the Arase satellite as a software function running on the mission data processor. The S-WPIA on board the Arase satellite uses the electromagnetic field waveform measured by the waveform capture receiver of the plasma wave experiment (PWE), and the velocity vectors of electrons detected by the medium-energy particle experiment-electron analyzer (MEP-e), the high-energy electron experiment (HEP), and the extremely high-energy electron experiment (XEP). The prime objective of the S-WPIA is to measure the energy exchange between whistler-mode chorus emissions and energetic electrons in the inner magnetosphere. It is essential for the S-WPIA to synchronize the instruments to a relative time accuracy better than the period of the plasma wave oscillations. Since the typical frequency of chorus emissions in the inner magnetosphere is a few kHz, a relative time accuracy of better than 10 μs is required in order to measure the relative phase angle between the wave and velocity vectors. In the Arase satellite, a dedicated system has been developed to realize the time resolution required for inter-instrument communication. Here, both the time index distributed to all instruments through the satellite system and an S-WPIA clock signal, distributed from the PWE to the MEP-e, HEP, and XEP through a direct line, are used to synchronize the instruments within a relative time accuracy of a few μs. We also estimate the number of particles required to obtain statistically significant results with the S-WPIA, and the expected accumulation time, by referring to the specifications of the MEP-e and assuming a count rate for each detector.
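The stated 10 μs requirement follows directly from the phase-angle error that a timing misalignment produces at chorus frequencies. A quick back-of-the-envelope check (assuming a representative 3 kHz wave; the function name is my own):

```python
import math  # imported for completeness; the arithmetic below is plain float math

def phase_error_deg(f_hz, dt_s):
    """Phase-angle uncertainty (degrees) caused by a timing error dt at wave frequency f."""
    return 360.0 * f_hz * dt_s

period_us = 1e6 / 3e3                     # a 3 kHz chorus wave has a ~333 us period
err_10us = phase_error_deg(3e3, 10e-6)    # 10 us sync error -> ~10.8 deg of phase
err_few_us = phase_error_deg(3e3, 3e-6)   # a few-us sync error -> ~3.2 deg of phase
```

So a 10 μs relative timing accuracy keeps the wave-velocity phase angle known to roughly ten degrees at 3 kHz, and the few-μs synchronization actually achieved tightens this further.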

  11. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require Monte Carlo simulations, a method that makes quantitatively fitting the model's performance to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that reduce simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
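For comparison, accuracy in an M-location search task is commonly modeled in Signal Detection Theory with a max rule over noisy internal responses. The sketch below is a generic SDT baseline of that kind, not the Guided Search extension described in the record; the unit-variance Gaussian responses and the max decision rule are standard textbook assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct_localization(d_prime, n_locations, n_trials=200_000):
    """Monte Carlo estimate of localization accuracy under a max-rule
    signal detection model: on each trial the observer reports the
    location whose internal response is largest."""
    target = rng.normal(d_prime, 1.0, n_trials)                    # response at target location
    distractors = rng.normal(0.0, 1.0, (n_trials, n_locations - 1))  # noise-only locations
    return float(np.mean(target > distractors.max(axis=1)))

p_chance = p_correct_localization(0.0, 4)   # no signal: ~1/4 by symmetry
p_strong = p_correct_localization(3.0, 4)   # strong signal: near ceiling
```

The same model also reproduces the set-size effect: for a fixed d', accuracy falls as the number of candidate locations grows, which is the behavior any search-accuracy model must capture.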

  12. Colloidal Bandpass and Bandgap Filters

    NASA Astrophysics Data System (ADS)

    Yellen, Benjamin; Tahir, Mukarram; Ouyang, Yuyu; Nori, Franco

    2013-03-01

    Thermally or deterministically driven transport of objects through asymmetric potential energy landscapes (ratchet-based motion) is of considerable interest, both as a model for biological transport and as a method for controlling the flow of information, material, and energy. Here, we provide a general framework for implementing a colloidal bandpass filter, in which particles of a specific size range can be selectively transported through a periodic lattice, whereas larger or smaller particles are dynamically trapped in closed orbits. Our approach is based on quasi-static (adiabatic) transitions in a tunable potential energy landscape composed of a multi-frequency magnetic field input signal combined with the static field of a spatially periodic magnetization. By tuning the phase shifts between the input signal and the relative forcing coefficients, large particles may experience no local energy barriers, medium-sized particles only one local energy barrier, and small particles two local energy barriers. The odd symmetry present in this system can be used to nudge the medium-sized particles along an open pathway while the large or small beads remain trapped in closed orbits, leading to a bandpass filter, and vice versa for a bandgap filter. NSF CMMI-0800173, Youth 100 Scholars Fund.

  13. Dynamics of many-body localization in the presence of particle loss

    NASA Astrophysics Data System (ADS)

    van Nieuwenburg, EPL; Yago Malo, J.; Daley, AJ; Fischer, MH

    2018-01-01

    At long times, residual couplings to the environment become relevant even in the most isolated experiments, a crucial difficulty for the study of fundamental aspects of many-body dynamics. A particular example is many-body localization in a cold-atom setting, where incoherent photon scattering introduces both dephasing and particle loss. Whereas dephasing has been studied in detail and is known to destroy localization already at the level of non-interacting particles, the effect of particle loss is less well understood. A difficulty arises from the ‘non-local’ nature of the loss process, which complicates standard numerical tools based on matrix product decompositions. Utilizing symmetries of the Lindbladian dynamics, we investigate the effect of particle loss on the dynamics of observables as well as on the structure of the density matrix and the individual states. We find that particle loss in the presence of interactions leads to dissipation and a strong suppression of the (operator space) entanglement entropy. Our approach allows the interplay of dephasing and loss to be studied for pure and mixed initial states out to long times, which is important for future experiments using controlled coupling of the environment.

  14. Location estimation in wireless sensor networks using spring-relaxation technique.

    PubMed

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of many applications of large-scale distributed wireless sensor networks (WSNs). Because of the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a small number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its localization accuracy.
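The spring-relaxation idea treats each range measurement as a spring whose natural length is the measured distance, and lets the position estimate relax under the net spring force. A minimal single-node sketch, assuming exact range measurements to three known anchors (the function name, step size, and geometry are all illustrative, not from the paper):

```python
import numpy as np

def spring_relax(anchors, dists, x0, step=0.2, iters=500):
    """Estimate a 2D node position from ranges to known anchors by spring
    relaxation: each measured range acts as a spring of natural length dists[i]."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        force = np.zeros(2)
        for a, d in zip(anchors, dists):
            v = x - a
            r = np.linalg.norm(v)
            if r > 1e-12:
                # stretched spring (r > d) pulls the estimate toward the anchor,
                # compressed spring (r < d) pushes it away
                force += (d - r) * (v / r)
        x = x + step * force
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # noiseless ranges
est = spring_relax(anchors, dists, x0=[5.0, 5.0])
```

With noisy ranges the same loop still applies; the estimate then settles at a least-squares compromise among the springs rather than at a zero-energy point, and in the cooperative setting each node additionally attaches springs to its neighbors' current estimates.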

  15. SU-E-J-37: Feasibility of Utilizing Carbon Fiducials to Increase Localization Accuracy of Lumpectomy Cavity for Partial Breast Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y; Hieken, T; Mutter, R

    2015-06-15

    Purpose: To investigate the feasibility of utilizing carbon fiducials to increase localization accuracy of the lumpectomy cavity for partial breast irradiation (PBI). Methods: Carbon fiducials were placed intraoperatively in the lumpectomy cavity following resection of breast cancer in 11 patients. The patients were scheduled to receive whole breast irradiation (WBI) with a boost or 3D-conformal PBI. WBI patients were initially set up to skin tattoos using lasers, followed by orthogonal kV on-board-imaging (OBI) matching to bone per clinical practice. Cone beam CT (CBCT) was acquired weekly for offline review. For the boost component of WBI and for PBI, patients were set up with lasers, followed by OBI matching to fiducials, with final alignment by CBCT matching to fiducials. Using carbon fiducials as a surrogate for the lumpectomy cavity and CBCT matching to fiducials as the gold standard, setup uncertainties relative to lasers, OBI bone, OBI fiducials, and CBCT breast tissue were compared. Results: Minimal imaging artifacts were introduced by the fiducials on the planning CT and CBCT. The fiducials were sufficiently visible on OBI for online localization. The mean magnitude and standard deviation of setup errors were 8.4 mm ± 5.3 mm (n=84), 7.3 mm ± 3.7 mm (n=87), 2.2 mm ± 1.6 mm (n=40) and 4.8 mm ± 2.6 mm (n=87) for lasers, OBI bone, OBI fiducials and CBCT breast tissue, respectively. Significant migration occurred in one of 39 implanted fiducials, in a patient with a large postoperative seroma. Conclusion: OBI carbon fiducial-based setup can improve localization accuracy with minimal imaging artifacts. With increased localization accuracy, setup uncertainties can be reduced from 8 mm using OBI bone matching to 3 mm using OBI fiducial matching for PBI treatment. This work demonstrates the feasibility of utilizing carbon fiducials to increase localization accuracy to the lumpectomy cavity for PBI. 
This may be particularly attractive for localization in the setting of proton therapy and other scenarios in which metal clips are contraindicated.

  16. Comparison of Pelvic Phased-Array versus Endorectal Coil Magnetic Resonance Imaging at 3 Tesla for Local Staging of Prostate Cancer

    PubMed Central

    Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun

    2012-01-01

    Purpose: Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Materials and Methods: Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the sensitivity, specificity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Results: Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were no statistically significant differences in specificity, sensitivity or accuracy between the two groups. Conclusion: Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI. PMID:22476999
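The three reported figures of merit are simple functions of the 2x2 table comparing the MRI-predicted stage with the pathologic stage. A small helper makes the definitions explicit; the counts below are made up for illustration only, not the study's data.

```python
def staging_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy for detecting a staging feature
    (e.g. extracapsular extension) against the pathologic gold standard."""
    sensitivity = tp / (tp + fn)              # detected among truly positive
    specificity = tn / (tn + fp)              # cleared among truly negative
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts: 25 pathology-positive, 75 pathology-negative patients.
sens, spec, acc = staging_metrics(tp=10, fp=5, tn=70, fn=15)
```

The "high specificity, low sensitivity" pattern reported above corresponds to a table like this one: few false positives relative to the negatives, but many missed cases among the positives.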

  17. Effect of EEG electrode density on dipole localization accuracy using two realistically shaped skull resistivity models.

    PubMed

    Laarne, P H; Tenhunen-Eskelinen, M L; Hyttinen, J K; Eskola, H J

    2000-01-01

    The effect of the number of EEG electrodes on dipole localization was studied by comparing results obtained using the 10-20 and 10-10 electrode systems. Two anatomically detailed models with skull resistivity values of 177.6 Ω m and 67.0 Ω m were applied. Simulated potential values generated by current dipoles were applied to different combinations of the volume conductors and electrode systems. With noiseless data, the high- and low-resistivity models differed only slightly, in favour of the lower skull resistivity model. The localization errors were approximately three times larger when the low-resistivity model was used to generate the potentials but the high-resistivity model was applied for the inverse solution. The difference between the two electrode systems was minor, in favour of the 10-10 electrode system, when simulated noiseless potentials were used. In the presence of noise the dipole localization algorithm operated more accurately with the denser electrode system. In conclusion, increasing the number of recording electrodes seems to improve localization accuracy in the presence of noise. The absolute skull resistivity value also affects the accuracy, but using an incorrect value in the modelling calculations seems to be the most serious source of error.

  18. Generalized localization model of relaxation in glass-forming liquids

    PubMed Central

    Cicerone, Marcus T.; Zhong, Qin; Tyagi, Madhusudan

    2012-01-01

    Glassy solidification is characterized by two essential phenomena: localization of the solidifying material’s constituent particles and a precipitous increase in its structural relaxation time τ. Determining how these two phenomena relate is key to understanding glass formation. Leporini and coworkers have recently argued that τ universally depends on a localization length-scale, the Debye-Waller factor ⟨u²⟩, in a way that depends only upon the value of ⟨u²⟩ at the glass transition. Here we find that this ‘universal’ model does not accurately describe τ in several simulated and experimental glass-forming materials. We develop a new localization model of solidification, building upon the classical Hall-Wolynes and free volume models of glass formation, that accurately relates τ to ⟨u²⟩ in all systems considered. This new relationship is based on a consideration of the anisotropic nature of particle localization. The model also indicates the presence of a particle delocalization transition at high temperatures associated with the onset of glass formation. PMID:23393495

  19. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and foreground-background contrast. First, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance, in order to improve the contrasts of the foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are determined by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments on real infrared images show that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantitative evaluations.
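As a rough illustration of the threshold-search step, the sketch below runs a generic particle swarm optimizer over a pair of ordered plateau thresholds. The objective here is a stand-in quadratic; the paper's actual objective, built from the local entropy weighted histogram, is not reproduced, and all names and parameter values are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer maximizing `objective` over box bounds [lo, hi]."""
    dim = len(lo)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()            # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + attraction to personal best + attraction to global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# Stand-in objective: prefer a lower plateau near 2 and an upper plateau near 180.
def objective(t):
    t_low, t_up = t
    if t_low >= t_up:        # the two plateau thresholds must stay ordered
        return -np.inf
    return -(t_low - 2.0) ** 2 - (t_up - 180.0) ** 2

best = pso(objective, lo=np.array([0.0, 0.0]), hi=np.array([255.0, 255.0]))
```

Swapping in a histogram-based objective only requires replacing objective(); the ordering constraint between the two plateaus is enforced here by penalizing unordered candidates.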

  20. Method for facilitating the introduction of material into cells

    DOEpatents

    Holcomb, David E.; McKnight, Timothy E.

    2000-01-01

    The present invention is a method for creating a localized disruption within a boundary of a cell or structure by exposing the boundary to a set of energetically charged particles while regulating the energy of the charged particles so that they have an amount of kinetic energy sufficient to create a localized disruption within an area of the boundary; upon creation of the localized disruption, the amount of kinetic energy decreases to an amount insufficient to create further damage within the cell or structure beyond the boundary. The present invention is also a method for facilitating the introduction of a material into a cell or structure using the same methodology, then further exciting the area of the boundary where the localized disruption was created so as to create a localized temporary opening within the boundary, and then introducing the material through the temporary opening into the cell or structure.

  1. Effects of ULF waves on local and global energetic particles: Particle energy and species dependences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L. Y.; Yu, J.; Cao, J. B.

    After 06:13 UT on 24 August 2005, an interplanetary shock triggers large-amplitude ultralow-frequency (ULF) waves (|δB| ≥ 15 nT) in the Pc4–Pc5 wave band (1.6–9 mHz) near the noon geosynchronous orbit (6.6 RE). The local and global effects of ULF waves on energetic particles are observed by five Los Alamos National Laboratory satellites at different magnetic local times. The large-amplitude ULF waves cause the synchronous oscillations of energetic electrons and protons (≥75 keV) at the noon geosynchronous orbit. When the energetic particles have a negative phase space density radial gradient, they undergo rapid outward radial diffusion and loss in the wave activity region. In the particle drift paths without strong ULF waves, only the rapidly drifting energetic electrons (≥225 keV) display energy-dispersive oscillations and flux decays, whereas the slowly drifting electrons (<225 keV) and protons (75–400 keV) have no ULF oscillation and loss feature. When the dayside magnetopause is compressed to the geosynchronous orbit, most of the energetic electrons and protons are rapidly lost because of open drift trajectories. Furthermore, the global and multicomposition particle measurements demonstrate that the effect of ULF waves on nonlocal particle flux depends on the particle energy and species, whereas the magnetopause shadowing effect is independent of the energetic particle species. For the rapidly drifting outer radiation belt particles (≥225 keV), nonlocal particle loss/acceleration processes could also change their fluxes in the entire drift trajectory in the absence of the “Dst effect” and substorm injection.

  2. Effects of ULF waves on local and global energetic particles: Particle energy and species dependences

    DOE PAGES

    Li, L. Y.; Yu, J.; Cao, J. B.; ...

    2016-11-05

    After 06:13 UT on 24 August 2005, an interplanetary shock triggers large-amplitude ultralow-frequency (ULF) waves (|δB| ≥ 15 nT) in the Pc4–Pc5 wave band (1.6–9 mHz) near the noon geosynchronous orbit (6.6 RE). The local and global effects of ULF waves on energetic particles are observed by five Los Alamos National Laboratory satellites at different magnetic local times. The large-amplitude ULF waves cause the synchronous oscillations of energetic electrons and protons (≥75 keV) at the noon geosynchronous orbit. When the energetic particles have a negative phase space density radial gradient, they undergo rapid outward radial diffusion and loss in the wave activity region. In the particle drift paths without strong ULF waves, only the rapidly drifting energetic electrons (≥225 keV) display energy-dispersive oscillations and flux decays, whereas the slowly drifting electrons (<225 keV) and protons (75–400 keV) have no ULF oscillation and loss feature. When the dayside magnetopause is compressed to the geosynchronous orbit, most of the energetic electrons and protons are rapidly lost because of open drift trajectories. Furthermore, the global and multicomposition particle measurements demonstrate that the effect of ULF waves on nonlocal particle flux depends on the particle energy and species, whereas the magnetopause shadowing effect is independent of the energetic particle species. For the rapidly drifting outer radiation belt particles (≥225 keV), nonlocal particle loss/acceleration processes could also change their fluxes in the entire drift trajectory in the absence of the “Dst effect” and substorm injection.

  3. Continuous-flow trapping and localized enrichment of micro- and nano-particles using induced-charge electrokinetics.

    PubMed

    Zhao, Cunlu; Yang, Chun

    2018-02-14

    In this work, we report an effective microfluidic technique for continuous-flow trapping and localized enrichment of micro- and nano-particles by using induced-charge electrokinetic (ICEK) phenomena. The proposed technique utilizes a simple microfluidic device that consists of a straight microchannel and a conducting strip attached to the bottom wall of the microchannel. Upon application of the electric field along the microchannel, the conducting strip becomes polarized, introducing two types of ICEK phenomena, the ICEK flow vortex and particle dielectrophoresis, which a theoretical model formulated in this study identifies as jointly responsible for the trapping of particles over the edge of the conducting strip. Our experiments showed that successful trapping requires a combined AC/DC electric field: the DC component mainly induces electroosmotic flow for transporting particles to the trapping location, while the AC component induces ICEK phenomena over the edge of the conducting strip for particle trapping. The performance of the technique is examined with respect to the applied electric voltage, AC frequency and particle size. We observed that the trapped particles form a narrow band (nearly a straight line) defined by the edge of the conducting strip, thereby allowing localized particle enrichment. For instance, we found that under certain conditions a high particle enrichment ratio of 200 was achieved within 30 seconds. We also demonstrated that the proposed technique was able to trap particles from several microns down to several tens of nanometers. We believe that the proposed ICEK trapping offers great flexibility: the trapping location can be readily varied by controlling the position of the patterned conducting strip, and multiple-location trapping can be expected with the use of multiple conducting strips.

  4. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear multi-class classifier, PSO is used to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic-algorithm-optimized SVM, which proves that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
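
    The PSO update rule at the heart of such hyperparameter tuning can be sketched as follows. This is a minimal, self-contained illustration of standard PSO, not the paper's implementation; the objective here is a hypothetical stand-in for the cross-validation error of an SVM over its (C, gamma) parameters.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box via standard particle swarm optimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for an SVM cross-validation error, minimal at C=10, gamma=0.1.
cv_error = lambda p: (p[0] - 10.0) ** 2 + (p[1] - 0.1) ** 2
best, err = pso_minimize(cv_error, [(0.1, 100.0), (0.001, 1.0)])
```

    In the actual model, each objective evaluation would train the multi-class SVM on the three DGA ratios and return the cross-validated misclassification rate.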

  5. One-dimensional soil temperature assimilation experiment based on unscented particle filter and Common Land Model

    NASA Astrophysics Data System (ADS)

    Fu, Xiao Lei; Jin, Bao Ming; Jiang, Xiao Lei; Chen, Cheng

    2018-06-01

    Data assimilation is an efficient way to improve simulation/prediction accuracy in many fields of the geosciences, especially in meteorological and hydrological applications. This study takes the unscented particle filter (UPF) as an example and tests its performance under two probability distributions for the observation error, Gaussian and uniform, in two experiments with different assimilation frequencies: (1) assimilating hourly in situ soil surface temperature, and (2) assimilating the original Moderate Resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature (LST) once per day. The numerical experiment results show that the filter performs better as the assimilation frequency increases. In addition, the UPF is efficient for improving the simulation/prediction accuracy of soil variables (e.g., soil temperature), although it is not sensitive to the probability distribution assumed for the observation error in soil temperature assimilation.
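
    The predict-weight-resample cycle underlying any particle filter can be sketched as below. This is a plain bootstrap filter on a hypothetical scalar state (a random walk observed in Gaussian noise), not the UPF itself, which additionally uses an unscented Kalman proposal; all parameter values are illustrative.

```python
import math
import random

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0, seed=1):
    """Bootstrap particle filter for a random-walk state observed in Gaussian noise."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the (random-walk) process model.
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # Weight: Gaussian likelihood of the observation given each particle.
        weights = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Resample (multinomial) to combat particle degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Noisy observations of a state that ramps from 0 to 5 (synthetic example).
true_states = [0.5 * t for t in range(11)]
rng = random.Random(2)
obs = [s + rng.gauss(0.0, 1.0) for s in true_states]
est = particle_filter(obs)
```

    In the assimilation setting, the process model would be the Common Land Model's soil temperature update, and the observation would be the in situ or MODIS LST measurement.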

  6. Asymptotic Solutions for Optical Properties of Large Particles with Strong Absorption

    NASA Technical Reports Server (NTRS)

    Yang, Ping; Gao, Bo-Cai; Baum, Bryan A.; Hu, Yong X.; Wiscombe, Warren J.; Mishchenko, Michael I.; Winker, Dave M.; Nasiri, Shaima L.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    For scattering calculations involving nonspherical particles such as ice crystals, we show that the transverse wave condition is not applicable to the refracted electromagnetic wave in the context of geometric optics when absorption is involved. Either the TM wave condition (i.e., where the magnetic field of the refracted wave is transverse with respect to the wave direction) or the TE wave condition (i.e., where the electric field is transverse with respect to the propagating direction of the wave) may be assumed for the refracted wave in an absorbing medium to locally satisfy the electromagnetic boundary condition in the ray tracing calculation. The wave mode assumed for the refracted wave affects both the reflection and refraction coefficients. As a result, a nonunique solution for these coefficients is derived from the electromagnetic boundary condition. In this study we have identified the appropriate solution for the Fresnel reflection/refraction coefficients in light scattering calculations based on the ray tracing technique. We present the 3 x 2 refraction or transmission matrix that completely accounts for the inhomogeneity of the refracted wave in an absorbing medium. Using the Fresnel coefficients for an absorbing medium, we derive an asymptotic solution in an analytical format for the scattering properties of a general polyhedral particle. Numerical results are presented for hexagonal plates and columns with both preferred and random orientations. The asymptotic theory can produce reasonable accuracy in the phase function calculations in the infrared window region (wavelengths near 10 microns) if the particle size (in diameter) is on the order of 40 microns or larger. However, since strong absorption is assumed in the computation of the single-scattering albedo in the asymptotic theory, the single-scattering albedo does not change with variation of the particle size. As a result, the asymptotic theory can lead to substantial errors in the computation of the single-scattering albedo for small and moderate particle sizes. However, from comparison of the asymptotic results with the FDTD solution, it is expected that a convergence between the FDTD results and the asymptotic theory results can be reached when the particle size approaches 200 microns. We show that the phase function at side-scattering and backscattering angles is insensitive to particle shape if the random orientation condition is assumed. However, if preferred orientations are assumed for particles, the phase function has a strong dependence on scattering azimuthal angle. The single-scattering albedo also shows very strong dependence on the inclination angle of incident radiation with respect to the rotating axis for the preferred particle orientations.

  7. Lattice Boltzmann simulation of shear-induced particle migration in plane Couette-Poiseuille flow: Local ordering of suspension

    NASA Astrophysics Data System (ADS)

    Chun, Byoungjin; Kwon, Ilyoung; Jung, Hyun Wook; Hyun, Jae Chun

    2017-12-01

    The shear-induced migration of concentrated non-Brownian monodisperse suspensions in combined plane Couette-Poiseuille (C-P) flows is studied using a lattice Boltzmann simulation. The simulations are mainly performed for a particle volume fraction of ϕbulk = 0.4 and H/a = 44.3, 23.3, where H and a denote the channel height and radius of suspended particles, respectively. The simulation method is validated in two simple flows, plane Poiseuille and plane Couette flows. In the Poiseuille flow, particles migrate to the mid-plane of the channel where the local concentration is close to the limit of random-close-packing, and a random structure is also observed at the plane. In the Couette flow, the particle distribution remains in the initial uniform distribution. In the combined C-P flows, the behaviors of migration are categorized into three groups, namely, Poiseuille-dominant, Couette-dominant, and intermediate regimes, based on the value of a characteristic force, G, where G denotes the relative magnitude of the body force (P) against the wall-driving force (C). With respect to the Poiseuille-dominant regime, the location of the maximum concentration is shifted from the mid-plane to the lower wall moving in the same direction as the external body force, when G decreases. With respect to the Couette-dominant regime, the behavior is similar to that of a simple shear flow with the exception that a slightly higher concentration of particles is observed near the lower wall. However, with respect to the intermediate value of G, several layers of highly ordered particles are unexpectedly observed near the lower wall where the plane of maximum concentration is located. The locally ordered structure is mainly due to the lateral migration of particles and wall confinement. The suspended particles migrate toward a vanishingly small shear rate at the wall, and they are consequently layered into highly ordered two-dimensional structures at the high local volume fraction.

  8. Summer aerosol particle mixing in different climate and source regions of the United Arab Emirates (UAE)

    NASA Astrophysics Data System (ADS)

    Semeniuk, T. A.; Bruintjes, R. T.; Salazar, V.; Breed, D. W.; Jensen, T. L.; Buseck, P. R.

    2005-12-01

    The high aerosol loadings over the UAE reflect local to regional natural and anthropogenic pollution sources. To understand the impact of the high levels of pollution on both local and global climate systems, aerosol characterization flights in summer 2002 were used to sample major source areas, and to provide information on the interaction of aerosol particles within different geographic regions of the UAE. Atmospheric information and aerosol samples were collected from the marine/oil-industry region, NW coastal industries and cities, Oman Mountain Range, and NE coastal region. Aerosol samples were collected with multi-stage impactors and were analysed later using transmission electron microscopy. All samples are dominated by mineral grains or mineral aggregates in the coarse-mode fraction, and ammonium sulfate droplets in the fine-mode fraction. Differences in the types of mineral grains (different regional desert sources), inorganic salt and soot fractions, and types of internally mixed particles occur between regions. Oil-related industry sites have an abundance of coated and internally mixed particles, including sulfate-coated mineral grains, and mineral aggregates with chloride and sulfate. Cities have slightly elevated soot fractions, and typically have metal oxides. The NE coastal area is characterized by high soot fractions (local shipping) and mixed volatile droplets (regional Asian pollution). Particle populations within the convection zone over the Oman Mountain Range comprise an external mixture of particles from NW and NE sources, with many deliquesced particles. Both land-sea breezes in the NW regions and convection systems in the mountains mix aerosol particles from different local and regional sources, resulting in the formation of abundant internally mixed particles. The interaction between desert dust and anthropogenic pollution, and in particular the formation of mineral aggregates with chloride and sulfate, enhances the coarse-mode fraction and droplet fraction in industrial and mountainous regions.

  9. Multi-object model-based multi-atlas segmentation for rodent brains using dense discrete correspondences

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Styner, Martin

    2016-03-01

    The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including the widely used ANTs registration tool.

  10. Effects of early focal brain injury on memory for visuospatial patterns: selective deficits of global-local processing.

    PubMed

    Stiles, Joan; Stern, Catherine; Appelbaum, Mark; Nass, Ruth; Trauner, Doris; Hesselink, John

    2008-01-01

    Selective deficits in visuospatial processing are present early in development among children with perinatal focal brain lesions (PL). Children with right hemisphere PL (RPL) are impaired in configural processing, while children with left hemisphere PL (LPL) are impaired in featural processing. Deficits associated with LPL are less pervasive than those observed with RPL, but this difference may reflect the structure of the tasks used for assessment. Many of the tasks used to date may place greater demands on configural processing, thus highlighting this deficit in the RPL group. This study employed a task designed to place comparable demands on configural and featural processing, providing the opportunity to obtain within-task evidence of differential deficit. Sixty-two 5- to 14-year-old children (19 RPL, 19 LPL, and 24 matched controls) reproduced from memory a series of hierarchical forms (large forms composed of small forms). Global- and local-level reproduction accuracy was scored. Controls were equally accurate on global- and local-level reproduction. Children with RPL were selectively impaired on global accuracy, and children with LPL on local accuracy, thus documenting a double dissociation in global-local processing.

  11. Local random configuration-tree theory for string repetition and facilitated dynamics of glass

    NASA Astrophysics Data System (ADS)

    Lam, Chi-Hang

    2018-02-01

    We derive a microscopic theory of glassy dynamics based on the transport of voids by micro-string motions, each of which involves particles arranged in a line hopping simultaneously displacing one another. Disorder is modeled by a random energy landscape quenched in the configuration space of distinguishable particles, but transient in the physical space as expected for glassy fluids. We study the evolution of local regions with m coupled voids. At a low temperature, energetically accessible local particle configurations can be organized into a random tree with nodes and edges denoting configurations and micro-string propagations respectively. Such trees defined in the configuration space naturally describe systems defined in two- or three-dimensional physical space. A micro-string propagation initiated by a void can facilitate similar motions by other voids via perturbing the random energy landscape, realizing path interactions between voids or equivalently string interactions. We obtain explicit expressions of the particle diffusion coefficient and a particle return probability. Under our approximation, as temperature decreases, random trees of energetically accessible configurations exhibit a sequence of percolation transitions in the configuration space, with local regions containing fewer coupled voids entering the non-percolating immobile phase first. Dynamics is dominated by coupled voids of an optimal group size, which increases as temperature decreases. Comparison with a distinguishable-particle lattice model (DPLM) of glass shows very good quantitative agreements using only two adjustable parameters related to typical energy fluctuations and the interaction range of the micro-strings.

  12. About improving efficiency of the P3M algorithms when computing the inter-particle forces in beam dynamics

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2017-03-01

    In the paper, the problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm in computing the inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by the direct method of summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge, so the "pattern" contains pre-calculated potential values. This approach makes it possible to reduce the set of arithmetic operations performed in the innermost of the nested loops to an addition and an assignment operator and, therefore, to decrease the running time substantially. The simulation model developed in C++ substantiates this view, showing decent accuracy acceptable in particle beam calculations together with improved speed performance.
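
    The pattern-summation idea can be sketched as below: the potential of a unit point charge is tabulated once on an offset grid, so accumulating each particle's field onto the mesh costs only a multiply-add per node. This is an illustrative 2D sketch with hypothetical grid size and units, not the paper's C++ implementation.

```python
# Precompute the "pattern": potential of a unit point charge on a (2N x 2N) offset grid.
N = 32      # mesh size per side (hypothetical)
H = 1.0     # mesh spacing (hypothetical units)
pattern = [[0.0] * (2 * N) for _ in range(2 * N)]
for i in range(2 * N):
    for j in range(2 * N):
        r = H * ((i - N) ** 2 + (j - N) ** 2) ** 0.5
        pattern[i][j] = 1.0 / r if r > 0 else 0.0   # Coulomb-like potential, self-term zeroed

def accumulate_potential(grid, px, py, charge):
    """Add one particle's pre-tabulated potential onto the mesh.

    The innermost loop is a single multiply-add per mesh node: the expensive
    distance/potential evaluation was done once, when `pattern` was built.
    """
    for i in range(N):
        for j in range(N):
            grid[i][j] += charge * pattern[i - px + N][j - py + N]

grid = [[0.0] * N for _ in range(N)]
accumulate_potential(grid, 8, 8, 1.0)
accumulate_potential(grid, 24, 24, 1.0)
```

    A full P3M code would add the short-range particle-particle correction for close neighbors, which this sketch omits.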

  13. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
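
    The truncate-and-threshold idea behind both estimators can be sketched with a 1D orthogonal cosine series: expansion coefficients are estimated from the particles, and small (noise-dominated) coefficients are zeroed before reconstruction. This is a generic illustration of series density estimation with thresholding, not the paper's gridded fast-transform code; term count and threshold are arbitrary.

```python
import math
import random

def cosine_density(samples, n_terms=16, threshold=0.0):
    """Orthogonal-series density estimate on [0, 1] with a cosine basis.

    Coefficients below `threshold` in absolute value are zeroed, the same
    noise-removal idea as thresholding a wavelet expansion.
    """
    n = len(samples)
    coeffs = []
    for k in range(1, n_terms + 1):
        c = (2.0 / n) * sum(math.cos(k * math.pi * x) for x in samples)
        coeffs.append(c if abs(c) >= threshold else 0.0)

    def density(x):
        # Constant term of the basis is 1; coeffs[k-1] multiplies cos(k*pi*x).
        return 1.0 + sum(c * math.cos((k + 1) * math.pi * x)
                         for k, c in enumerate(coeffs))
    return density

rng = random.Random(4)
# Synthetic "beam": samples concentrated near the middle of [0, 1].
data = [min(max(rng.gauss(0.5, 0.1), 0.0), 1.0) for _ in range(5000)]
f = cosine_density(data, n_terms=12, threshold=0.05)
```

    The reconstructed density peaks near 0.5 and is nearly flat in the tails; raising the threshold suppresses more sampling noise at the cost of fine structure.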

  14. Towards improved magnetic fluid hyperthermia: major-loops to diminish variations in local heating.

    PubMed

    Munoz-Menendez, Cristina; Serantes, David; Ruso, Juan M; Baldomir, Daniel

    2017-06-07

    In the context of using magnetic nanoparticles for heat-mediated applications, the need for accurate knowledge of the local (at the nanoparticle level) heat generation, in addition to the usually studied global counterpart, has recently been highlighted. Such a need requires accurate knowledge of the links among the intrinsic particle properties, system characteristics and experimental conditions. In this work we have investigated the role of the particles' anisotropy polydispersity in relation to the amplitude (Hmax) of the AC magnetic field using a Monte Carlo technique. Our results indicate that it is better to use particles with large anisotropy for enhancing global heating, whereas for achieving homogeneous local heating it is better to use lower-anisotropy particles. The latter ensures that most of the system undergoes major-loop hysteresis conditions, which is the key point. This is equivalent to saying that low-anisotropy particles (i.e. those with less heating capability) may be better suited for accurate heat-mediated applications, which goes against some research trends in the literature that seek large anisotropy (and hence heating) values.

  15. Current reversals and metastable states in the infinite Bose-Hubbard chain with local particle loss

    NASA Astrophysics Data System (ADS)

    Kiefer-Emmanouilidis, M.; Sirker, J.

    2017-12-01

    We present an algorithm which combines the quantum trajectory approach to open quantum systems with a density-matrix renormalization-group scheme for infinite one-dimensional lattice systems. We apply this method to investigate the long-time dynamics in the Bose-Hubbard model with local particle loss starting from a Mott-insulating initial state with one boson per site. While the short-time dynamics can be described even quantitatively by an equation of motion (EOM) approach at the mean-field level, many-body interactions lead to unexpected effects at intermediate and long times: local particle currents far away from the dissipative site start to reverse direction ultimately leading to a metastable state with a total particle current pointing away from the lossy site. An alternative EOM approach based on an effective fermion model shows that the reversal of currents can be understood qualitatively by the creation of holon-doublon pairs at the edge of the region of reduced particle density. The doublons are then able to escape while the holes move towards the dissipative site, a process reminiscent—in a loose sense—of Hawking radiation.

  16. 3D/4D analyses of damage and fracture behaviours in structural materials via synchrotron X-ray tomography.

    PubMed

    Toda, Hiroyuki

    2014-11-01

    X-ray microtomography has been utilized for the in-situ observation of various structural metals under external loading. Recent advances in X-ray microtomography provide remarkable tools to image the interior of materials. In-situ X-ray microtomography provides a unique possibility to access the 3D character of internal microstructure and its time evolution behaviours non-destructively, thereby enabling advanced techniques for measuring local strain distribution. Local strain mapping is readily enabled by processing such high-resolution tomographic images either by the particle tracking technique or the digital image correlation technique [1]. Procedures for tracking microstructural features which have been developed by the authors [2], have been applied to analyse localised deformation and damage evolution in a material [3]. Typically several tens of thousands of microstructural features, such as particles and pores, are tracked in a tomographic specimen (0.2 - 0.3 mm(3) in volume). When a sufficient number of microstructural features is dispersed in 3D space, the Delaunay tessellation algorithm is used to obtain local strain distribution. With these techniques, 3D strain fields can be measured with reasonable accuracy. Even local crack driving forces, such as local variations in the stress intensity factor, crack tip opening displacement and J integral along a crack front line, can be measured from discrete crack tip displacement fields [4]. In the present presentation, complicated crack initiation and growth behaviour and the extensive formation of micro cracks ahead of a crack tip are introduced as examples. A novel experimental method has recently been developed by amalgamating a pencil beam X-ray diffraction (XRD) technique with the microstructural tracking technique [5]. The technique provides information about individual grain orientations and 1-micron-level grain morphologies in 3D together with high-density local strain mapping. The application of this technique to the deformation behavior of a polycrystalline aluminium alloy will be demonstrated in the presentation [6]. Synchrotron-based microtomography has mainly been applied to light materials because of their good X-ray transmission. In the present talk, the application of synchrotron-based microtomography to steels will also be introduced. Degradation of contrast and spatial resolution due to forward scattering could be avoided by selecting appropriate experimental conditions in order to obtain superior spatial resolution close to the physical limit even in ferrous materials [7]. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. EUV local CDU healing performance and modeling capability towards 5nm node

    NASA Astrophysics Data System (ADS)

    Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn

    2017-10-01

    Both local variability and optical proximity correction (OPC) errors are big contributors to the edge placement error (EPE) budget which is closely related to the device yield. The post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low dose, 30mJ/cm2 dose-to-size, positive tone developed (PTD) resist with relevant throughput in high volume manufacturing (HVM). The total local variability of the node 5nm (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. The CD healing process has complex etch proximity effects, so the OPC prediction accuracy is challenging to meet EPE requirements for the N5. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using ASML Tachyon OPC model.

  18. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational effort. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, the classical implementation of PSO can get trapped in local minima at later iterations as particle inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima, with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.

  19. Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization

    PubMed Central

    Mi, Jian; Takahashi, Yasutake

    2016-01-01

    Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot. PMID:27483279

  20. Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization.

    PubMed

    Mi, Jian; Takahashi, Yasutake

    2016-07-29

    Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot.
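
    The MCL cycle described above can be sketched as follows: particles are propagated with noisy motion, weighted by a tag-detection likelihood, and resampled. This is a deliberately simplified illustration, not the paper's system: the likelihood is a binary within-antenna-range model with a small floor, and the tag layout, ranges, and motion are hypothetical.

```python
import random

def mcl_step(particles, motion, detected_tag, tag_positions,
             antenna_range=0.3, rng=None, noise=0.05):
    """One predict-weight-resample cycle of Monte-Carlo localization.

    A particle is likely if the detected floor tag lies within the reader's
    antenna range of the particle's pose; a small floor weight keeps every
    particle's probability nonzero.
    """
    rng = rng or random.Random(0)
    # Predict: apply odometry with additive Gaussian noise.
    moved = [(x + motion[0] + rng.gauss(0, noise),
              y + motion[1] + rng.gauss(0, noise)) for x, y in particles]
    # Weight: binary likelihood from the tag-detection model.
    tx, ty = tag_positions[detected_tag]
    weights = [1.0 if (x - tx) ** 2 + (y - ty) ** 2 <= antenna_range ** 2 else 0.01
               for x, y in moved]
    # Resample in proportion to the weights.
    return rng.choices(moved, weights=weights, k=len(moved))

# Tags laid out on a 1 m grid (hypothetical layout).
tags = {(i, j): (float(i), float(j)) for i in range(5) for j in range(5)}
rng = random.Random(3)
particles = [(rng.uniform(0, 4), rng.uniform(0, 4)) for _ in range(1000)]
# The robot moves +1 m in x each step and reports the tag detected under it.
for tag in [(1, 0), (2, 0), (3, 0)]:
    particles = mcl_step(particles, (1.0, 0.0), tag, tags, rng=rng)
mx = sum(p[0] for p in particles) / len(particles)
my = sum(p[1] for p in particles) / len(particles)
```

    After three detections the particle cloud collapses around the true pose near (3, 0); a realistic likelihood model would reflect the measured detection characteristics of the HF-band antenna rather than a hard range cutoff.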
