Science.gov

Sample records for local approach methods

  1. A Local Coordinate Approach in the MLPG Method for Beam Problems

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Phillips, Dawn R.

    2002-01-01

    System matrices for Euler-Bernoulli beam problems for the meshless local Petrov-Galerkin (MLPG) method deteriorate as the number of nodes in the beam models is increased. The reason for this behavior is explained. To overcome this difficulty and improve the accuracy of the solutions, a local coordinate approach for the evaluation of the generalized moving least squares shape functions and their derivatives is proposed. The proposed approach retains the accuracy of the MLPG methods.
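
    The numerical idea behind the local coordinate approach, evaluating the moving least squares basis in coordinates centered at the evaluation point so that the moment matrix stays well scaled, can be sketched in one dimension as follows (the node layout, weight function, and support radius are illustrative assumptions, not the report's formulation):

```python
import numpy as np

def mls_shape_functions(x_eval, nodes, support=1.5):
    """1D moving least squares shape functions with a quadratic basis written in a
    local coordinate centered at x_eval (illustrative sketch, not the report's code)."""
    xi = (nodes - x_eval) / support          # local, scaled nodal coordinates
    r = np.abs(xi)
    w = np.where(r < 1.0, 1.0 - 3.0 * r**2 + 2.0 * r**3, 0.0)   # compact-support weight
    P = np.column_stack([np.ones_like(xi), xi, xi**2])          # basis evaluated at the nodes
    A = P.T @ (w[:, None] * P)                                  # moment matrix
    B = (w[:, None] * P).T
    # in the local coordinate the basis at x_eval is simply (1, 0, 0), so the moment
    # matrix stays well scaled no matter how large x_eval is in global coordinates
    return np.array([1.0, 0.0, 0.0]) @ np.linalg.solve(A, B)

nodes = np.linspace(0.0, 10.0, 21)
phi = mls_shape_functions(4.3, nodes)
print(round(phi.sum(), 6))   # partition of unity: prints 1.0
```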

  2. An Examination of Rater Performance on a Local Oral English Proficiency Test: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Yan, Xun

    2014-01-01

    This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performance from different perspectives. Quantitative results were also triangulated with qualitative rater comments to arrive at a more representative picture of…

  3. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method

    NASA Astrophysics Data System (ADS)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-01

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
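
    The compression step at the heart of the PNO idea, truncating the natural orbitals of a pair density by an occupation-number cutoff, can be sketched as below; the density expression, threshold, and matrix sizes are illustrative assumptions rather than the LPNO-CEPA working equations:

```python
import numpy as np

def pno_space(T_ij, tcut_pno=1e-7):
    """Keep the pair natural orbitals of pair (i,j) whose occupation numbers exceed
    the cutoff (schematic MP2-like pair density, not the LPNO-CEPA working equations)."""
    D_ij = T_ij @ T_ij.T + T_ij.T @ T_ij        # approximate virtual-virtual pair density
    occ, vecs = np.linalg.eigh(D_ij)            # PNO occupation numbers and coefficients
    keep = occ > tcut_pno
    return vecs[:, keep], occ[keep]

rng = np.random.default_rng(0)
# toy doubles amplitudes over 200 virtuals, with the rapid decay typical of real pairs
T = 1e-2 * rng.standard_normal((200, 200)) * np.exp(-0.05 * np.arange(200))
pnos, occ = pno_space(T)
print(f"kept {pnos.shape[1]} of 200 virtual orbitals")
```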

  4. Approaches to localized NMR spectroscopy in vivo

    SciTech Connect

    Garwood, M.G.

    1985-01-01

    Nuclear magnetic resonance (NMR) techniques are developed which allow spatially localized spectra to be obtained from living tissue. The localization methods are noninvasive and exploit the enhanced sensitivity afforded by surface coil probes. Techniques are investigated by computer simulation and experimentally verified by the use of phantom samples. The feasibility and utility of the techniques developed in this research are demonstrated by ³¹P spatial localization experiments involving various in vivo organs. In the first part of the thesis, two feasible approaches to localized spectroscopy, which were developed by other laboratories, are theoretically analyzed by computer simulation. An alternative approach is provided by the rotating frame zeugmatography experiment, which affords chemical-shift spectra displayed as a function of penetration distance into the sample. A further modification of the rotating frame experiment, the Fourier series window (FSW) approach, is developed, which utilizes various types of window functions to afford localization in one or a few tissue regions of interest with high sensitivity. Theoretical comparisons with depth pulse methods are also included, along with methods to mitigate adverse off-resonance behavior.

  5. Speeding up local correlation methods

    SciTech Connect

    Kats, Daniel

    2014-12-28

    We present two techniques that can substantially speed up local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to the virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of the local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.

  6. Enzyme-labeled Antigen Method: Development and Application of the Novel Approach for Identifying Plasma Cells Locally Producing Disease-specific Antibodies in Inflammatory Lesions

    PubMed Central

    Mizutani, Yasuyoshi; Shiogama, Kazuya; Onouchi, Takanori; Sakurai, Kouhei; Inada, Ken-ichi; Tsutsumi, Yutaka

    2016-01-01

    In chronic inflammatory lesions of autoimmune and infectious diseases, plasma cells are frequently observed. Antigens recognized by antibodies produced by the plasma cells mostly remain unclear. A new technique for identifying these corresponding antigens may give us a breakthrough for understanding the disease from a pathophysiological viewpoint, simply because the immunocytes are seen within the lesion. We have developed an enzyme-labeled antigen method for microscopic identification of the antigen recognized by specific antibodies locally produced in plasma cells in inflammatory lesions. Firstly, target biotinylated antigens were constructed by the wheat germ cell-free protein synthesis system or through chemical biotinylation. Next, proteins reactive to antibodies in tissue extracts were screened and antibody titers were evaluated by the AlphaScreen method. Finally, with the enzyme-labeled antigen method using the biotinylated antigens as probes, plasma cells producing specific antibodies were microscopically localized in fixed frozen sections. Our novel approach visualized tissue plasma cells that produced 1) autoantibodies in rheumatoid arthritis, 2) antibodies against major antigens of Porphyromonas gingivalis in periodontitis or radicular cyst, and 3) antibodies against a carbohydrate antigen, Strep A, of Streptococcus pyogenes in recurrent tonsillitis. Evaluation of local specific antibody responses is expected to contribute to clarifying previously unknown processes in inflammatory disorders. PMID:27006517

  7. Time Discretization Approach to Dynamic Localization Conditions

    NASA Astrophysics Data System (ADS)

    Papp, E.

    An alternative wavefunction for the description of the dynamic localization of a charged particle moving on a one-dimensional lattice under the influence of a periodic time-dependent electric field is written down. For this purpose the method of characteristics as applied by Dunlap and Kenkre [Phys. Rev. B 34, 3625 (1986)] has been modified by using a different integration variable. Handling this wavefunction, one is faced with the selection of admissible time values. This results in a conditionally exactly solvable problem, now by accounting specifically for the implementation of a time discretization working in conjunction with a related dynamic localization condition. In addition, one resorts to the strong field limit, which amounts to replacing, to leading order, the large-order zeros of the Bessel function J0(z), used before in connection with the cosinusoidal modulation, by integral multiples of π. Here z stands for the ratio between the field amplitude and the frequency. The modulation function of the electric field vanishes on the nodal points of the time grid, which stands for an effective field-free behavior. This opens the way to propose quickly tractable dynamic localization conditions for arbitrary periodic modulations. We have also found that the present time discretization approach produces the minimization of the mean square displacement characterizing the usual exact wavefunction. Other realizations and comparisons have also been presented.
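
    For the cosinusoidal modulation mentioned above, the localization condition can be inspected numerically; the short sketch below simply tabulates the first zeros of J0(z) next to the integer multiples of π that replace them in the strong-field limit (illustrative only):

```python
import numpy as np
from scipy.special import jn_zeros, j0

# First zeros of J0 versus the integer multiples of pi that replace them in the
# strong-field limit (z is the ratio of field amplitude to frequency).
for n, z in enumerate(jn_zeros(0, 6), start=1):
    print(f"n={n}:  J0 zero = {z:7.4f}   n*pi = {n * np.pi:7.4f}   J0(n*pi) = {j0(n * np.pi):+.4f}")
```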

  8. A local approach for focussed Bayesian fusion

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael; Goussev, Igor; Beyerer, Jürgen

    2009-04-01

    Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion, which is separated from fixed modeling assumptions. Using the small world formalism, we argue why this procedure conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focussing the fusion process solely on local regions that are task relevant with a high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task specific knowledge and prior information which we perform to build local models. In this paper, we prove the validity of this procedure using information theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner. Here, several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. For the practical realization of distributed local Bayesian fusion, software agents are predestined. There is a natural analogy between the resulting agent based architecture and criminal investigations in real life. We show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion. Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of reconnaissance, which highlights its high potential.

  9. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  10. Local electric dipole moments: A generalized approach.

    PubMed

    Groß, Lynn; Herrmann, Carmen

    2016-09-30

    We present an approach for calculating local electric dipole moments for fragments of molecular or supramolecular systems. This is important for understanding chemical gating and solvent effects in nanoelectronics, atomic force microscopy, and intensities in infrared spectroscopy. Owing to the nonzero partial charge of most fragments, "naively" defined local dipole moments are origin-dependent. Inspired by previous work based on Bader's atoms-in-molecules (AIM) partitioning, we derive a definition of fragment dipole moments which achieves origin-independence by relying on internal reference points. Instead of bond critical points (BCPs) as in existing approaches, we use as few reference points as possible, which are located between the fragment and the remainder(s) of the system and may be chosen based on chemical intuition. This allows our approach to be used with AIM implementations that circumvent the calculation of critical points for reasons of computational efficiency, for cases where no BCPs are found due to large interfragment distances, and with local partitioning schemes other than AIM which do not provide BCPs. It is applicable to both covalently and noncovalently bound systems. © 2016 Wiley Periodicals, Inc.
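
    The origin problem described above can be stated compactly. In the schematic relations below (notation assumed, not taken from the paper), a fragment F with AIM charges q_A and intrinsic atomic dipoles μ_A has a dipole that shifts with the reference point R_c whenever its total charge Q_F is nonzero, which is why R_c must be fixed by an internal, chemically motivated choice:

```latex
\[
  \boldsymbol{\mu}_F(\mathbf{R}_c)
    = \sum_{A \in F}\Bigl[\boldsymbol{\mu}_A + q_A\,(\mathbf{R}_A - \mathbf{R}_c)\Bigr],
  \qquad
  \boldsymbol{\mu}_F(\mathbf{R}_c + \boldsymbol{\Delta})
    = \boldsymbol{\mu}_F(\mathbf{R}_c) - Q_F\,\boldsymbol{\Delta},
  \qquad
  Q_F = \sum_{A \in F} q_A .
\]
```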

  11. Method for localizing heating in tumor tissue

    DOEpatents

    Doss, James D.; McCabe, Charles W.

    1977-04-12

    A method for localized tissue heating of tumors is disclosed. Localized radio frequency current fields are produced with specific electrode configurations. Several electrode configurations are disclosed, enabling variations in electrical and thermal properties of tissues to be exploited.

  12. LOCALIZING THE RANGELAND HEALTH METHOD FOR SOUTHEASTERN ARIZONA

    EPA Science Inventory

    The interagency manual Interpreting Indicators of Rangeland Health, Version 4 (Technical Reference 1734-6) provides a method for making rangeland health assessments. The manual recommends that the rangeland health assessment approach be adapted to local conditions. This technica...

  13. Methods and strategies of object localization

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.

    1989-01-01

    An important property of an intelligent robot is to be able to determine the location of an object in 3-D space. A general object localization system structure is proposed, some important issues on localization discussed, and an overview given for current available object localization algorithms and systems. The algorithms reviewed are characterized by their feature extracting and matching strategies; the range finding methods; the types of locatable objects; and the mathematical formulating methods.

  14. Signal localization: a new approach in signal discovery.

    PubMed

    Malov, Sergey V; Antonik, Alexey; Tang, Minzhong; Berred, Alexandre; Zeng, Yi; O'Brien, Stephen J

    2017-01-01

    A new approach for statistical association signal identification is developed in this paper. We consider a strategy for nonprecise signal identification by extending the well-known signal detection and signal identification methods applicable to the multiple testing problem. The collection of statistical instruments under the presented approach is much broader than under the traditional signal identification methods, allowing more efficient signal discovery. Further assessments of maximal value and average statistics in signal discovery are improved. While our method does not attempt to detect individual predictors, it instead detects sets of predictors that are jointly associated with the outcome. Therefore, an important application would be in genome-wide association studies (GWAS), where it can be used to detect genes which influence the phenotype but do not contain any individually significant single nucleotide polymorphism (SNP). We compare the power of the signal identification method based on extremes of single p-values with the signal localization method based on average statistics for logarithms of p-values. A simulation analysis informs the application of signal localization using the average statistics for wide signal discovery in a Gaussian white noise process. We apply the average statistics and the localization method to a GWAS to better discover the influence of regulatory loci in a Chinese cohort developed for risk of nasopharyngeal carcinoma (NPC).
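
    The contrast drawn above, an extreme single p-value statistic versus an average statistic accumulated over a window, can be illustrated with a toy scan; the window width, signal placement, and simulated data are purely illustrative:

```python
import numpy as np

def max_statistic(pvals):
    # classical "extreme single p-value" statistic
    return -np.log10(pvals.min())

def window_average_statistic(pvals, width=50):
    # best windowed average of -log10(p): accumulates many modest signals
    s = -np.log10(pvals)
    csum = np.concatenate(([0.0], np.cumsum(s)))
    means = (csum[width:] - csum[:-width]) / width
    return means.max(), int(means.argmax())      # best score and window start index

rng = np.random.default_rng(1)
p = rng.uniform(size=10_000)
p[4_000:4_050] = rng.uniform(0.001, 0.05, size=50)   # a wide, individually weak signal
print("max statistic    :", round(max_statistic(p), 2))
print("windowed average :", window_average_statistic(p))
```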

  15. Approaches to local climate action in Colorado

    NASA Astrophysics Data System (ADS)

    Huang, Y. D.

    2011-12-01

    Though climate change is a global problem, the impacts are felt on the local scale; it follows that the solutions must come at the local level. Fortunately, many cities and municipalities are implementing climate mitigation (or climate action) policies and programs. However, they face many procedural and institutional barriers to their efforts, such as lack of expertise or data, limited human and financial resources, and lack of community engagement (Krause 2011). To address the first obstacle, thirteen in-depth case studies were done of successful model practices ("best practices") of climate action programs carried out by various cities, counties, and organizations in Colorado, and one outside Colorado, and developed into "how-to guides" for other municipalities to use. Research was conducted by reading documents (e.g. annual reports, community guides, city websites), email correspondence with program managers and city officials, and via phone interviews. The information gathered was then compiled into a series of reports containing a narrative description of the initiative; an overview of the plan elements (target audience and goals); implementation strategies and any indicators of success to date (e.g. GHG emissions reductions, cost savings); and the adoption or approval process, as well as community engagement efforts and marketing or messaging strategies. The types of programs covered were energy action plans, energy efficiency programs, renewable energy programs, and transportation and land use programs. Across the thirteen case studies, there was a range of approaches to implementing local climate action programs, examined along two dimensions: focus on climate change (whether it was direct/explicit or indirect/implicit) and extent of government authority. This benchmarking exercise affirmed the conventional wisdom propounded by Pitt (2010), that peer pressure (that is, the presence of neighboring jurisdictions with climate initiatives), the level of

  16. A novel eye localization method with rotation invariance.

    PubMed

    Ren, Yan; Wang, Shuang; Hou, Biao; Ma, Jingjing

    2014-01-01

    This paper presents a novel learning method for precise eye localization, a challenge to be solved in order to improve the performance of face processing algorithms. Few existing approaches can directly detect and localize eyes with arbitrary angles in predicted eye regions, face images, and original portraits at the same time. To preserve the rotation-invariant property throughout the entire eye localization framework, a codebook of invariant local features is proposed for the representation of eye patterns. A heat map is then generated by integrating a 2-class sparse representation classifier with a pyramid-like detecting and locating strategy to fulfill the task of discriminative classification and precise localization. Furthermore, a series of prior information is adopted to improve the localization precision and accuracy. Experimental results on three different databases show that our method is capable of effectively locating eyes in arbitrary rotation situations (360° in plane).

  17. Dynamically screened local correlation method using enveloping localized orbitals.

    PubMed

    Auer, Alexander A; Nooijen, Marcel

    2006-07-14

    In this paper we present a local coupled cluster approach based on a dynamical screening scheme, in which amplitudes are either calculated at the coupled cluster level (in this case CCSD) or at the level of perturbation theory, employing a threshold driven procedure based on MP2 energy increments. This way, controllable accuracy and smooth convergence towards the exact result are obtained in the framework of an a posteriori approximation scheme. For the representation of the occupied space a new set of local orbitals is presented with the size of a minimal basis set. This set is atom centered, is nonorthogonal, and has shapes which are fairly independent of the details of the molecular system of interest. Two slightly different versions of combined local coupled cluster and perturbation theory equations are considered. In the limit both converge to the untruncated CCSD result. Benchmark calculations for four systems (heptane, serine, water hexamer, and oxadiazole-2-oxide) are carried out, and decay of the amplitudes, truncation error, and convergence towards the exact CCSD result are analyzed.

  18. Dynamically screened local correlation method using enveloping localized orbitals

    NASA Astrophysics Data System (ADS)

    Auer, Alexander A.; Nooijen, Marcel

    2006-07-01

    In this paper we present a local coupled cluster approach based on a dynamical screening scheme, in which amplitudes are either calculated at the coupled cluster level (in this case CCSD) or at the level of perturbation theory, employing a threshold driven procedure based on MP2 energy increments. This way, controllable accuracy and smooth convergence towards the exact result are obtained in the framework of an a posteriori approximation scheme. For the representation of the occupied space a new set of local orbitals is presented with the size of a minimal basis set. This set is atom centered, is nonorthogonal, and has shapes which are fairly independent of the details of the molecular system of interest. Two slightly different versions of combined local coupled cluster and perturbation theory equations are considered. In the limit both converge to the untruncated CCSD result. Benchmark calculations for four systems (heptane, serine, water hexamer, and oxadiazole-2-oxide) are carried out, and decay of the amplitudes, truncation error, and convergence towards the exact CCSD result are analyzed.

  19. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by a simulation experiment.
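
    A hedged sketch of the localization idea is given below: each T/R pair contributes a model function peaked on its iso-range ellipse, the model functions are summed, and the maximum of the sum is taken as the target position. A coarse grid search followed by a generic Nelder-Mead refinement stands in for the paper's gradient-descent and principal-component-analysis scheme, and all geometry is invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

def target_function(x, pairs, brs, sigma=10.0):
    # one Gaussian ridge per T/R pair, peaked on its iso-range ellipse
    total = 0.0
    for (t, r), s in zip(pairs, brs):
        miss = np.linalg.norm(x - t) + np.linalg.norm(x - r) - s
        total += np.exp(-0.5 * (miss / sigma) ** 2)
    return total

def localize(pairs, brs, extent=300.0, step=20.0):
    # coarse grid search for a starting point, then local refinement
    grid = np.arange(-extent, extent + step, step)
    cand = max(((gx, gy) for gx in grid for gy in grid),
               key=lambda g: target_function(np.array(g), pairs, brs))
    res = minimize(lambda x: -target_function(x, pairs, brs),
                   x0=np.array(cand), method="Nelder-Mead")
    return res.x

truth = np.array([120.0, -40.0])
pairs = [(np.array([0.0, 0.0]),     np.array([300.0, 0.0])),
         (np.array([0.0, 200.0]),   np.array([300.0, 200.0])),
         (np.array([-100.0, 50.0]), np.array([250.0, -150.0]))]
brs = [np.linalg.norm(truth - t) + np.linalg.norm(truth - r) for t, r in pairs]
print(localize(pairs, brs))   # should land near (120, -40)
```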

  20. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function’s maximum is on the circumference of the ellipse which is the iso-range for its model function’s T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by a simulation experiment. PMID:26566031

  1. Control methods for localization of nonlinear waves.

    PubMed

    Porubov, Alexey; Andrievsky, Boris

    2017-03-06

    A general form of a distributed feedback control algorithm based on the speed-gradient method is developed. The goal of the control is to achieve nonlinear wave localization. It is shown by example of the sine-Gordon equation that the generation and further stable propagation of a localized wave solution of a single nonlinear partial differential equation may be obtained independently of the initial conditions. The developed algorithm is extended to coupled nonlinear partial differential equations to obtain consistent localized wave solutions at rather arbitrary initial conditions. This article is part of the themed issue 'Horizons of cybernetical physics'.

  2. Control methods for localization of nonlinear waves

    NASA Astrophysics Data System (ADS)

    Porubov, Alexey; Andrievsky, Boris

    2017-03-01

    A general form of a distributed feedback control algorithm based on the speed-gradient method is developed. The goal of the control is to achieve nonlinear wave localization. It is shown by example of the sine-Gordon equation that the generation and further stable propagation of a localized wave solution of a single nonlinear partial differential equation may be obtained independently of the initial conditions. The developed algorithm is extended to coupled nonlinear partial differential equations to obtain consistent localized wave solutions at rather arbitrary initial conditions. This article is part of the themed issue 'Horizons of cybernetical physics'.

  3. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments in which simulated observations are assimilated into the bivariate Lorenz 95 model.
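
    A minimal sketch of Schur-product localization for a single state variable is shown below; the Gaussian taper and the periodic toy grid are illustrative choices, not the multivariate construction proposed in the paper:

```python
import numpy as np

def sample_covariance(ensemble):
    # ensemble: (n_members, n_state) -> ensemble-based sample covariance
    anom = ensemble - ensemble.mean(axis=0)
    return anom.T @ anom / (ensemble.shape[0] - 1)

def localization_matrix(n_state, length_scale):
    # distance-dependent correlation matrix on a periodic 1D grid (Lorenz-96 style)
    idx = np.arange(n_state)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, n_state - dist)
    return np.exp(-0.5 * (dist / length_scale) ** 2)

rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 40))                   # 20 members, 40 grid points
P_loc = sample_covariance(ens) * localization_matrix(40, length_scale=4.0)   # Schur product
print(P_loc.shape)
```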

  4. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments in which simulated observations are assimilated into the bivariate Lorenz 95 model.

  5. Local control approach to ultrafast electron transfer

    NASA Astrophysics Data System (ADS)

    Vindel-Zandbergen, Patricia; Meier, Christoph; Sola, Ignacio R.

    2016-10-01

    We study ultrafast electron transfer between separated nuclei using local control theory. By imposing electron ionization and electron transport through the continuum, different local control formulations are used to increase the yield of retrapping the electron at the desired nuclei. The control mechanism is based on impulsive de-excitation. Both symmetric and asymmetric nuclear arrangements are analyzed, as well as the role of the nuclear motion.

  6. A sensorimotor approach to sound localization.

    PubMed

    Aytekin, Murat; Moss, Cynthia F; Simon, Jonathan Z

    2008-03-01

    Sound localization is known to be a complex phenomenon, combining multisensory information processing, experience-dependent plasticity, and movement. Here we present a sensorimotor model that addresses the question of how an organism could learn to localize sound sources without any a priori neural representation of its head-related transfer function or prior experience with auditory spatial information. We demonstrate quantitatively that the experience of the sensory consequences of its voluntary motor actions allows an organism to learn the spatial location of any sound source. Using examples from humans and echolocating bats, our model shows that a naive organism can learn the auditory space based solely on acoustic inputs and their relation to motor states.

  7. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first method is the maximum vessel crossing method, which finds the region with the largest number of blood vessel crossing points. The second one is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal region-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database reaches 81.5%, while a higher accuracy of 99% is obtained for the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.

  8. Source Localization using Stochastic Approximation and Least Squares Methods

    SciTech Connect

    Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis

    2009-03-05

    This paper presents two approaches to locate the source of a chemical plume: Nonlinear Least Squares and Stochastic Approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate this source. The Nonlinear Least Squares technique is applied at different noise levels and compared with the localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the Least Squares method. SA methods are often better at coping with noisy input information than other search methods.
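
    A sketch of the nonlinear least squares variant alone is given below, under an assumed isotropic concentration model c = q/(4π|x − s|); the plume model, sensor layout, and noise level are invented for illustration, and the stochastic approximation alternative is not reproduced:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, sensors, measured):
    # assumed isotropic model: concentration ~ source strength / (4*pi*distance)
    sx, sy, q = params
    dist = np.linalg.norm(sensors - np.array([sx, sy]), axis=1)
    return q / (4.0 * np.pi * dist) - measured

rng = np.random.default_rng(3)
sensors = rng.uniform(0.0, 100.0, size=(15, 2))        # sensor positions
true_src, true_q = np.array([42.0, 67.0]), 500.0
clean = true_q / (4.0 * np.pi * np.linalg.norm(sensors - true_src, axis=1))
measured = clean + 0.02 * clean * rng.standard_normal(15)   # multiplicative sensor noise

fit = least_squares(residuals, x0=[50.0, 50.0, 100.0], args=(sensors, measured))
print(fit.x[:2])   # estimated source position, close to (42, 67)
```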

  9. Nonlinear damage detection and localization using a time domain approach

    NASA Astrophysics Data System (ADS)

    Boccardi, S.; Calla, D.-B.; Malfense Fierro, G.-P.; Ciampa, F.; Meo, M.

    2016-04-01

    This paper presents a damage detection and localization technique based on nonlinear elastic wave propagation in a damaged composite laminate. The proposed method relies on the time-of-arrival estimation of the second harmonic nonlinear response obtained with second order phase symmetry analysis filtering and burst excitation. The Akaike Information Criterion approach was used to estimate the arrival times measured by six receiver transducers. Then, a combination of Newton's method and unconstrained optimization was employed to solve a system of nonlinear equations in order to obtain the material damage coordinates. To validate this methodology, experimental tests were carried out on a damaged composite plate. The results showed that the technique allows the damage position to be calculated with high accuracy (maximum error ~5 mm).
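
    The localization step can be sketched as follows: given arrival times at several receivers and an assumed wave speed, solve for the damage coordinates and the unknown emission time. A generic nonlinear least squares solver stands in for the paper's Newton-based scheme, and the geometry, wave speed, and noise are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

SPEED = 1.5  # assumed group velocity, mm/us (illustrative value)

def residuals(params, receivers, arrivals):
    # predicted arrival = emission time + distance / speed
    x, y, t0 = params
    dist = np.linalg.norm(receivers - np.array([x, y]), axis=1)
    return t0 + dist / SPEED - arrivals

receivers = np.array([[0, 0], [300, 0], [300, 300], [0, 300], [150, 0], [0, 150]], float)
damage, t0 = np.array([180.0, 110.0]), 5.0
arrivals = t0 + np.linalg.norm(receivers - damage, axis=1) / SPEED
arrivals += 0.2 * np.random.default_rng(7).standard_normal(arrivals.size)   # timing noise

fit = least_squares(residuals, x0=[150.0, 150.0, 0.0], args=(receivers, arrivals))
print(fit.x[:2])   # estimated damage coordinates, close to (180, 110)
```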

  10. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  11. Using Local Born and Local Rytov Fourier Modeling and Migration Methods for Investigation of Heterogeneous Structures

    SciTech Connect

    Fehler, M.C.; Huang, L.-J.

    1998-12-10

    During the past few years, there has been interest in developing migration and forward modeling approaches that are both fast and reliable particularly in regions that have rapid spatial variations in structure. The authors have been investigating a suite of modeling and migration methods that are implemented in the wavenumber-space domains and operate on data in the frequency domain. The best known example of these methods is the split-step Fourier method (SSF). Two of the methods that the authors have developed are the extended local Born Fourier (ELBF) approach and the extended local Rytov Fourier (ELRF) approach. Both methods are based on solutions of the scalar (constant density) wave equation, are computationally fast and can reliably model effects of both deterministic and random structures. The authors have investigated their reliability for migrating both 2D synthetic data and real 2D field data. The authors have found that the methods give images that are better than those that can be obtained using other methods like the SSF and Kirchhoff migration approaches. More recently, the authors have developed an approach for solving the acoustic (variable density) wave equation and have begun to investigate its applicability for modeling one-way wave propagation. The methods will be introduced and their ability to model seismic wave propagation and migrate seismic data will be investigated. The authors will also investigate their capability to model forward wave propagation through random media and to image zones of small scale heterogeneity such as those associated with zones of high permeability.

  12. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on the ground with a grid layout. Based on this observation, we present a localization error correction method by exploring the geometric feature of the tile patterns. On top of the classical inertia-based positioning, our approach employs three fiber-optic sensors that are assembled under the bottom of the robot, presenting an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, are designed to detect a line's direction (vertical or horizontal) by monitoring the grid crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertia positioning. The proposed method is analyzed theoretically in terms of its error bound and also has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.

  13. Using the Storypath Approach to Make Local Government Understandable

    ERIC Educational Resources Information Center

    McGuire, Margit E.; Cole, Bronwyn

    2008-01-01

    Learning about local government seems boring and irrelevant to most young people, particularly to students from high-poverty backgrounds. The authors explore a promising approach for solving this problem, Storypath, which engages students in authentic learning and active citizenship. The Storypath approach is based on a narrative in which students…

  14. Field Theory Approach to Many-Body Localization

    NASA Astrophysics Data System (ADS)

    Altland, Alexander; Micklitz, Tobias

    2017-03-01

    We introduce an analytic approach to many-body localization (MBL) in random spin chains. We consider MBL within a first quantized framework where it becomes a localization phenomenon in the high-dimensional lattice defined by the Hilbert space of the clean system. Designed in analogy with the field-theory description of single particle localization, our approach describes wave packet propagation on that lattice after a disorder average has been performed and the system is controlled by only a few universal parameters. We discuss the stability of an ergodic weak-disorder and a localized strong-disorder phase, respectively, and demonstrate that the latter is protected by mechanisms which put MBL outside the universality class of Anderson localization.

  15. A PDE-Based Fast Local Level Set Method

    NASA Astrophysics Data System (ADS)

    Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo

    1999-11-01

    We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys. 79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.

  16. Locally Compact Quantum Groups. A von Neumann Algebra Approach

    NASA Astrophysics Data System (ADS)

    Van Daele, Alfons

    2014-08-01

    In this paper, we give an alternative approach to the theory of locally compact quantum groups, as developed by Kustermans and Vaes. We start with a von Neumann algebra and a comultiplication on this von Neumann algebra. We assume that there exist faithful left and right Haar weights. Then we develop the theory within this von Neumann algebra setting. In [Math. Scand. 92 (2003), 68-92] locally compact quantum groups are also studied in the von Neumann algebraic context. This approach is independent of the original C^*-algebraic approach in the sense that the earlier results are not used. However, this paper is not really independent because for many proofs, the reader is referred to the original paper where the C^*-version is developed. In this paper, we give a completely self-contained approach. Moreover, at various points, we do things differently. We have a different treatment of the antipode. It is similar to the original treatment in [Ann. Sci. École Norm. Sup. (4) 33 (2000), 837-934]. But together with the fact that we work in the von Neumann algebra framework, it allows us to use an idea from [Rev. Roumaine Math. Pures Appl. 21 (1976), 1411-1449] to obtain the uniqueness of the Haar weights in an early stage. We take advantage of this fact when deriving the other main results in the theory. We also give a slightly different approach to duality. Finally, we collect, in a systematic way, several important formulas. In an appendix, we indicate very briefly how the C^*-approach and the von Neumann algebra approach eventually yield the same objects. The passage from the von Neumann algebra setting to the C^*-algebra setting is more or less standard. For the other direction, we use a new method. It is based on the observation that the Haar weights on the C^*-algebra extend to weights on the double dual with central support and that all these supports are the same. Of course, we get the von Neumann algebra by cutting down the double dual with this unique

  17. [A non-local means approach for PET image denoising].

    PubMed

    Yin, Yong; Sun, Weifeng; Lu, Jie; Liu, Tonghai

    2010-04-01

    Denoising is an important issue for medical image processing. Based on the analysis of the non-local means algorithm recently reported by Buades A, et al. in international journals, we herein propose adapting it for PET image denoising. Experimental denoising results for real clinical PET images show that the non-local means method is superior to median filtering and Wiener filtering methods and that it can suppress noise in PET images effectively and preserve important structural details for diagnosis.
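
    A compact sketch of the non-local means idea on a small toy image is given below; the patch size, search window, and filtering parameter h are illustrative choices rather than values tuned for clinical PET data:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.3):
    """Weight each candidate pixel by the similarity of its surrounding patch to the
    reference patch, then average (toy parameters, not a clinical PET configuration)."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    half = search // 2
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]
            weights, acc = 0.0, 0.0
            for di in range(max(0, i - half), min(rows, i + half + 1)):
                for dj in range(max(0, j - half), min(cols, j + half + 1)):
                    cand = padded[di:di + patch, dj:dj + patch]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h**2)   # patch similarity weight
                    weights += w
                    acc += w * img[di, dj]
            out[i, j] = acc / weights
    return out

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0          # toy "hot" region
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("noisy MAE   :", round(np.abs(noisy - clean).mean(), 3))
print("filtered MAE:", round(np.abs(nl_means(noisy) - clean).mean(), 3))
```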

  18. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  19. Local-basis-function approach to computed tomography

    NASA Astrophysics Data System (ADS)

    Hanson, K. M.; Wecksung, G. W.

    1985-12-01

    In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.

  20. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirements for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, especially important for geoid predictions. For best results good data gridding algorithms are essential. In practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
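
    The planar ("flat-earth") variant of the approach can be sketched as a single FFT-based convolution of gridded gravity anomalies with the 1/s kernel, N = (1/2πγ) ∬ Δg/s dA; the grid spacing, normal gravity value, and the analytic treatment of the singular central cell below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

GAMMA = 9.81          # assumed normal gravity, m/s^2
D = 5_000.0           # assumed grid spacing, m

def geoid_from_anomalies(dg):
    """Planar Stokes convolution N = (1/(2*pi*gamma)) * conv(dg, 1/s) on a regular grid."""
    ny, nx = dg.shape
    y = (np.arange(ny) - ny // 2) * D
    x = (np.arange(nx) - nx // 2) * D
    s = np.hypot(*np.meshgrid(x, y))
    kernel = np.zeros_like(s)
    kernel[s > 0] = 1.0 / s[s > 0]
    # analytic mean of 1/s over the singular central cell of side D
    kernel[s == 0] = 4.0 * np.log(1.0 + np.sqrt(2.0)) / D
    return fftconvolve(dg, kernel, mode="same") * D * D / (2.0 * np.pi * GAMMA)

rng = np.random.default_rng(0)
dg = 1e-4 * rng.standard_normal((128, 128))    # white-noise anomalies (~10 mGal) just to exercise the code
print(geoid_from_anomalies(dg).std())          # geoid undulation spread in metres
```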

  1. Quantifying optimal accuracy of local primary sequence bioinformatics methods

    NASA Astrophysics Data System (ADS)

    Aalberts, Daniel

    2005-03-01

    Traditional bioinformatics methods scan primary sequences for local patterns. It is important to assess how accurate local primary sequence methods can be. We study the problem of donor pre-mRNA splice site recognition, where the sequence overlaps between real and decoy data sets can be quantified, exposing the intrinsic limitations of the performance of local primary sequence methods. We assess the accuracy of local primary sequence methods generally by studying how they scale with dataset size and demonstrate that our new Primary Sequence Ranking methods have superior performance. Our Primary Sequence Ranking analysis tools are available at http://rna.williams.edu/

  2. The Local Variational Multiscale Method for Turbulence Simulation.

    SciTech Connect

    Collis, Samuel Scott; Ramakrishnan, Srinivas

    2005-05-01

    Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable for complex geometries. Furthermore, high-order, hierarchical representation within DG provides a natural framework for a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall-model, and this success warrants future research. As in any high-order numerical method some

  3. SubCellProt: predicting protein subcellular localization using machine learning approaches.

    PubMed

    Garg, Prabha; Sharma, Virag; Chaudhari, Pradeep; Roy, Nilanjan

    2009-01-01

    High-throughput genome sequencing projects continue to churn out enormous amounts of raw sequence data. However, most of this raw sequence data is unannotated and, hence, not very useful. Among the various approaches to decipher the function of a protein, one is to determine its localization. Experimental approaches for proteome annotation including determination of a protein's subcellular localizations are very costly and labor intensive. Besides the available experimental methods, in silico methods present alternative approaches to accomplish this task. Here, we present two machine learning approaches for prediction of the subcellular localization of a protein from the primary sequence information. Two machine learning algorithms, k Nearest Neighbor (k-NN) and Probabilistic Neural Network (PNN) were used to classify an unknown protein into one of the 11 subcellular localizations. The final prediction is made on the basis of a consensus of the predictions made by two algorithms and a probability is assigned to it. The results indicate that the primary sequence derived features like amino acid composition, sequence order and physicochemical properties can be used to assign subcellular localization with a fair degree of accuracy. Moreover, with the enhanced accuracy of our approach and the definition of a prediction domain, this method can be used for proteome annotation in a high throughput manner. SubCellProt is available at www.databases.niper.ac.in/SubCellProt.
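
    One ingredient of the described pipeline, amino-acid-composition features combined with a k-nearest-neighbour vote, can be sketched as follows; the sequences, labels, and the absence of the PNN branch make this a toy illustration rather than SubCellProt itself:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    # 20-dimensional amino acid composition feature vector
    seq = seq.upper()
    return np.array([seq.count(a) for a in AA], dtype=float) / max(len(seq), 1)

def knn_predict(query, train_feats, train_labels, k=1):
    # majority vote among the k nearest training proteins in composition space
    d = np.linalg.norm(train_feats - composition(query), axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]

train_seqs = ["MKKLLPTAAAGLLLLAAQPAMA", "MDDDIAALVVDNGSGMCKAGFAG", "MLSLRQSIRFFKPATRTLCSSRYLL"]
train_labels = np.array(["secreted", "cytoplasm", "mitochondrion"])   # toy annotations
train_feats = np.vstack([composition(s) for s in train_seqs])
print(knn_predict("MKKSLLALSLVLAFSSATAAFA", train_feats, train_labels, k=1))
```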

  4. Fourier transform methods in local gravity modeling

    NASA Technical Reports Server (NTRS)

    Harrison, J. C.; Dickinson, M.

    1989-01-01

    New algorithms were derived for computing terrain corrections, all components of the attraction of the topography at the topographic surface and the gradients of these attractions. These algorithms utilize fast Fourier transforms, but, in contrast to methods currently in use, all divergences of the integrals are removed during the analysis. Sequential methods employing a smooth intermediate reference surface were developed to avoid the very large transforms necessary when making computations at high resolution over a wide area. A new method for the numerical solution of Molodensky's problem was developed to mitigate the convergence difficulties that occur at short wavelengths with methods based on a Taylor series expansion. A trial field on a level surface is continued analytically to the topographic surface, and compared with that predicted from gravity observations. The difference is used to compute a correction to the trial field and the process iterated. Special techniques are employed to speed convergence and prevent oscillations. Three different spectral methods for fitting a point-mass set to a gravity field given on a regular grid at constant elevation are described. Two of the methods differ in the way that the spectrum of the point-mass set, which extends to infinite wave number, is matched to that of the gravity field which is band-limited. The third method is essentially a space-domain technique in which Fourier methods are used to solve a set of simultaneous equations.

  5. Accounting for Linkage Disequilibrium in genome scans for selection without individual genotypes: the local score approach.

    PubMed

    Fariello, María Inés; Boitard, Simon; Mercier, Sabine; Robelin, David; Faraut, Thomas; Arnould, Cécile; Recoquillay, Julien; Bouchez, Olivier; Salin, Gérald; Dehais, Patrice; Gourichon, David; Leroux, Sophie; Pitel, Frédérique; Leterrier, Christine; SanCristobal, Magali

    2017-04-10

    Detecting genomic footprints of selection is an important step in the understanding of evolution. Accounting for linkage disequilibrium in genome scans increases detection power, but haplotype-based methods require individual genotypes and are not applicable on pool-sequenced samples. We propose to take advantage of the local score approach to account for linkage disequilibrium in genome scans for selection, cumulating (possibly small) signals from single markers over a genomic segment, to clearly pinpoint a selection signal. Using computer simulations, we demonstrate that this approach detects selection with higher power than several state-of-the-art single marker, windowing or haplotype-based approaches. We illustrate this on two benchmark data sets including individual genotypes, for which we obtain similar results with the local score and one haplotype-based approach. Finally, we apply the local score approach to Pool-Seq data obtained from a divergent selection experiment on behavior in quail, and obtain precise and biologically coherent selection signals: while competing methods fail to highlight any clear selection signature, our method detects several regions involving genes known to act on social responsiveness or autistic traits. Although we focus here on the detection of positive selection from multiple population data, the local score approach is general and can be applied to other genome scans for selection or other genome-wide analyses such as GWAS. This article is protected by copyright. All rights reserved.
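
    The cumulative idea behind the local score can be sketched with the Lindley recursion h_i = max(0, h_{i-1} + x_i) applied to per-marker scores x_i = -log10(p_i) - ξ; the tuning constant ξ and the simulated p-values below are illustrative choices:

```python
import numpy as np

def local_score(pvals, xi=1.0):
    x = -np.log10(pvals) - xi          # per-marker scores; negative on average under the null
    h, best, best_end = 0.0, 0.0, -1
    for i, score in enumerate(x):
        h = max(0.0, h + score)        # Lindley recursion accumulating nearby signals
        if h > best:
            best, best_end = h, i
    return best, best_end              # local score and the index where the best segment ends

rng = np.random.default_rng(5)
p = rng.uniform(size=5_000)
p[2_000:2_040] = rng.uniform(0.005, 0.08, size=40)   # a run of modest, individually weak signals
print(local_score(p))                                # peaks at the end of the simulated segment
```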

  6. Developmental differences in auditory detection and localization of approaching vehicles.

    PubMed

    Barton, Benjamin K; Lew, Roger; Kovesdi, Casey; Cottrell, Nicholas D; Ulrich, Thomas

    2013-04-01

    Pedestrian safety is a significant problem in the United States, with thousands being injured each year. Multiple risk factors exist, but one poorly understood factor is pedestrians' ability to attend to vehicles using auditory cues. Auditory information in the pedestrian setting is increasing in importance with the growing number of quieter hybrid and all-electric vehicles on America's roadways that do not emit sound cues pedestrians expect from an approaching vehicle. Our study explored developmental differences in pedestrians' detection and localization of approaching vehicles. Fifty children ages 6-9 years and 35 adults participated. Participants' performance varied significantly by age and with the speed and direction of the vehicle's approach. Results underscore the importance of understanding children's and adults' use of auditory cues for pedestrian safety and highlight the need for further research.

  7. Mixed Methods Approaches in Family Science Research

    ERIC Educational Resources Information Center

    Plano Clark, Vicki L.; Huddleston-Casas, Catherine A.; Churchill, Susan L.; Green, Denise O'Neil; Garrett, Amanda L.

    2008-01-01

    The complex phenomena of interest to family scientists require the use of quantitative and qualitative approaches. Researchers across the social sciences are now turning to mixed methods designs that combine these two approaches. Mixed methods research has great promise for addressing family science topics, but only if researchers understand the…

  8. A Localized Tau Method PDE Solver

    NASA Technical Reports Server (NTRS)

    Cottam, Russell

    2002-01-01

    In this paper we present a new form of the collocation method that allows one to find very accurate solutions to time marching problems without the unwelcome appearance of Gibbs phenomenon oscillations. The basic method is applicable to any partial differential equation whose solution is a continuous, albeit possibly rapidly varying function. Discontinuous functions are dealt with by replacing the function in a small neighborhood of the discontinuity with a spline that smoothly connects the function segments on either side of the discontinuity. This will be demonstrated when the solution to the inviscid Burgers equation is discussed.

  9. Dual mode stereotactic localization method and application

    DOEpatents

    Keppel, Cynthia E.; Barbosa, Fernando Jorge; Majewski, Stanislaw

    2002-01-01

    The invention described herein combines the structural digital X-ray image provided by conventional stereotactic core biopsy instruments with the additional functional metabolic gamma imaging obtained with a dedicated compact gamma imaging mini-camera. Before the procedure, the patient is injected with an appropriate radiopharmaceutical. The radiopharmaceutical uptake distribution within the breast under compression on a conventional examination table, expressed by the intensity of gamma emissions, is obtained for comparison (co-registration) with the digital mammography (X-ray) image. This dual modality mode of operation greatly increases the functionality of existing stereotactic biopsy devices by yielding a much smaller number of false positives than would be produced using X-ray images alone. The ability to obtain both the X-ray mammographic image and the nuclear medicine-based gamma image using a single device is made possible largely through the use of a novel, small and movable gamma imaging camera that permits its incorporation into the same table or system as that currently utilized to obtain X-ray based mammographic images for localization of lesions.

  10. New Methods for Crafting Locally Decision-Relevant Scenarios

    NASA Astrophysics Data System (ADS)

    Lempert, R. J.

    2015-12-01

    Scenarios can play an important role in helping decision makers to imagine future worlds, both good and bad, different from the one with which we are familiar and to take concrete steps now to address the risks generated by climate change. At their best, scenarios can effectively represent deep uncertainty; integrate over multiple domains; and enable parties with different expectations and values to expand the range of futures they consider, to see the world from different points of view, and to grapple seriously with the potential implications of surprising or inconvenient futures. These attributes of scenario processes can prove crucial in helping craft effective responses to climate change. But traditional scenario methods can also fail to overcome difficulties related to choosing, communicating, and using scenarios to identify, evaluate, and reach consensus on appropriate policies. Such challenges can limit scenarios' impact in broad public discourse. This talk will demonstrate how new decision support approaches can employ new quantitative tools that allow scenarios to emerge from a process of deliberation with analysis among stakeholders, rather than serve as inputs to it, thereby increasing the impacts of scenarios on decision making. This talk will demonstrate these methods in the design of a decision support tool to help residents of low lying coastal cities grapple with the long-term risks of sea level rise. In particular, this talk will show how information from the IPCC SSPs can be combined with local information to provide a rich set of locally decision-relevant information.

  11. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  12. An alternative subspace approach to EEG dipole source localization.

    PubMed

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-21

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
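
    For orientation, the sketch below (assumed, not from the paper) shows the classic MUSIC noise-subspace scan that FINES is compared against: the sensor covariance is eigendecomposed, the noise-only subspace is retained, and candidate locations are scored by how nearly orthogonal their lead-field vectors are to it. FINES would instead use only a small subset of noise-subspace vectors chosen to be closest, in principal angle, to the array manifold of a given brain region. The lead-field matrix here is a placeholder.

      import numpy as np

      def music_scan(R, leadfields, n_sources):
          """MUSIC pseudospectrum for real-valued (e.g. EEG) data.

          R          : (m, m) sensor covariance matrix
          leadfields : (m, k) candidate lead-field vectors, columns unit-normalized
          n_sources  : assumed number of dipole sources
          """
          eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
          En = eigvecs[:, : R.shape[0] - n_sources]       # estimated noise-only subspace
          proj = En.T @ leadfields                        # projection onto noise subspace
          return 1.0 / np.maximum(np.sum(proj ** 2, axis=0), 1e-12)   # peaks mark likely sources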

  13. Emission Inventory: A local agency's approach

    SciTech Connect

    Kestler, S.L.; Bien, D.L.; Gruber, L.R.

    1996-12-31

    The Department of Environmental Services-Air Quality Management (D.O.E.S.-A.Q.M.) in southwestern Ohio has inventoried stationary sources since the mid-1970s. That inventory has changed over the years from one of unknown data quality to one of high quality and substantial use by the public, agency personnel and industry. Since 1990, the scope of the agency's inventory has broadened to include the compilation of a local area source inventory every three years. This presentation explores a local agency's "real life" approach to compiling their emission inventory. We will discuss the improvements made and the pitfalls encountered in the inventory process over the years.

  14. Global/local methods research using the CSM testbed

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. Hayden, Jr.; Thompson, Danniella M.

    1990-01-01

    Research activities in global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  15. A Locally-Exact Homogenization Approach for Periodic Heterogeneous Materials

    SciTech Connect

    Drago, Anthony S.; Pindera, Marek-Jerzy

    2008-02-15

    Elements of the homogenization theory are utilized to develop a new micromechanics approach for unit cells of periodic heterogeneous materials based on locally-exact elasticity solutions. Closed-form expressions for the homogenized moduli of unidirectionally-reinforced heterogeneous materials are obtained in terms of Hill's strain concentration matrices valid under arbitrary combined loading, which yield the homogenized Hooke's law. Results for simple unit cells with off-set fibers, which require the use of periodic boundary conditions, are compared with corresponding finite-element results demonstrating excellent correlation.

  16. Localization and cooperative communication methods for cognitive radio

    NASA Astrophysics Data System (ADS)

    Duval, Olivier

    We study localization of nearby nodes and cooperative communication for cognitive radios. Cognitive radios sensing their environment to estimate the channel gain between nodes can cooperate and adapt their transmission power to maximize the capacity of the communication between two nodes. We study the end-to-end capacity of a cooperative relaying scheme using orthogonal frequency-division multiplexing (OFDM) modulation, under power constraints for both the base station and the relay station. The relay uses amplify-and-forward and decode-and-forward cooperative relaying techniques to retransmit messages on a subset of the available subcarriers. The power used in the base station and the relay station transmitters is allocated to maximize the overall system capacity. The subcarrier selection and power allocation are obtained based on convex optimization formulations and an iterative algorithm. Additionally, decode-and-forward relaying schemes are allowed to pair source and relayed subcarriers to increase further the capacity of the system. The proposed techniques outperform non-selective relaying schemes over a range of relay power budgets. Cognitive radios can be used for opportunistic access of the radio spectrum by detecting spectrum holes left unused by licensed primary users. We introduce a spectrum holes detection approach, which combines blind modulation classification, angle of arrival estimation and number of sources detection. We perform eigenspace analysis to determine the number of sources, and estimate their angles of arrival (AOA). In addition, we classify detected sources as primary or secondary users with their distinct second-order one-conjugate cyclostationarity features. Extensive simulations carried out indicate that the proposed system identifies and locates individual sources correctly, even at -4 dB signal-to-noise ratios (SNR). In environments with a high density of scatterers, several wireless channels experience non-line-of-sight (NLOS

  17. Reactive Gas transport in soil: Kinetics versus Local Equilibrium Approach

    NASA Astrophysics Data System (ADS)

    Geistlinger, Helmut; Jia, Ruijan

    2010-05-01

    Gas transport through the unsaturated soil zone was studied using an analytical solution of the gas transport model that is mathematically equivalent to the Two-Region model. The gas transport model includes diffusive and convective gas fluxes, interphase mass transfer between the gas and water phase, and biodegradation. The influence of non-equilibrium phenomena, spatially variable initial conditions, and transient boundary conditions are studied. The objective of this paper is to compare the kinetic approach for interphase mass transfer with the standard local equilibrium approach and to find conditions and time-scales under which the local equilibrium approach is justified. The time-scale of investigation was limited to the day-scale, because this is the relevant scale for understanding gas emission from the soil zone with transient water saturation. For the first time a generalized mass transfer coefficient is proposed that justifies the often used steady-state Thin-Film mass transfer coefficient for small and medium water-saturated aggregates of about 10 mm. The main conclusion from this study is that non-equilibrium mass transfer depends strongly on the temporal and small-scale spatial distribution of water within the unsaturated soil zone. For regions with low water saturation and small water-saturated aggregates (radius about 1 mm) the local equilibrium approach can be used as a first approximation for diffusive gas transport. For higher water saturation and medium radii of water-saturated aggregates (radius about 10 mm) and for convective gas transport, the non-equilibrium effect becomes more and more important if the hydraulic residence time and the Damköhler number decrease. Relative errors can range up to 100% and more. While for medium radii the local equilibrium approach describes the main features both of the spatial concentration profile and the time-dependence of the emission rate, it fails completely for larger aggregates (radius about 100 mm

  18. A practical approach for outdoors distributed target localization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Béjar, Benjamín; Zazo, Santiago

    2012-12-01

    Wireless sensor networks are posed as the new communication paradigm where the use of small, low-complexity, and low-power devices is preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms we propose a distributed localization algorithm for scenarios where received signal strength indicator readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved by a relaxed version using a semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given and a distributed consensus-based implementation of the algorithm is proposed based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is very suitable for its implementation in real sensor networks, i.e., it is scalable, robust against node failures and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum likelihood estimate.
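
    As background for the distance estimates the formulation starts from, here is a minimal, assumed sketch of inverting a log-distance path-loss model; the reference power, reference distance, and path-loss exponent are illustrative values, not parameters from the paper.

      import numpy as np

      def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0=1.0, path_loss_exp=3.0):
          """Invert RSS = P0 - 10 * n * log10(d / d0) to a distance estimate."""
          return d0 * 10.0 ** ((p0_dbm - np.asarray(rss_dbm)) / (10.0 * path_loss_exp))

      print(rss_to_distance([-40.0, -70.0, -85.0]))   # roughly 1, 10 and 32 metres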

  19. Comparison of local grid refinement methods for MODFLOW.

    PubMed

    Mehl, Steffen; Hill, Mary C; Leake, Stanley A

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions.

  20. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  1. Modelling pathogen transmission: the interrelationship between local and global approaches.

    PubMed Central

    Turner, Joanne; Begon, Michael; Bowers, Roger G

    2003-01-01

    We describe two spatial (cellular automaton) host-pathogen models with contrasting types of transmission, where the biologically realistic transmission mechanisms are based entirely on 'local' interactions. The two models, fixed contact area (FCA) and fixed contact number (FCN), may be viewed as local 'equivalents' of commonly used global density- (and frequency-) dependent models. Their outputs are compared with each other and with the patterns generated by these global terms. In the FCN model, unoccupied cells are bypassed, but in the FCA model these impede pathogen spread, extending the period of the epidemic and reducing the prevalence of infection when the pathogen persists. Crucially, generalized linear modelling reveals that the global transmission terms betaSI and beta'SI/N are equally good at describing transmission in both the FCA and FCN models when infected individuals are homogeneously distributed and N is approximately constant, as at the quasi-equilibrium. However, when N varies, the global frequency-dependent term beta'SI/N is better than the density-dependent one, betaSI, at describing transmission in both the FCA and FCN models. Our approach may be used more generally to compare different local contact structures and select the most appropriate global transmission term. PMID:12590777
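
    The contrast between the two global terms can be made concrete with a minimal, assumed sketch of an SI model under forward-Euler stepping; the parameter values are illustrative only. With N constant, choosing beta' = beta * N makes the two runs coincide, which is exactly the regime discussed above.

      def si_epidemic(S0, I0, beta, steps=1000, dt=0.01, frequency_dependent=False):
          """Simple SI model with either a betaSI or a beta*SI/N transmission term."""
          S, I = float(S0), float(I0)
          for _ in range(steps):
              N = S + I
              new_inf = beta * S * I / N if frequency_dependent else beta * S * I
              new_inf = min(new_inf * dt, S)          # cannot infect more than remain susceptible
              S -= new_inf
              I += new_inf
          return S, I

      print(si_epidemic(999, 1, beta=0.001))                          # density-dependent transmission
      print(si_epidemic(999, 1, beta=1.0, frequency_dependent=True))  # frequency-dependent, beta' = beta*N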

  2. Energy-Based Acoustic Source Localization Methods: A Survey

    PubMed Central

    Meng, Wei; Xiao, Wendong

    2017-01-01

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits and to point out possible future research directions. PMID:28212281

  3. Energy-Based Acoustic Source Localization Methods: A Survey.

    PubMed

    Meng, Wei; Xiao, Wendong

    2017-02-15

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits and to point out possible future research directions.
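
    To make the energy-decay idea behind these methods concrete, here is a minimal, assumed sketch of a least-squares grid search for a single source whose received energy falls off as a power of distance; the sensor layout, gain model, and decay exponent are illustrative, not taken from the survey.

      import numpy as np

      def energy_grid_search(sensor_xy, energies, grid, alpha=2.0):
          """Fit y_i ~ g / d_i**alpha over candidate source positions; return the best fit."""
          y = np.asarray(energies, dtype=float)
          best, best_err = None, np.inf
          for p in grid:
              d = np.maximum(np.linalg.norm(sensor_xy - p, axis=1), 1e-3)
              basis = 1.0 / d ** alpha
              g = (y @ basis) / (basis @ basis)       # closed-form optimal gain at this point
              err = np.sum((y - g * basis) ** 2)
              if err < best_err:
                  best, best_err = p, err
          return best

      sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      true_src = np.array([3.0, 7.0])
      obs = 50.0 / np.linalg.norm(sensors - true_src, axis=1) ** 2
      grid = np.array([[x, z] for x in np.linspace(0, 10, 101) for z in np.linspace(0, 10, 101)])
      print(energy_grid_search(sensors, obs, grid))   # recovers a point near (3, 7)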

  4. Discontinuous Galerkin Methods and Local Time Stepping for Wave Propagation

    SciTech Connect

    Grote, M. J.; Mitkova, T.

    2010-09-30

    Locally refined meshes impose severe stability constraints on explicit time-stepping methods for the numerical simulation of time dependent wave phenomena. To overcome that stability restriction, local time-stepping methods are developed, which allow arbitrarily small time steps precisely where small elements in the mesh are located. When combined with a discontinuous Galerkin finite element discretization in space, which inherently leads to a diagonal mass matrix, the resulting numerical schemes are fully explicit. Starting from the classical Adams-Bashforth multi-step methods, local time stepping schemes of arbitrarily high accuracy are derived. Numerical experiments validate the theory and illustrate the usefulness of the proposed time integration schemes.
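
    As a point of reference for the time integrators mentioned above, here is a minimal, assumed sketch of the second-order Adams-Bashforth update on which such schemes build; in a local time-stepping variant, only the unknowns attached to small mesh elements would be advanced with refined sub-steps.

      def adams_bashforth2(f, u0, t0, dt, n_steps):
          """AB2: u_{n+1} = u_n + dt * (3/2 f_n - 1/2 f_{n-1}), bootstrapped with forward Euler."""
          u, t = u0, t0
          f_prev = f(t, u)
          u = u + dt * f_prev
          t += dt
          for _ in range(n_steps - 1):
              f_curr = f(t, u)
              u = u + dt * (1.5 * f_curr - 0.5 * f_prev)
              f_prev, t = f_curr, t + dt
          return u

      # toy usage: du/dt = -u from u(0) = 1; result is close to exp(-1) = 0.3679
      print(adams_bashforth2(lambda t, u: -u, 1.0, 0.0, 1e-3, 1000))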

  5. Multi-scale non-local denoising method in neuroimaging.

    PubMed

    Chen, Yiping; Wang, Cheng; Wang, Liansheng

    2016-03-17

    The non-local means algorithm removes image noise in a way that differs from traditional techniques: it not only smooths the image but also preserves its details. However, the method suffers from high computational complexity. We propose a multi-scale non-local means method in which an adaptive multi-scale technique is implemented. In practice, for each selected scale, the input image is divided into small blocks, and the noise at a given pixel is removed using only one block. This overcomes the low efficiency of the original non-local means method. Our proposed method also benefits from the local average gradient orientation. For evaluation, we compared images processed with our technique against those produced by the original and the improved non-local means denoising methods. Extensive experiments show that our method is faster than the original and the improved non-local means methods, and that it is robust enough to remove noise in neuroimaging applications.
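
    For readers unfamiliar with the baseline, here is a minimal, assumed sketch of plain non-local means on a 1-D signal with a restricted search window; the patch size, window size, filtering parameter h, and toy signal are illustrative and unrelated to the paper's multi-scale scheme.

      import numpy as np

      def nlm_1d(signal, patch_radius=2, search_radius=10, h=0.5):
          """Replace each sample by a weighted average of samples with similar patches."""
          x = np.asarray(signal, dtype=float)
          padded = np.pad(x, patch_radius, mode="reflect")
          out = np.empty_like(x)
          for i in range(len(x)):
              p_i = padded[i : i + 2 * patch_radius + 1]
              lo, hi = max(0, i - search_radius), min(len(x), i + search_radius + 1)
              weights, values = [], []
              for j in range(lo, hi):
                  p_j = padded[j : j + 2 * patch_radius + 1]
                  d2 = np.mean((p_i - p_j) ** 2)          # squared patch distance
                  weights.append(np.exp(-d2 / h ** 2))
                  values.append(x[j])
              weights = np.asarray(weights)
              out[i] = np.dot(weights, values) / weights.sum()
          return out

      rng = np.random.default_rng(1)
      clean = np.repeat([0.0, 1.0, 0.0, 1.0], 50)
      noisy = clean + 0.2 * rng.standard_normal(200)
      print(float(np.mean((nlm_1d(noisy) - clean) ** 2)))   # well below the input MSE of about 0.04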

  6. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed to improve image segmentation performance. First, the reasons why traditional thresholding methods perform poorly on complicated images are analyzed, and the influences of global and local thresholding on segmentation are compared. Second, two methods that adaptively change the size of the local window according to local information are proposed and their characteristics analyzed; specifically, the number of edge pixels in the local window of the binarized variance image is used to adapt the window size. Finally, the superiority of the proposed method over the range-constrained Otsu method, the active contour model, the double Otsu method, Bradley's method, and distance-regularized level set evolution is demonstrated. Experiments validate that the proposed method preserves more detail and achieves a much higher area overlap measure than the other conventional methods.
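
    For reference, the sketch below (assumed) implements the plain Otsu threshold that the range-constrained, windowed variant builds on: the threshold is the histogram cut that maximizes between-class variance. The synthetic bimodal image is a stand-in.

      import numpy as np

      def otsu_threshold(image, bins=256):
          """Return the intensity cut that maximizes between-class variance."""
          hist, edges = np.histogram(np.ravel(image), bins=bins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                        # class-0 probability at each cut
          mu = np.cumsum(p * centers)              # cumulative mean
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
          sigma_b[~np.isfinite(sigma_b)] = 0.0
          return centers[np.argmax(sigma_b)]

      rng = np.random.default_rng(2)
      img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
      print(otsu_threshold(img))                   # lands between the two modes, near 120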

  7. Spectral Approach to Anderson Localization in a Disordered 2D Complex Plasma Crystal

    NASA Astrophysics Data System (ADS)

    Kostadinova, Eva; Liaw, Constanze; Matthews, Lorin; Busse, Kyle; Hyde, Truell

    2016-10-01

    In condensed matter, a crystal without impurities acts like a perfect conductor for a travelling wave-particle. As the level of impurities reaches a critical value, the resistance in the crystal increases and the travelling wave-particle experiences a transition from an extended to a localized state, which is called Anderson localization. Due to its wide applicability, the subject of Anderson localization has grown into a rich field in both physics and mathematics. Here, we introduce the mathematics behind the spectral approach to localization in infinite disordered systems and provide physical interpretation in context of both quantum mechanics and classical physics. We argue that the spectral analysis is an important contribution to localization theory since it avoids issues related to the use of boundary conditions, scaling, and perturbation. To test accuracy and applicability we apply the spectral approach to the case of a 2D hexagonal complex plasma crystal used as a macroscopic analog for a graphene-like medium. Complex plasma crystals exhibit characteristic distance and time scales, which are easily observable by video microscopy. As such, these strongly coupled many-particle systems are ideal for the study of localization phenomena. The goal of this research is to both expand the spectral method into the classical regime and show the potential of complex plasma as a macroscopic tool for localization experiments. NSF / DOE funding is gratefully acknowledged - PHY1414523 & PHY1262031.

  8. Perturbative approach for non local and high order derivative theories

    SciTech Connect

    Avilez, Ana A.; Vergara, J. David

    2009-04-20

    We propose a method for reducing the classical phase space of high-order derivative theories in both singular and non-singular cases. The mechanism is to reduce the high-order phase space by imposing supplementary constraints, such that the evolution takes place in a submanifold where high-order degrees of freedom are absent. The reduced theory is ordinary and is cured of the usual diseases of high-order theories, and it approximates low-energy dynamics well.

  9. Influence of skull modeling approaches on EEG source localization.

    PubMed

    Montes-Restrepo, Victoria; van Mierlo, Pieter; Strobbe, Gregor; Staelens, Steven; Vandenberghe, Stefaan; Hallez, Hans

    2014-01-01

    Electroencephalographic source localization (ESL) relies on an accurate model representing the human head for the computation of the forward solution. In this head model, the skull is of utmost importance due to its complex geometry and low conductivity compared to the other tissues inside the head. We investigated the influence of using different skull modeling approaches on ESL. These approaches, consisting in skull conductivity and geometry modeling simplifications, make use of X-ray computed tomography (CT) and magnetic resonance (MR) images to generate seven different head models. A head model with an accurately segmented skull from CT images, including spongy and compact bone compartments as well as some air-filled cavities, was used as the reference model. EEG simulations were performed for a configuration of 32 and 128 electrodes, and for both noiseless and noisy data. The results show that skull geometry simplifications have a larger effect on ESL than those of the conductivity modeling. This suggests that accurate skull modeling is important in order to achieve reliable results for ESL that are useful in a clinical environment. We recommend the following guidelines to be taken into account for skull modeling in the generation of subject-specific head models: (i) If CT images are available, i.e., if the geometry of the skull and its different tissue types can be accurately segmented, the conductivity should be modeled as isotropic heterogeneous. The spongy bone might be segmented as an erosion of the compact bone; (ii) when only MR images are available, the skull base should be represented as accurately as possible and the conductivity can be modeled as isotropic heterogeneous, segmenting the spongy bone directly from the MR image; (iii) a large number of EEG electrodes should be used to obtain high spatial sampling, which reduces the localization errors at realistic noise levels.

  10. Localized Surface Plasmon Resonance Biosensing: Current Challenges and Approaches

    PubMed Central

    Unser, Sarah; Bruzas, Ian; He, Jie; Sagle, Laura

    2015-01-01

    Localized surface plasmon resonance (LSPR) has emerged as a leader among label-free biosensing techniques in that it offers sensitive, robust, and facile detection. Traditional LSPR-based biosensing utilizes the sensitivity of the plasmon frequency to changes in local index of refraction at the nanoparticle surface. Although surface plasmon resonance technologies are now widely used to measure biomolecular interactions, several challenges remain. In this article, we have categorized these challenges into four categories: improving sensitivity and limit of detection, selectivity in complex biological solutions, sensitive detection of membrane-associated species, and the adaptation of sensing elements for point-of-care diagnostic devices. The first section of this article will involve a conceptual discussion of surface plasmon resonance and the factors affecting changes in optical signal detected. The following sections will discuss applications of LSPR biosensing with an emphasis on recent advances and approaches to overcome the four limitations mentioned above. First, improvements in limit of detection through various amplification strategies will be highlighted. The second section will involve advances to improve selectivity in complex media through self-assembled monolayers, “plasmon ruler” devices involving plasmonic coupling, and shape complementarity on the nanoparticle surface. The following section will describe various LSPR platforms designed for the sensitive detection of membrane-associated species. Finally, recent advances towards multiplexed and microfluidic LSPR-based devices for inexpensive, rapid, point-of-care diagnostics will be discussed. PMID:26147727

  11. Multilevel local refinement and multigrid methods for 3-D turbulent flow

    SciTech Connect

    Liao, C.; Liu, C.; Sung, C.H.; Huang, T.T.

    1996-12-31

    A numerical approach based on multigrid, multilevel local refinement, and preconditioning methods for solving the incompressible Reynolds-averaged Navier-Stokes equations is presented. 3-D turbulent flow around an underwater vehicle is computed using 3 multigrid levels and 2 local refinement grid levels. The global grid is 24 x 8 x 12; the first patch is 40 x 16 x 20 and the second patch is 72 x 32 x 36. Fourth-order artificial dissipation is used for numerical stability, and the conservative artificial compressibility method is used to further improve convergence. To improve the accuracy at the coarse/fine grid interface of the local refinement, a flux interpolation method is used at the refined-grid boundary. The numerical results are in good agreement with experimental data, and local refinement improves the prediction accuracy significantly. The flux interpolation method preserves conservation on the composite grid, thereby further improving the prediction accuracy.

  12. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
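
    The distinction drawn above can be illustrated with a minimal, assumed sketch: a local sensitivity from one-at-a-time finite differences around a nominal point, versus a crude global, variance-based indicator from random sampling. The test function and bounds are illustrative only.

      import numpy as np

      def local_sensitivity(f, x0, eps=1e-6):
          """Central finite-difference derivatives of f with respect to each input at x0."""
          x0 = np.asarray(x0, dtype=float)
          grads = []
          for i in range(len(x0)):
              step = np.zeros_like(x0)
              step[i] = eps
              grads.append((f(x0 + step) - f(x0 - step)) / (2 * eps))
          return np.array(grads)

      def crude_global_sensitivity(f, bounds, n=10000, seed=0):
          """Rough first-order indices: variance of binned conditional means over total variance."""
          rng = np.random.default_rng(seed)
          X = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(n, len(bounds)))
          y = np.apply_along_axis(f, 1, X)
          shares = []
          for i in range(len(bounds)):
              order = np.argsort(X[:, i])
              bin_means = [y[chunk].mean() for chunk in np.array_split(order, 50)]
              shares.append(np.var(bin_means) / y.var())
          return np.array(shares)

      f = lambda x: x[0] + 10 * x[1] ** 2
      print(local_sensitivity(f, [1.0, 1.0]))                 # about [1, 20] at the nominal point
      print(crude_global_sensitivity(f, [(-1, 1), (-1, 1)]))  # the second input dominates globally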

  13. Ultrasound localization of the sacral plexus using a parasacral approach.

    PubMed

    Ben-Ari, Alon Y; Joshi, Rama; Uskova, Anna; Chelly, Jacques E

    2009-06-01

    In this report, we describe the feasibility of locating the sacral plexus nerve using a parasacral approach and an ultrasound-guided technique. The parasacral region using a curved probe (2-5 MHz) was scanned in 17 patients in search of the medial border of the ischial bone and the lateral border of the sacrum, which represent the limit of the greater sciatic foramen. In addition, attempts were made to identify the piriformis muscles and the gluteal arteries. The sacral plexus was identified at the level of the sciatic foramen as a round hyperechoic structure. The gluteal arteries were identified in 10 of 17 patients, but we failed to positively identify the piriformis muscle in any patient. To confirm localization of the sacral plexus, an insulated needle attached to a nerve stimulator was advanced and, in each case, a sacral plexus motor response was elicited (plantar flexion-12, dorsal flexion-1, hamstring muscle stimulation-3, gastrocnemius muscle stimulation-1-not recorded) at a current between 0.2 and 0.5 mA. No complications were observed. This report confirms the feasibility of using ultrasound to locate the sacral plexus using a parasacral approach.

  14. Community-Based Outdoor Education Using a Local Approach to Conservation

    ERIC Educational Resources Information Center

    Maeda, Kazushi

    2005-01-01

    Local people of a community interact with nature in a way that is mediated by their local cultures and shape their own environment. We need a local approach to conservation for the local environment adding to the political or technological approaches for global environmental problems such as the destruction of the ozone layer or global warming.…

  15. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization

    PubMed Central

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-01-01

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms. PMID:27258279

  16. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
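
    An illustrative, assumed sketch of the weighted-centroid idea with one very simple robustification (keeping only the strongest readings); the actual robust estimators in the paper are more elaborate, and the AP heights and RSS values below are made up.

      import numpy as np

      def weighted_centroid_height(ap_heights, rss_dbm, keep_strongest=5):
          """Estimate the mobile's vertical position as an RSS-weighted centroid of AP heights."""
          heights = np.asarray(ap_heights, dtype=float)
          rss = np.asarray(rss_dbm, dtype=float)
          idx = np.argsort(rss)[-keep_strongest:]     # drop weak, multipath-dominated readings
          w = 10.0 ** (rss[idx] / 10.0)               # dBm -> linear power weights
          return float(np.dot(w, heights[idx]) / w.sum())

      heights = [3.0, 3.0, 6.0, 6.0, 9.0, 9.0]        # AP mounting heights per floor, in metres
      rss = [-55, -60, -48, -50, -75, -80]            # the MS hears the second-floor APs best
      print(weighted_centroid_height(heights, rss))   # close to 6 m, i.e. the second floor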

  17. Damping filter method for obtaining spatially localized solutions.

    PubMed

    Teramura, Toshiki; Toh, Sadayoshi

    2014-05-01

    Spatially localized structures are key components of turbulence and other spatiotemporally chaotic systems. From a dynamical systems viewpoint, it is desirable to obtain corresponding exact solutions, though their existence is not guaranteed. A damping filter method is introduced to obtain variously localized solutions and adapted in two typical cases. This method introduces a spatially selective damping effect to make a good guess at the exact solution, and we can obtain an exact solution through a continuation with the damping amplitude. The first target is a steady solution to the Swift-Hohenberg equation, which is a representative of bistable systems in which localized solutions coexist and a model for spanwise-localized cases. Not only solutions belonging to the well-known snaking branches but also those belonging to isolated branches known as "isolas" are found with continuation paths between them in phase space extended with the damping amplitude. This indicates that this spatially selective excitation mechanism has an advantage in searching spatially localized solutions. The second target is a spatially localized traveling-wave solution to the Kuramoto-Sivashinsky equation, which is a model for streamwise-localized cases. Since the spatially selective damping effect breaks Galilean and translational invariances, the propagation velocity cannot be determined uniquely while the damping is active, and a singularity arises when these invariances are recovered. We demonstrate that this singularity can be avoided by imposing a simple condition, and a localized traveling-wave solution is obtained with a specific propagation speed.

  18. Local and Non-local Regularization Techniques in Emission (PET/SPECT) Tomographic Image Reconstruction Methods.

    PubMed

    Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem

    2016-06-01

    Emission tomographic image reconstruction is an ill-posed problem: the data are limited and noisy and subject to various image-degrading effects, which leads to noisy reconstructions. Explicit regularization within iterative reconstruction methods is considered a better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce this noise, but they produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the target image. These techniques address the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of regularization function, the neighbourhood size, or the calibration of several empirical parameters. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and emission computed tomography in particular, with the aim of improving the quality of the resulting images.

  19. Multi-Scale Jacobi Method for Anderson Localization

    NASA Astrophysics Data System (ADS)

    Imbrie, John Z.

    2015-11-01

    A new KAM-style proof of Anderson localization is obtained. A sequence of local rotations is defined, such that off-diagonal matrix elements of the Hamiltonian are driven rapidly to zero. This leads to the first proof via multi-scale analysis of exponential decay of the eigenfunction correlator (this implies strong dynamical localization). The method has been used in recent work on many-body localization (Imbrie in On many-body localization for quantum spin chains, arXiv:1403.7837, 2014).
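
    The elementary operation named above can be sketched as follows (an assumed illustration, not the paper's construction): a single Jacobi rotation that zeroes one off-diagonal element of a symmetric matrix; the proof organizes sequences of such local rotations across length scales.

      import numpy as np

      def jacobi_rotation(H, i, j):
          """Rotate rows/columns i and j of symmetric H so that H[i, j] becomes zero."""
          A = H.copy()
          if np.isclose(A[i, j], 0.0):
              return A
          theta = 0.5 * np.arctan2(2 * A[i, j], A[i, i] - A[j, j])
          c, s = np.cos(theta), np.sin(theta)
          G = np.eye(A.shape[0])
          G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
          return G.T @ A @ G

      H = np.array([[2.0, 0.5, 0.0],
                    [0.5, 1.0, 0.3],
                    [0.0, 0.3, 3.0]])
      print(np.round(jacobi_rotation(H, 0, 1), 6))    # the (0, 1) entry is driven to zero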

  20. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grids points and functions that are induced from a one dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.

  1. Intelligent Resource Management for Local Area Networks: Approach and Evolution

    NASA Technical Reports Server (NTRS)

    Meike, Roger

    1988-01-01

    The Data Management System network is a complex and important part of manned space platforms. Its efficient operation is vital to crew, subsystems and experiments. AI is being considered to aid in the initial design of the network and to augment the management of its operation. The Intelligent Resource Management for Local Area Networks (IRMA-LAN) project is concerned with the application of AI techniques to network configuration and management. A network simulation was constructed employing real time process scheduling for realistic loads, and utilizing the IEEE 802.4 token passing scheme. This simulation is an integral part of the construction of the IRMA-LAN system. From it, a causal model is being constructed for use in prediction and deep reasoning about the system configuration. An AI network design advisor is being added to help in the design of an efficient network. The AI portion of the system is planned to evolve into a dynamic network management aid. The approach, the integrated simulation, project evolution, and some initial results are described.

  2. Data Processing Approach for Localizing Bio-magnetic Sources in the Brain

    NASA Astrophysics Data System (ADS)

    Pai, Hung-I.; Tseng, Chih-Yuan; Lee, H. C.

    2007-07-01

    Magnetoencephalography (MEG) provides dynamic spatio-temporal insight into neural activity in the cortex. Because the number of possible sources is far greater than the number of MEG detectors, the problem of localizing sources directly from MEG data is ill-posed. Here we develop a novel approach based on a sequence of data processing procedures that includes a clustering process, an intersection analysis, and an application of the maximum entropy method. We examine the performance of our method and compare it with the minimum-norm least-squares inverse method using artificial noisy MEG data.

  3. Local discretization method for overdamped Brownian motion on a potential with multiple deep wells

    NASA Astrophysics Data System (ADS)

    Nguyen, P. T. T.; Challis, K. J.; Jack, M. W.

    2016-11-01

    We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells deeper than about five times the thermal energy can be associated with a discrete localized state, while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.

  4. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography

    PubMed Central

    Packo, P.; Staszewski, W. J.; Uhl, T.

    2016-01-01

    Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808

  5. A Bayesian network approach for modeling local failure in lung cancer

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam

    2011-03-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported from their prospective application. Based on recent studies of biomarker proteins' role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method is an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  6. A method of periodic pattern localization on document images

    NASA Astrophysics Data System (ADS)

    Chernov, Timofey S.; Nikolaev, Dmitry P.; Kliatskine, Vitali M.

    2015-12-01

    Periodic patterns often appear on document images as holograms, watermarks, or guilloche elements, which are mostly used for fraud protection. Localizing such patterns lets an embedded OCR system vary its settings depending on pattern presence in particular image regions, and improves the precision of pattern removal so that as much useful data as possible is preserved. Many noise detection and removal methods for document images deal with unstructured noise or clutter on documents with a simple background. In this paper we propose a method for periodic pattern localization on document images which uses the discrete Fourier transform and works well on documents with a complex background.
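
    The underlying idea can be illustrated with a minimal, assumed sketch: a periodic pattern shows up as strong off-centre peaks in the magnitude of the 2-D discrete Fourier transform of an image block. The threshold and the synthetic stripe pattern below are stand-ins, not the paper's detector.

      import numpy as np

      def has_periodic_pattern(block, peak_ratio=10.0):
          """Flag an image block whose DFT magnitude has a strong off-centre peak."""
          spec = np.abs(np.fft.fftshift(np.fft.fft2(block)))
          cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
          spec[cy - 1 : cy + 2, cx - 1 : cx + 2] = 0.0     # suppress the DC / low-frequency centre
          return spec.max() > peak_ratio * np.median(spec)

      rng = np.random.default_rng(3)
      plain = rng.normal(size=(64, 64))
      x = np.arange(64)
      striped = plain + 2.0 * np.sin(2 * np.pi * x / 8.0)  # guilloche-like vertical stripes
      print(has_periodic_pattern(plain), has_periodic_pattern(striped))   # False True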

  7. Galaxy formation with local photoionization feedback - I. Methods

    NASA Astrophysics Data System (ADS)

    Kannan, R.; Stinson, G. S.; Macciò, A. V.; Hennawi, J. F.; Woods, R.; Wadsley, J.; Shen, S.; Robitaille, T.; Cantalupo, S.; Quinn, T. R.; Christensen, C.

    2014-01-01

    We present a first study of the effect of local photoionizing radiation on gas cooling in smoothed particle hydrodynamics simulations of galaxy formation. We explore the combined effect of ionizing radiation from young and old stellar populations. The method computes the effect of multiple radiative sources using the same tree algorithm as used for gravity, so it is computationally efficient and well resolved. The method foregoes calculating absorption and scattering in favour of a constant escape fraction for young stars to keep the calculation efficient enough to simulate the entire evolution of a galaxy in a cosmological context to the present day. This allows us to quantify the effect of the local photoionization feedback through the whole history of a galaxy's formation. The simulation of a Milky Way-like galaxy using the local photoionization model forms ~40 per cent fewer stars than a simulation that only includes a standard uniform background UV field. The local photoionization model decreases star formation by increasing the cooling time of the gas in the halo and increasing the equilibrium temperature of dense gas in the disc. Coupling the local radiation field to gas cooling from the halo provides a preventive feedback mechanism which keeps the central disc light and produces slowly rising rotation curves without resorting to extreme feedback mechanisms. These preliminary results indicate that the effect of local photoionizing sources is significant and should not be ignored in models of galaxy formation.

  8. Tracking local anesthetic effects using a novel perceptual reference approach

    PubMed Central

    Ettlin, Dominik A.; Lukic, Nenad; Abazi, Jetmir; Widmayer, Sonja

    2016-01-01

    Drug effects of loco-regional anesthetics are commonly measured by unidimensional pain rating scales. These scales require subjects to transform their perceptual correlates of stimulus intensities onto a visual, verbal, or numerical construct that uses a unitless cognitive reference frame. The conceptual understanding and execution of this magnitude estimation task may vary among individuals and populations. To circumvent inherent shortcomings of conventional experimental pain scales, this study used a novel perceptual reference approach to track subjective sensory perceptions during onset of an analgesic nerve block. In 34 male subjects, nociceptive electric stimuli of 1-ms duration were repetitively applied to left (target) and right (reference) mandibular canines every 5 s for 600 s, with a side latency of 1 ms. Stimulus strength to the target canine was programmed to evoke a tolerable pain intensity perception and remained constant at this level throughout the experiment. A dose of 0.6 ml of articaine 4% was submucosally injected at the left mental foramen. Subjects then reported drug effects by adjusting the stimulus strength (in milliamperes) to the reference tooth, so that the perceived intensity in the reference tooth was equi-intense to the target tooth. Pain and stimulus perception offsets were indicated by subjects. Thus, the current approach for matching the sensory experience in one anatomic location after regional anesthesia allows detailed tracking of evolving perceptual changes in another location. This novel perceptual reference approach facilitates direct and accurate quantification of analgesic effects with high temporal resolution. We propose using this method for future experimental investigations of analgesic/anesthetic drug efficacy. PMID:26792885

  9. The local projection in the density functional theory plus U approach: A critical assessment.

    PubMed

    Wang, Yue-Chao; Chen, Ze-Hua; Jiang, Hong

    2016-04-14

    Density-functional theory plus the Hubbard U correction (DFT + U) method is widely used in first-principles studies of strongly correlated systems, as it can give qualitatively (and sometimes, semi-quantitatively) correct description of energetic and structural properties of many strongly correlated systems with similar computational cost as local density approximation or generalized gradient approximation. On the other hand, the DFT + U approach is limited both theoretically and practically in several important aspects. In particular, the results of DFT + U often depend on the choice of local orbitals (the local projection) defining the subspace in which the Hubbard U correction is applied. In this work we have systematically investigated the issue of the local projection by considering typical transition metal oxides, β-MnO2 and MnO, and comparing the results obtained from different implementations of DFT + U. We found that the choice of the local projection has significant effects on the DFT + U results, which are more significant for systems with stronger covalent bonding (e.g., MnO2) than those with more ionic bonding (e.g., MnO). These findings can help to clarify some confusion arising from the practical use of DFT + U and may also provide insights for the development of new first-principles approaches beyond DFT + U.

  10. [Treatment approach to localized esophageal cancer--what have we learned so far?].

    PubMed

    Brenner, Baruch; Purim, Ofer; Sulkes, Aaron

    2005-07-01

    The treatment of localized esophageal cancer (LEC) is under extensive debate. Treatment approaches include surgery or radiation alone, surgery with preoperative or postoperative radiation, preoperative or postoperative chemotherapy and definitive or preoperative chemoradiation. In fact, the type of therapy patients receive is often dependent on the actual medical field of the treating physician (surgery, oncology, etc.). The use of multiple treatment approaches toward LEC primarily reflects the scarcity of data that is derived from controlled randomized trials and the poor results of current therapies. In spite of the above, the cumulative data suggest that surgery and chemoradiation are the two treatment options in LEC and that their combined approach, i.e. preoperative chemoradiation, has not been proven to have a survival advantage over each one of these methods, and should therefore still be considered investigational. This article will review the available data on the various treatment approaches that are being used against LEC.

  11. A space–angle DGFEM approach for the Boltzmann radiation transport equation with local angular refinement

    SciTech Connect

    Kópházi, József Lathouwers, Danny

    2015-09-15

    In this paper a new method for the discretization of the radiation transport equation is presented, based on a discontinuous Galerkin method in space and angle that allows for local refinement in angle where any spatial element can support its own angular discretization. To cope with the discontinuous spatial nature of the solution, a generalized Riemann procedure is required to distinguish between incoming and outgoing contributions of the numerical fluxes. A new consistent framework is introduced that is based on the solution of a generalized eigenvalue problem. The resulting numerical fluxes for the various possible cases where neighboring elements have an equal, higher or lower level of refinement in angle are derived based on tensor algebra and the resulting expressions have a very clear physical interpretation. The choice of discontinuous trial functions not only has the advantage of easing local refinement, it also facilitates the use of efficient sweep-based solvers due to decoupling of unknowns on a large scale thereby approaching the efficiency of discrete ordinates methods with local angular resolution. The approach is illustrated by a series of numerical experiments. Results show high orders of convergence for the scalar flux on angular refinement. The generalized Riemann upwinding procedure leads to stable and consistent solutions. Further the sweep-based solver performs well when used as a preconditioner for a Krylov method.

  12. Skyrmions with vector mesons in the hidden local symmetry approach

    NASA Astrophysics Data System (ADS)

    Ma, Yong-Liang; Yang, Ghil-Seok; Oh, Yongseok; Harada, Masayasu

    2013-02-01

    The roles of light ρ and ω vector mesons in the Skyrmion are investigated in a chiral Lagrangian derived from the hidden local symmetry (HLS) up to O(p4) including the homogeneous Wess-Zumino terms. We write a general “master formula” that allows us to determine the parameters of the HLS Lagrangian from a class of holographic QCD models valid at the large-Nc and -λ (’t Hooft constant) limit by integrating out the infinite towers of vector and axial-vector mesons other than the lowest ρ and ω mesons. Within this approach we find that the physical properties of the Skyrmion as the solitonic description of baryons are independent of the HLS parameter a. Therefore the only parameters of the model are the pion decay constant and the vector-meson mass. Once determined in the meson sector, we have a totally parameter-free theory that allows us to study unequivocally the role of light vector mesons in the Skyrmion structure. We find, as suggested by Sutcliffe, that the inclusion of the ρ meson reduces the soliton mass, which makes the Skyrmion come closer to the Bogomol’nyi-Prasad-Sommerfield soliton, but the role of the ω meson is found to increase the soliton mass. In stark contrast, the Δ-N mass difference, which is determined by the moment of inertia in the adiabatic collective quantization of the Skyrmion, is increased by the ρ vector meson, while it is reduced by the inclusion of the ω meson. All these observations show the importance of the ω meson in the properties of the nucleon and nuclear matter in the Skyrme model.

  13. A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis

    SciTech Connect

    Xu, Songhua; Tourassi, Georgia

    2012-01-01

    The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. Our local learning algorithm, which uses linear logistic regression as its base learner, is described. The algorithm performs a stochastic, random-walk search for the most suitable population subdivision scheme and the corresponding individual base learners until the total allowed computing time is exhausted. The proposed local learning-based approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnoses (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods; the comparison covered 54 methods implemented in the machine learning toolkit Weka (version 3.0). Predictive performance for all classifiers was evaluated using 10-fold cross validation and Receiver Operating Characteristic (ROC) analysis. We thus introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of other well-established machine learning techniques. This conclusion complements the existing understanding in the machine learning field that local learning may capture complicated, non-linear relationships exhibited by real-world datasets.
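
    A minimal sketch of the local-learning idea described above, not the authors' implementation: the population subdivision here is a k-means clustering and the search over subdivision schemes is a plain time-budgeted random search, both stand-ins for the paper's (unspecified) random-walk procedure; all function names are illustrative only.

        import time
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def fit_local_learner(X, y, n_groups, seed):
            # Subdivide the population and fit one logistic-regression base learner per subgroup.
            km = KMeans(n_clusters=n_groups, random_state=seed, n_init=10).fit(X)
            models = {}
            for g in range(n_groups):
                idx = km.labels_ == g
                if len(np.unique(y[idx])) > 1:          # skip single-class subgroups
                    models[g] = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            return km, models

        def predict_proba(km, models, X):
            groups = km.predict(X)
            p = np.full(len(X), 0.5)                    # fallback for subgroups without a model
            for g, m in models.items():
                idx = groups == g
                if idx.any():
                    p[idx] = m.predict_proba(X[idx])[:, 1]
            return p

        def random_search(X, y, budget_s=5.0):
            # Time-budgeted random search over subdivision schemes, scored by in-sample ROC AUC;
            # cross-validation should replace the in-sample score in any real use.
            rng, best, best_auc = np.random.default_rng(0), None, -np.inf
            t0 = time.time()
            while time.time() - t0 < budget_s:
                n_groups, seed = int(rng.integers(2, 8)), int(rng.integers(10**6))
                km, models = fit_local_learner(X, y, n_groups, seed)
                auc = roc_auc_score(y, predict_proba(km, models, X))
                if auc > best_auc:
                    best, best_auc = (km, models), auc
            return best, best_auc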

  14. An efficient local cascade defense method in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Ma, Jian-Feng

    Cascading failures in networked systems often lead to catastrophic consequences. Defending against cascade propagation with a local load redistribution method is efficient. Given the initial load of every node, the key to improving network robustness against cascading failures is to maximally defend against cascade propagation with the minimum total extra capacity over all nodes. With a finite total extra capacity, we first discuss three general extra capacity distributions: degree-based distribution (DD), average distribution (AD) and random distribution (RD). To make full use of the total spare capacity (SC) of all neighboring nodes of a failed node, we then propose a novel SC-based local load redistribution mechanism to improve the cascade defense ability of the network. We investigate network robustness against cascading failures induced by a single node failure under the three extra capacity distributions in both scale-free and random networks. Compared with the degree-based (DB) local load redistribution method, our SC method achieves higher robustness under all three extra capacity distributions. Extensive simulation results confirm the effectiveness of the SC local load redistribution method.
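
    A minimal sketch of the SC-based redistribution idea, under assumptions not stated in the abstract (each node carries a scalar load and capacity, and a failed node's load is shed to surviving neighbors in proportion to their spare capacity, with overloads triggering further failures); names are illustrative.

        from collections import deque

        def sc_redistribute_cascade(adj, load, capacity, failed_node):
            # Propagate a cascade triggered by one failure, redistributing each failed
            # node's load to surviving neighbors in proportion to their spare capacity.
            failed = set()
            queue = deque([failed_node])
            while queue:
                v = queue.popleft()
                if v in failed:
                    continue
                failed.add(v)
                neighbors = [u for u in adj[v] if u not in failed]
                spare = {u: max(capacity[u] - load[u], 0.0) for u in neighbors}
                total_spare = sum(spare.values())
                if total_spare <= 0.0:
                    continue  # nowhere to shed load; the local defense fails here
                for u in neighbors:
                    load[u] += load[v] * spare[u] / total_spare
                    if load[u] > capacity[u]:
                        queue.append(u)  # overloaded neighbor fails next
            return failed

        # Example: a small ring network with uniform extra capacity (an AD-like distribution)
        adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
        load = {n: 1.0 for n in adj}
        capacity = {n: 1.3 for n in adj}
        print(sc_redistribute_cascade(adj, dict(load), capacity, failed_node=0))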

  15. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on resource-constrained mobile devices under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, a special-purpose edge detection, and an image refinement step. The output image is then passed through a discrete wavelet decomposition, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for each candidate location and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments demonstrating that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.

  16. Globalizing Education, Educating the Local: How Method Made Us Mad

    ERIC Educational Resources Information Center

    Edwards, Richard; Carney, Stephen; Ambrosius, Ulla; Lauder, Hugh

    2012-01-01

    This article presents the authors' review of "Globalizing education, educating the local: how method made us mad," by Ian Stronach. In the opening chapter of their highly influential 1997 book "Education Research Undone: The Postmodern Embrace," Ian Stronach and Maggie MacLure draw upon the work of Derrida to argue for…

  17. Automatic localization of pupil using eccentricity and iris using gradient based method

    NASA Astrophysics Data System (ADS)

    Khan, Tariq M.; Aurangzeb Khan, M.; Malik, Shahzad A.; Khan, Shahid A.; Bashir, Tariq; Dar, Amir H.

    2011-02-01

    This paper presents a novel approach for the automatic localization of the pupil and iris. The pupil and iris are nearly circular regions surrounded by the sclera, eyelids and eyelashes, and localizing both is extremely important in any iris recognition system. In the proposed algorithm the pupil is localized using an eccentricity-based bisection method that searches for the region with the highest probability of containing the pupil. Iris localization is carried out in two steps. In the first step, the iris image is directionally segmented and a noise-free region of interest is extracted. In the second step, angular lines in the region of interest are extracted and the edge points of the iris outer boundary are found from the gradient of these lines. The proposed method is tested on the CASIA ver 1.0 and MMU iris databases. Experimental results show that this method is comparatively accurate.

  18. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  19. Optoelectronic scanning system upgrade by energy center localization methods

    NASA Astrophysics Data System (ADS)

    Flores-Fuentes, W.; Sergiyenko, O.; Rodriguez-Quiñonez, J. C.; Rivas-López, M.; Hernández-Balbuena, D.; Básaca-Preciado, L. C.; Lindner, L.; González-Navarro, F. F.

    2016-11-01

    The problem of upgrading an optoelectronic scanning system with digital post-processing of the signal, based on adequate methods of energy center localization, is considered. An improved dynamic triangulation analysis technique is illustrated with an example of industrial infrastructure damage detection. A modification of our previously published method for finding the energy center of an optoelectronic signal is described, and an artificial intelligence algorithm that compensates for the error in the angular coordinate when calculating the spatial coordinate through dynamic triangulation is demonstrated. Five energy center localization methods are developed and tested to select the best one. After implementation of these methods, digital compensation for the measurement error, and statistical data analysis, non-parametric behavior of the data is identified, and the Wilcoxon signed rank test is applied to improve the result further. For optical scanning systems, it is necessary to detect a light emitter mounted on the infrastructure under investigation in order to calculate its spatial coordinate by the energy center localization method.
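
    The five localization methods of the study are not specified in the abstract; the sketch below shows only the basic power-weighted-centroid estimator often used as a baseline for the energy center of a scanned pulse, with illustrative names.

        import numpy as np

        def energy_center(signal, t):
            # Power-weighted centroid of a sampled optoelectronic pulse.
            # Returns the time (or scan-angle) coordinate of the energy center.
            p = np.asarray(signal, dtype=float) ** 2          # instantaneous power
            p = p - p.min()                                    # crude baseline removal
            return float(np.sum(t * p) / np.sum(p))

        # Example: a noisy Gaussian pulse centered at t = 3.2
        t = np.linspace(0.0, 10.0, 1001)
        pulse = np.exp(-0.5 * ((t - 3.2) / 0.4) ** 2) + 0.02 * np.random.default_rng(1).normal(size=t.size)
        print(energy_center(pulse, t))   # should be close to 3.2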

  20. A novel method for medical implant in-body localization.

    PubMed

    Pourhomayoun, Mohammad; Fowler, Mark; Jin, Zhanpeng

    2012-01-01

    Wireless communication medical implants are gaining an important role in healthcare systems by controlling and transmitting the vital information of the patients. Recently, Wireless Capsule Endoscopy (WCE) has become a popular method to visualize and diagnose the human gastrointestinal (GI) tract. Estimating the exact location of the capsule when each image is taken is a very critical issue in capsule endoscopy. Most of the common capsule localization methods are based on estimating one or more location-dependent signal parameters like TOA or RSS. However, some unique challenges exist for in-body localization due to the complex nature within the human body. In this paper, we propose a novel one-stage localization method based on spatial sparsity in 3D space. In this method, we directly estimate the location of the capsule (as the emitter) without going through the intermediate stage of TOA or signal strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation with an RF signal following the allowable power and bandwidth ranges according to the standards. The results show that the proposed method is very effective and accurate even in massive multipath and shadowing conditions.

  1. A Local Incident Flux Response Expansion Transport Method for Coupling to the Diffusion Method in Cylindrical Geometry

    SciTech Connect

    Dingkang Zhang; Farzad Rahnema; Abderrafi M. Ougouag

    2013-09-01

    A local incident flux response expansion transport method is developed to generate transport solutions for coupling to diffusion theory codes regardless of their solution method (e.g., fine mesh, nodal, response based, finite element, etc.) for reactor core calculations in both two-dimensional (2-D) and three-dimensional (3-D) cylindrical geometries. In this approach, a Monte Carlo method is first used to precompute the local transport solution (i.e., response function library) for each unique transport coarse node, in which diffusion theory is not valid due to strong transport effects. The response function library is then used to iteratively determine the albedo coefficients on the diffusion-transport interfaces, which are then used as the coupling parameters within the diffusion code. This interface coupling technique allows a seamless integration of the transport and diffusion methods. The new method retains the detailed heterogeneity of the transport nodes and naturally constructs any local solution within them by a simple superposition of local responses to all incoming fluxes from the contiguous coarse nodes. A new technique is also developed for coupling to fine-mesh diffusion methods/codes. The local transport method/module is tested in 2-D and 3-D pebble-bed reactor benchmark problems consisting of an inner reflector, an annular fuel region, and a controlled outer reflector. It is found that the results predicted by the transport module agree very well with the reference fluxes calculated directly by MCNP in both benchmark problems.

  2. Field theoretic approach to dynamical orbital localization in ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    Thomas, Jordan W.; Iftimie, Radu; Tuckerman, Mark E.

    2004-03-01

    Techniques from gauge-field theory are employed to derive an alternative formulation of the Car-Parrinello ab initio molecular-dynamics method that allows maximally localized Wannier orbitals to be generated dynamically as the calculation proceeds. In particular, the Car-Parrinello Lagrangian is mapped onto an SU(n) non-Abelian gauge-field theory and the fictitious kinetic energy in the Car-Parrinello Lagrangian is modified to yield a fully gauge-invariant form. The Dirac gauge-fixing method is then employed to derive a set of equations of motion that automatically maintain orbital locality by restricting the orbitals to remain in the “Wannier gauge.” An approximate algorithm for integrating the equations of motion that is stable and maintains orbital locality is then developed based on the exact equations of motion. It is shown in a realistic application (64 water molecules plus one hydrogen-chloride molecule in a periodic box) that orbital locality can be maintained with only a modest increase in CPU time. The ability to keep orbitals localized in an ab initio molecular-dynamics calculation is a crucial ingredient in the development of emerging linear scaling approaches.

  3. Teaching Local Lore in EFL Class: New Approaches

    ERIC Educational Resources Information Center

    Yarmakeev, Iskander E.; Pimenova, Tatiana S.; Zamaletdinova, Gulyusa R.

    2016-01-01

    This paper addresses a current educational problem: the role of local lore in teaching EFL to university students. Although many educators acknowledge that knowledge of local lore plays a great role in the development of a well-rounded, well-educated personality and meets students' needs, the problem has not been thoroughly studied.…

  4. A stabilized, symmetric Nitsche method for spatially localized plasticity

    NASA Astrophysics Data System (ADS)

    Truster, Timothy J.

    2016-01-01

    A heterogeneous interface method is developed for combining primal displacement and mixed displacement-pressure formulations across nonconforming finite element meshes to treat volume-preserving plastic flow. When the zone of inelastic response is localized within a larger domain, significant computational savings can be achieved by confining the mixed formulation solely to the localized region. The method's distinguishing feature is that the coupling terms for joining dissimilar element types are derived from a time-discrete free energy functional, which is based on a Lagrange multiplier formulation of the interface constraints. Incorporating residual-based stabilizing terms at the interface enables the condensation of the multiplier field, leading to a symmetric Nitsche formulation in which the interface operators respect the differing character of the governing equations in each region. In a series of numerical problems, the heterogeneous interface method achieved comparable results on coarser meshes as those obtained from applying the mixed formulation throughout the domain.

  5. A Retrospective Approach to Testing the DNA Barcoding Method

    PubMed Central

    Chapple, David G.; Ritchie, Peter A.

    2013-01-01

    A decade ago, DNA barcoding was proposed as a standardised method for identifying existing species and speeding the discovery of new species. Yet, despite its numerous successes across a range of taxa, its frequent failures have brought into question its accuracy as a short-cut taxonomic method. We use a retrospective approach, applying the method to the classification of New Zealand skinks as it stood in 1977 (primarily based upon morphological characters), and compare it to the current taxonomy reached using both morphological and molecular approaches. For the 1977 dataset, DNA barcoding had moderate-high success in identifying specimens (78-98%), and correctly flagging specimens that have since been confirmed as distinct taxa (77-100%). But most matching methods failed to detect the species complexes that were present in 1977. For the current dataset, there was moderate-high success in identifying specimens (53-99%). For both datasets, the capacity to discover new species was dependent on the methodological approach used. Species delimitation in New Zealand skinks was hindered by the absence of either a local or global barcoding gap, a result of recent speciation events and hybridisation. Whilst DNA barcoding is potentially useful for specimen identification and species discovery in New Zealand skinks, its error rate could hinder the progress of documenting biodiversity in this group. We suggest that integrated taxonomic approaches are more effective at discovering and describing biodiversity. PMID:24244283

  6. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial coordinates and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curves performance.
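
    A simplified sketch of kernel-weighted facies classification with a per-sample adaptive bandwidth; the anisotropic steering of the kernels, which is the defining feature of the full method, is deliberately omitted, and all names are illustrative.

        import numpy as np

        def adaptive_kernel_facies(xy_obs, facies_obs, xy_grid, k=5):
            # Classify facies on a grid by kernel-weighted voting of scattered hard data.
            # Each observation gets its own bandwidth (distance to its k-th nearest neighbour),
            # a simple stand-in for the locally adaptive kernels of the full method.
            xy_obs, xy_grid = np.asarray(xy_obs, float), np.asarray(xy_grid, float)
            d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            h = np.sort(d_obs, axis=1)[:, k]                      # per-sample adaptive bandwidth
            d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
            w = np.exp(-0.5 * (d / h[None, :]) ** 2)              # Gaussian kernel weights
            facies_obs = np.asarray(facies_obs)
            labels = np.unique(facies_obs)
            votes = np.stack([w[:, facies_obs == f].sum(axis=1) for f in labels], axis=1)
            return labels[np.argmax(votes, axis=1)]

        # Example: two facies sampled at a handful of wells, classified at two grid points
        xy_obs = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6], [6, 6]]
        facies = [0, 0, 0, 1, 1, 1, 1]
        print(adaptive_kernel_facies(xy_obs, facies, [[0.5, 0.5], [5.5, 5.5]], k=3))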

  7. Multiple Shooting-Local Linearization method for the identification of dynamical systems

    NASA Astrophysics Data System (ADS)

    Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.

    2016-08-01

    The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm is a well-recognized method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives at no more computational cost than that of integrating the ODE itself. Numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameter values under different scenarios of noisy data.

  8. Developing an Integrated Approach for Local Urban Climate Models in London from Neighbourhood to Street Scale

    NASA Astrophysics Data System (ADS)

    Bakkali, M.; Davies, M.; Steadman, J. P.

    2012-04-01

    We currently have an incomplete understanding of how weather varies across London and how the city's microclimate will intensify levels of heat, cold and air pollution in the future. There is a need to target priority areas of the city and to promote design guidance on climate change mitigation strategies. As a result of improvements in the accuracy of local weather data in London, an opportunity is emerging for designers and planners of the built environment to measure the impact of their designs on local urban climate and to enhance the designer's role in creating more informed design choices at an urban micro-scale. However, modelling the different components of the urban environment separately and then collating and comparing the results invariably leads to discrepancies in the output of local urban climate modelling tools designed to work at different scales. Of particular interest is why marked differences appear between the data extracted from local urban climate models when we change the scale of modelling from city to building scale. An example of such differences is those that have been observed in relation to the London Unified Model and London Site Specific Air Temperature model. In order to avoid these discrepancies we need a method for understanding and assessing how the urban environment impacts on local urban climate as a whole. A step to achieving this is by developing inter-linkages between assessment tools. Accurate information on the net impact of the urban environment on the local urban climate will in turn facilitate more accurate predictions of future energy demand and realistic scenarios for comfort and health. This paper will present two key topographies of London's urban environment that influence local urban climate: land use and street canyons. It will look at the possibilities for developing an integrated approach to modelling London's local urban climate from the neighbourhood to the street scale.

  9. Development of a GIS method to localize critical source areas of diffuse nitrate pollution.

    PubMed

    Orlikowski, D; Bugey, A; Périllon, C; Julich, S; Guégain, C; Soyeux, E; Matzinger, A

    2011-01-01

    The present study aimed to develop a universal method for the localization of critical source areas (CSAs) of diffuse nitrate (NO3-) pollution in rural catchments with low data availability. Based on existing methods, land use, soil, slope, riparian buffer strips and distance to surface waters were identified as the most relevant indicator parameters for diffuse agricultural NO3- pollution. The five parameters were averaged in a GIS-overlay to localize areas with low, medium and high risk of NO3- pollution. A first application of the GIS approach to the Ic catchment in France showed that the identified CSAs were in good agreement with results from river monitoring and numerical modelling. Additionally, the GIS approach showed low sensitivity to single parameters, which makes it robust to varying data availability. As a result, the tested GIS approach provides a promising, easy-to-use CSA identification concept, applicable to a wide range of rural catchments.
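
    A minimal sketch of the equal-weight GIS overlay described above, assuming each of the five indicators has already been rasterized and rescaled to a 0-1 risk score; the class thresholds and names are illustrative.

        import numpy as np

        def csa_risk_overlay(layers, thresholds=(1/3, 2/3)):
            # GIS-style overlay: average several 0-1 risk rasters and classify each cell
            # as low (0), medium (1) or high (2) risk. 'layers' is a list of 2-D arrays.
            stack = np.stack([np.clip(np.asarray(l, float), 0.0, 1.0) for l in layers])
            mean_risk = stack.mean(axis=0)                     # equal-weight overlay of indicators
            classes = np.digitize(mean_risk, thresholds)       # 0 = low, 1 = medium, 2 = high
            return mean_risk, classes

        # Example with five tiny 2x2 indicator rasters (land use, soil, slope, buffer, distance)
        land_use = [[0.9, 0.2], [0.8, 0.1]]
        soil     = [[0.7, 0.3], [0.6, 0.2]]
        slope    = [[0.8, 0.1], [0.9, 0.3]]
        buffer_  = [[1.0, 0.0], [0.7, 0.2]]
        distance = [[0.9, 0.2], [0.8, 0.1]]
        risk, cls = csa_risk_overlay([land_use, soil, slope, buffer_, distance])
        print(cls)   # high-risk cells get class 2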

  10. In vitro bioequivalence approach for a locally acting gastrointestinal drug: lanthanum carbonate.

    PubMed

    Yang, Yongsheng; Shah, Rakhi B; Yu, Lawrence X; Khan, Mansoor A

    2013-02-04

    A conventional human pharmacokinetic (PK) in vivo study is often considered as the "gold standard" to determine bioequivalence (BE) of drug products. However, this BE approach is not always applicable to the products not intended to be delivered into the systemic circulation. For locally acting gastrointestinal (GI) products, well designed in vitro approaches might be more practical in that they are able not only to qualitatively predict the presence of the active substance at the site of action but also to specifically assess the performance of the active substance. For example, lanthanum carbonate chewable tablet, a locally acting GI phosphate binder when orally administrated, can release free lanthanum ions in the acid environment of the upper GI tract. The lanthanum ions directly reach the site of action to bind with dietary phosphate released from food to form highly insoluble lanthanum-phosphate complexes. This prevents the absorption of phosphate consequently reducing the serum phosphate. Thus, using a conventional PK approach to demonstrate BE is meaningless since plasma levels are not relevant for local efficacy in the GI tract. Additionally the bioavailability of lanthanum carbonate is less than 0.002%, and therefore, the PK approach is not feasible. Therefore, an alternative assessment method is required. This paper presents an in vitro approach that can be used in lieu of PK or clinical studies to determine the BE of lanthanum carbonate chewable tablets. It is hoped that this information can be used to finalize an in vitro guidance for BE studies of lanthanum carbonate chewable tablets as well as to assist with "in vivo" biowaiver decision making. The scientific information might be useful to the pharmaceutical industry for the purpose of planning and designing future BE studies.

  11. Local and Global Gestalt Laws: A Neurally Based Spectral Approach.

    PubMed

    Favali, Marta; Citti, Giovanna; Sarti, Alessandro

    2017-02-01

    This letter presents a mathematical model of figure-ground articulation that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). The local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and quantitatively compared with long-range connectivity in V1. Global gestalt constraints are then introduced in terms of spectral analysis of a connectivity matrix derived from these kernels. This analysis performs grouping of local features and individuates perceptual units with the highest salience. Numerical simulations are performed, and results are obtained by applying the technique to a number of stimuli.
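
    A toy sketch of grouping by spectral analysis of a connectivity matrix: a plain Gaussian affinity over positions and orientations stands in for the cortical connectivity kernels of the paper, and the leading eigenvector is used as the salience of each local element; parameters and names are illustrative.

        import numpy as np

        def salient_group(positions, orientations, sigma_p=1.0, sigma_th=0.5):
            # Group local edge elements by spectral analysis of an affinity (connectivity) matrix.
            # A plain Gaussian affinity replaces the cortical connectivity kernels of the paper.
            p = np.asarray(positions, float)
            th = np.asarray(orientations, float)
            d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
            dth = np.abs(th[:, None] - th[None, :])
            A = np.exp(-d2 / (2 * sigma_p**2) - dth**2 / (2 * sigma_th**2))
            np.fill_diagonal(A, 0.0)
            w, v = np.linalg.eigh(A)                 # affinity matrix is symmetric
            lead = np.abs(v[:, -1])                  # leading eigenvector = salience of each element
            return lead > 0.5 * lead.max()           # elements belonging to the most salient unit

        # Example: a short collinear contour (nearby, similar orientations) plus two stray elements
        pos = [[0, 0], [1, 0.1], [2, 0.0], [3, 0.1], [10, 10], [-8, 5]]
        ori = [0.0, 0.05, 0.0, 0.1, 1.2, 2.0]
        print(salient_group(pos, ori))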

  12. System and method for bullet tracking and shooter localization

    DOEpatents

    Roberts, Randy S.; Breitfeller, Eric F.

    2011-06-21

    A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
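
    A generic 2-D constant-velocity Kalman filter sketch for tracking a streak centroid, not the patented implementation; noise covariances and names are illustrative.

        import numpy as np

        def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
            # Track a streak centroid with a 2-D constant-velocity Kalman filter.
            # State = [x, y, vx, vy]; measurements are noisy (x, y) image positions.
            F = np.eye(4); F[0, 2] = F[1, 3] = dt                 # state transition
            H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0         # we observe position only
            Q = q * np.eye(4); R = r * np.eye(2)
            x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
            P = np.eye(4) * 10.0
            track = []
            for z in measurements:
                x = F @ x                                          # predict
                P = F @ P @ F.T + Q
                y = np.asarray(z, float) - H @ x                   # innovation
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
                x = x + K @ y                                      # update
                P = (np.eye(4) - K @ H) @ P
                track.append(x.copy())
            return np.array(track)

        # Example: a projectile streak moving diagonally with measurement noise
        zs = [(i + np.random.default_rng(i).normal(0, 0.3), 0.5 * i) for i in range(10)]
        print(kalman_track(zs)[-1])   # final estimate of [x, y, vx, vy]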

  13. Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxies

    NASA Astrophysics Data System (ADS)

    Lawlor, David; Budavári, Tamás; Mahoney, Michael W.

    2016-12-01

    We present a novel approach to studying the diversity of galaxies, based on a spectral graph technique: locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to, but going well beyond, the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that assumption to hold only for the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify numerous astronomical phenomena of interest at a fine level of detail, while the locally-biased variants enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated on the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.

  14. A New Approach for Copy-Move Detection Based on Improved Weber Local Descriptor.

    PubMed

    Saadat, Shabnam; Moghaddam, Mohsen Ebrahimi; Mohammadi, Mohsen

    2015-11-01

    One of the most common image tampering techniques is copy-move: one or more parts of the image are copied and pasted in another area of the same image. Various methods have recently been proposed for copy-move detection; however, many of them are not robust to additional changes such as geometric transformations, and they fail to detect small copied areas. In this paper, a new method is presented based on point descriptors derived from the integration of the texture-based Weber law with statistical features of the image. In the proposed approach, a modified multiscale version of the Weber local descriptor is introduced to make the method robust to geometric transformations and able to detect small copied areas. Experimental results show that our method can detect small copied areas and copy-move tampered images affected by rotation, scaling, noise addition, compression, blurring, and mirroring.

  15. Moving sound source localization based on triangulation method

    NASA Astrophysics Data System (ADS)

    Miao, Feng; Yang, Diange; Wen, Junjie; Lian, Xiaomin

    2016-12-01

    This study develops a sound source localization method that extends traditional triangulation to moving sources. First, the plane of possible sound source locations is scanned. Second, for each hypothetical source location in this plane, the Doppler effect is removed through integration of the sound pressure. Using the de-Dopplerized signals, the moving time difference of arrival (MTDOA) is calculated and the sound source is located by triangulation. Third, the estimated sound source location is compared with the original hypothetical location and the deviation is recorded. Because the real source location yields zero deviation, the source can finally be located by minimizing the deviation matrix. Simulations show the superiority of the MTDOA method over traditional triangulation in the case of moving sound sources. As shown in the experiments, the MTDOA method can locate moving sound sources with resolution as high as DAMAS beamforming, thus offering a new method for locating moving sound sources.
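
    A sketch of the underlying grid-search triangulation for a static source (the de-Dopplerization and moving-source aspects of the MTDOA method are omitted); the geometry and names are illustrative.

        import numpy as np

        C = 343.0  # speed of sound, m/s

        def tdoa_grid_search(mics, measured_tdoa, grid):
            # Locate a (static) source by scanning candidate positions and minimizing the
            # mismatch between predicted and measured TDOAs relative to microphone 0.
            mics, grid = np.asarray(mics, float), np.asarray(grid, float)
            best, best_err = None, np.inf
            for g in grid:
                d = np.linalg.norm(mics - g, axis=1)
                predicted = (d[1:] - d[0]) / C                    # TDOA of each mic vs. mic 0
                err = np.sum((predicted - measured_tdoa) ** 2)    # deviation for this hypothesis
                if err < best_err:
                    best, best_err = g, err
            return best, best_err

        # Example: 4 mics on a square, source at (2.0, 1.0); TDOAs simulated without noise
        mics = [[0, 0], [4, 0], [4, 4], [0, 4]]
        src = np.array([2.0, 1.0])
        d = np.linalg.norm(np.asarray(mics) - src, axis=1)
        tdoa = (d[1:] - d[0]) / C
        grid = [(x, y) for x in np.linspace(0, 4, 81) for y in np.linspace(0, 4, 81)]
        print(tdoa_grid_search(mics, tdoa, grid)[0])   # close to [2.0, 1.0]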

  16. Exploring Local Approaches to Communicating Global Climate Change Information

    NASA Astrophysics Data System (ADS)

    Stevermer, A. J.

    2002-12-01

    Expected future climate changes are often presented as a global problem, requiring a global solution. Although this statement is accurate, communicating climate change science and prospective solutions must begin at local levels, each with its own subset of complexities to be addressed. Scientific evaluation of local changes can be complicated by large variability occurring over small spatial scales; this variability hinders efforts both to analyze past local changes and to project future ones. The situation is further encumbered by challenges associated with scientific literacy in the U.S., as well as by pressing economic difficulties. For people facing real-life financial and other uncertainties, a projected "1.4 to 5.8 degrees Celsius" rise in global temperature is likely to remain only an abstract concept. Despite this lack of concreteness, recent surveys have found that most U.S. residents believe current global warming science, and an even greater number view the prospect of increased warming as at least a "somewhat serious" problem. People will often be able to speak of long-term climate changes in their area, whether observed changes in the amount of snow cover in winter, or in the duration of extreme heat periods in summer. This work will explore the benefits and difficulties of communicating climate change from a local, rather than global, perspective, and seek out possible strategies for making less abstract, more concrete, and most importantly, more understandable information available to the public.

  17. Locally advanced rectal cancer: the importance of a multidisciplinary approach.

    PubMed

    Berardi, Rossana; Maccaroni, Elena; Onofri, Azzurra; Morgese, Francesca; Torniai, Mariangela; Tiberi, Michela; Ferrini, Consuelo; Cascinu, Stefano

    2014-12-14

    Rectal cancer accounts for a substantial proportion of colorectal cancer cases, with a mortality of 4-10/100000 per year. The development of locoregional recurrences and the occurrence of distant metastases both influence the prognosis of these patients. In the last two decades, new multimodality strategies have improved the prognosis of locally advanced rectal cancer, with a significant reduction in local relapse and an increase in overall survival. Radical surgery remains the principal curative treatment, and the introduction of total mesorectal excision has achieved a significant reduction in local recurrence rates. The use of neoadjuvant treatment, delivered before surgery, has also improved local control and increased the sphincter preservation rate in low-lying tumors, with acceptable acute and late toxicity. This review describes the multidisciplinary management of rectal cancer, focusing on the effectiveness of neoadjuvant chemoradiotherapy and of post-operative adjuvant chemotherapy both in standard combined-modality treatment programs and in ongoing research to improve these regimens.

  18. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.

  19. Local Authority Approaches to the School Admissions Process. LG Group Research Report

    ERIC Educational Resources Information Center

    Rudd, Peter; Gardiner, Clare; Marson-Smith, Helen

    2010-01-01

    What are the challenges, barriers and facilitating factors connected to the various school admissions approaches used by local authorities? This report gathers the views of local authority admissions officers on the strengths and weaknesses of different approaches, as well as the issues and challenges they face in this important area. It covers:…

  20. An improved method for localizing electric brain dipoles.

    PubMed

    Salu, Y; Cohen, L G; Rose, D; Sato, S; Kufta, C; Hallett, M

    1990-07-01

    Methods for localizing electrical dipolar sources in the brain differ from one another by the models they use to represent the head, the specific formulas used in the calculation of the scalp potentials, the way that the reference electrode is treated, and by the algorithm employed to find the least-squares fit between the measured and calculated EEG potentials. The model presented here is based on some of the most advanced features found in other models, and on some improvements. The head is represented by a three-layer spherical model. The potential on any point on the scalp due to any source is found by a closed formula, which is not based on matrix rotations. The formulas will accept any surface electrode as the reference electrode. The least-squares procedure is based on optimal dipoles, reducing the number of unknowns in the iterations from six to three. The new method was evaluated by localizing five implanted dipolar sources in human sensorimotor cortex. The distances between the locations of the sources as calculated by the method, and the actual locations were between 0.4 and 2.0 cm. The sensitivity of the method to uncertainties encountered whenever a real head has to be modeled by a three-layer model has also been assessed.

  1. Russian risk assessment methods and approaches

    SciTech Connect

    Dvorack, M.A.; Carlson, D.D.; Smith, R.E.

    1996-07-01

    One of the benefits resulting from the collapse of the Soviet Union is the increased dialogue currently taking place between American and Russian nuclear weapons scientists in various technical arenas. One of these arenas currently being investigated involves collaborative studies which illustrate how risk assessment is perceived and utilized in the Former Soviet Union (FSU). The collaborative studies indicate that, while similarities exist with respect to some methodologies, the assumptions and approaches in performing risk assessments were, and still are, somewhat different in the FSU than in the US. The purpose of this paper is to highlight the present knowledge of risk assessment methodologies and philosophies within the two largest nuclear weapons laboratories of the Former Soviet Union, Arzamas-16 and Chelyabinsk-70. Furthermore, this paper will address the relative progress of new risk assessment methodologies, such as Fuzzy Logic, within the framework of current risk assessment methods at these two institutes.

  2. Subjective comparison of brightness preservation methods for local backlight dimming displays

    NASA Astrophysics Data System (ADS)

    Korhonen, J.; Mantel, C.; Forchhammer, S.

    2015-01-01

    Local backlight dimming is a popular technology in high quality Liquid Crystal Displays (LCDs). In those displays, the backlight is composed of contributions from several individually adjustable backlight segments, set at different luminance levels in different parts of the screen according to the luma of the target image displayed on the LCD. Typically, the transmittance of the liquid crystal cells (pixels) located in regions with dimmed backlight is increased in order to preserve their relative brightness with respect to pixels located in regions with bright backlight. There are different methods of brightness preservation for local backlight dimming displays, producing images with different visual characteristics. In this study, we have implemented, analyzed and evaluated several different approaches for brightness preservation, and conducted a subjective study based on rank ordering to compare the relevant methods on a real-life LCD with local backlight dimming capability. In general, our results show that locally adapted brightness preservation methods produce a more preferred visual outcome than global methods, but a dependency on the content is also observed. Based on the results, guidelines for selecting the perceptually preferred brightness preservation method for local backlight dimming displays are outlined.
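
    A minimal sketch of the basic pixel-compensation step that brightness preservation methods build on, assuming a per-pixel backlight luminance map is already available; the methods compared in the study differ mainly in how the clipping shown here is avoided or distributed.

        import numpy as np

        def preserve_brightness(target_luma, backlight, max_transmittance=1.0):
            # Boost LC transmittance where the backlight is dimmed so the displayed
            # luminance approximates the target: displayed = transmittance * backlight.
            # Values that would need transmittance > 1 are clipped (detail is lost there).
            t = np.asarray(target_luma, float)
            b = np.asarray(backlight, float)
            transmittance = np.clip(t / np.maximum(b, 1e-6), 0.0, max_transmittance)
            displayed = transmittance * b
            return transmittance, displayed

        # Example: a bright pixel over a dimmed segment gets clipped, a dark pixel does not
        target = np.array([0.9, 0.2])
        backlight = np.array([0.5, 0.5])       # segment dimmed to 50%
        print(preserve_brightness(target, backlight))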

  3. A global/local analysis method for treating details in structural design

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.

    1993-01-01

    A method for analyzing global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced since tedious and complex transitioning need not be performed. Second, errors due to the mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as adaptive refinement.

  4. Application of advanced reliability methods to local strain fatigue analysis

    NASA Technical Reports Server (NTRS)

    Wu, T. T.; Wirsching, P. H.

    1983-01-01

    When design factors are considered as random variables and the failure condition cannot be expressed by a closed form algebraic inequality, computations of risk (or probability of failure) might become extremely difficult or very inefficient. This study suggests using a simple, and easily constructed, second degree polynomial to approximate the complicated limit state in the neighborhood of the design point; a computer analysis relates the design variables at selected points. Then a fast probability integration technique (i.e., the Rackwitz-Fiessler algorithm) can be used to estimate risk. The capability of the proposed method is demonstrated in an example of a low cycle fatigue problem for which a computer analysis is required to perform local strain analysis to relate the design variables. A comparison of the performance of this method is made with a far more costly Monte Carlo solution. Agreement of the proposed method with Monte Carlo is considered to be good.

  5. 3-D Localization Method for a Magnetically Actuated Soft Capsule Endoscope and Its Applications

    PubMed Central

    Yim, Sehyuk; Sitti, Metin

    2014-01-01

    In this paper, we present a 3-D localization method for a magnetically actuated soft capsule endoscope (MASCE). The proposed localization scheme consists of three steps. First, MASCE is oriented to be coaxially aligned with an external permanent magnet (EPM). Second, MASCE is axially contracted by the enhanced magnetic attraction of the approaching EPM. Third, MASCE recovers its initial shape by the retracting EPM as the magnetic attraction weakens. The combination of the estimated direction in the coaxial alignment step and the estimated distance in the shape deformation (recovery) step provides the position of MASCE in 3-D. It is experimentally shown that the proposed localization method could provide 2.0–3.7 mm of distance error in 3-D. This study also introduces two new applications of the proposed localization method. First, based on the trace of contact points between the MASCE and the surface of the stomach, the 3-D geometrical model of a synthetic stomach was reconstructed. Next, the relative tissue compliance at each local contact point in the stomach was characterized by measuring the local tissue deformation at each point due to the preloading force. Finally, the characterized relative tissue compliance parameter was mapped onto the geometrical model of the stomach toward future use in disease diagnosis. PMID:25383064

  6. Designing and Evaluating Bamboo Harvesting Methods for Local Needs: Integrating Local Ecological Knowledge and Science

    NASA Astrophysics Data System (ADS)

    Darabant, András; Rai, Prem Bahadur; Staudhammer, Christina Lynn; Dorji, Tshewang

    2016-08-01

    Dendrocalamus hamiltonii, a large, clump-forming bamboo, has great potential to contribute towards poverty alleviation efforts across its distributional range. Harvesting methods that maximize yield while they fulfill local objectives and ensure sustainability are a research priority. Documenting local ecological knowledge on the species and identifying local users' goals for its production, we defined three harvesting treatments (selective cut, horseshoe cut, clear cut) and experimentally compared them with a no-intervention control treatment in an action research framework. We implemented harvesting over three seasons and monitored annually and two years post-treatment. Even though the total number of culms positively influenced the number of shoots regenerated, a much stronger relationship was detected between the number of culms harvested and the number of shoots regenerated, indicating compensatory growth mechanisms to guide shoot regeneration. Shoot recruitment declined over time in all treatments as well as the control; however, there was no difference among harvest treatments. Culm recruitment declined with an increase in harvesting intensity. When univariately assessing the number of harvested culms and shoots, there were no differences among treatments. However, multivariate analyses simultaneously considering both variables showed that harvested output of shoots and culms was higher with clear cut and horseshoe cut as compared to selective cut. Given the ease of implementation and issues of work safety, users preferred the horseshoe cut, but the lack of sustainability of shoot production calls for investigating longer cutting cycles.

  7. An adaptive locally linear embedding manifold learning approach for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.

    2015-05-01

    Algorithms for spectral analysis commonly use parametric or linear models of the data. Research has shown, however, that hyperspectral data -- particularly in materially cluttered scenes -- are not always well-modeled by statistical or linear methods. Here, we propose an approach to hyperspectral target detection that is based on a graph theory model of the data and a manifold learning transformation. An adaptive nearest neighbor (ANN) graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation. The artificial target manifold helps to guide the separation of the target data from the background data in the new, transformed manifold coordinates. Then, target detection is performed in the manifold space using Spectral Angle Mapper. This methodology is an improvement over previous iterations of this approach due to the incorporation of ANN, the artificial target manifold, and the choice of detector in the transformed space. We implement our approach in a spatially local way: the image is delineated into square tiles, and the detection maps are normalized across the entire image. Target detection results will be shown using laboratory-measured and scene-derived target data from the SHARE 2012 collect.
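
    A sketch of the Spectral Angle Mapper detection step used in the final stage of the approach; the ANN graph construction and adaptive LLE transformation are omitted, so the pixel vectors passed in may be either raw spectra or manifold coordinates; the threshold is illustrative.

        import numpy as np

        def spectral_angle_mapper(pixels, target, threshold=0.1):
            # Detect target-like pixels by the angle between each pixel vector and the
            # target signature, computed in whatever coordinates (raw or manifold) are supplied.
            X = np.asarray(pixels, float)                      # shape (n_pixels, n_bands)
            t = np.asarray(target, float)
            cos = (X @ t) / (np.linalg.norm(X, axis=1) * np.linalg.norm(t) + 1e-12)
            angles = np.arccos(np.clip(cos, -1.0, 1.0))        # radians; smaller = more target-like
            return angles, angles < threshold

        # Example: two background pixels and one near-target pixel in a 4-band space
        target = [0.2, 0.5, 0.9, 0.4]
        pixels = [[0.9, 0.1, 0.1, 0.8], [0.21, 0.52, 0.88, 0.41], [0.5, 0.5, 0.5, 0.5]]
        angles, detections = spectral_angle_mapper(pixels, target)
        print(detections)   # only the second pixel should be flagged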

  8. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  9. Fast approach to evaluate map reconstruction for lesion detection and localization

    SciTech Connect

    Qi, Jinyi; Huesman, Ronald H.

    2004-02-01

    Lesion detection is an important task in emission tomography. Localization ROC (LROC) studies are often used to analyze lesion detection and localization performance. Most researchers rely on Monte Carlo reconstruction samples to obtain LROC curves, which can be very time-consuming for iterative algorithms. In this paper we develop a fast approach to obtain LROC curves that does not require Monte Carlo reconstructions. We use a channelized Hotelling observer model to search for lesions, and the results can be easily extended to other numerical observers. We theoretically analyzed the mean and covariance of the observer output. Assuming the observer outputs are multivariate Gaussian random variables, an LROC curve can be directly generated by integrating the conditional probability density functions. The high-dimensional integrals are calculated using a Monte Carlo method. The proposed approach is very fast because no iterative reconstruction is involved. Computer simulations show that the results of the proposed method match well with those obtained using the traditional LROC analysis.

  10. Method for localizing and isolating an errant process step

    DOEpatents

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.

    2003-01-01

    A method for localizing and isolating an errant process includes retrieving, from a defect image database, a selection of images whose content is similar to the content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data, and the process step that is the most probable source of the defect according to this distribution is identified. A related method for process-step defect identification includes characterizing anomalies in a product detected by an imaging system, acquiring a query image of a product defect, correlating a particular characterized anomaly with the query image, and associating an errant process step with the correlated image.
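
    A minimal sketch of the probability-derivation step of the patented method, assuming the image retrieval and defect characterization already exist; the similarity weighting and the step labels are illustrative.

        from collections import defaultdict

        def errant_step_probability(retrieved):
            # Derive P(process step | defect) from retrieved similar defect records and
            # return the most probable errant step. Each record = (similarity, step_label).
            weights = defaultdict(float)
            for similarity, step in retrieved:
                weights[step] += similarity          # similarity-weighted vote per process step
            total = sum(weights.values())
            dist = {step: w / total for step, w in weights.items()}
            errant = max(dist, key=dist.get)
            return dist, errant

        # Example: records retrieved for a query defect image (hypothetical similarities/steps)
        retrieved = [(0.95, "etch"), (0.90, "etch"), (0.60, "lithography"), (0.40, "deposition")]
        print(errant_step_probability(retrieved))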

  11. New orbit correction method uniting global and local orbit corrections

    NASA Astrophysics Data System (ADS)

    Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.

    2006-01-01

    A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions, such as light source points, while simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both the Super-SOR light source and the Advanced Light Source (ALS), which have structures typical of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
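
    The EVC eigenvector formulation itself is not reproduced here; the sketch below solves the underlying problem directly, i.e. an equality-constrained least-squares correction that exactly zeroes the orbit at selected BPMs while minimizing the residual closed orbit distortion elsewhere, using an illustrative random response matrix.

        import numpy as np

        def constrained_orbit_correction(R, x, exact_idx):
            # Find steering kicks theta minimizing ||x + R theta||^2 (global COD reduction)
            # subject to exactly zeroing the corrected orbit at the BPMs in exact_idx.
            R, x = np.asarray(R, float), np.asarray(x, float)
            C, d = R[exact_idx, :], -x[exact_idx]            # equality constraints: C theta = d
            n, m = R.shape[1], C.shape[0]
            # KKT system of the equality-constrained least-squares problem
            K = np.block([[2 * R.T @ R, C.T],
                          [C, np.zeros((m, m))]])
            rhs = np.concatenate([-2 * R.T @ x, d])
            sol = np.linalg.solve(K, rhs)
            theta = sol[:n]
            return theta, x + R @ theta                      # kicks and corrected orbit

        # Example: random response matrix, 12 BPMs, 6 correctors; zero the orbit at BPMs 3 and 7
        rng = np.random.default_rng(0)
        R = rng.normal(size=(12, 6))
        x = rng.normal(size=12)
        theta, corrected = constrained_orbit_correction(R, x, exact_idx=[3, 7])
        print(np.round(corrected[[3, 7]], 12))               # ~0 at the constrained BPMs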

  12. Approaching nanoscale oxides: models and theoretical methods.

    PubMed

    Bromley, Stefan T; Moreira, Ibério de P R; Neyman, Konstantin M; Illas, Francesc

    2009-09-01

    This tutorial review deals with the rapidly developing area of modelling oxide materials at the nanoscale. Top-down and bottom-up modelling approaches and currently used theoretical methods are discussed with the help of a selection of case studies. We show that the critical oxide nanoparticle size required to be beyond the scale where every atom counts to where structural and chemical properties are essentially bulk-like (the scalable regime) strongly depends on the structural and chemical parameters of the material under consideration. This oxide-dependent behaviour with respect to size has fundamental implications for their modelling. Strongly ionic materials such as MgO and CeO(2), for example, start to exhibit scalable-to-bulk crystallite-like characteristics for nanoparticles consisting of about 100 ions. For such systems there exists an overlap in nanoparticle size where both top-down and bottom-up theoretical techniques can be applied and the main problem is the choice of the most suitable computational method. However, for more covalent systems such as TiO(2) or SiO(2) the onset of the scalable regime is still unclear, and for intermediate sized nanoparticles there exists a gap where neither bottom-up nor top-down modelling is fully adequate. In such difficult cases new efforts to design adequate models are required. Further exacerbating these fundamental methodological concerns are oxide nanosystems exhibiting complex electronic and magnetic behaviour. Due to the need for a simultaneous accurate treatment of the atomistic, electronic and spin degrees of freedom for such systems, the top-down vs. bottom-up separation is still large, and only a few studies currently exist.

  13. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

    Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioning approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  14. Local problems; local solutions: an innovative approach to investigating and addressing causes of maternal deaths in Zambia's Copperbelt

    PubMed Central

    2011-01-01

    Background Maternal mortality in developing countries is high and international targets for reduction are unlikely to be met. Zambia's maternal mortality ratio was 591 per 100,000 live births according to survey data (2007), while routinely collected data captured only about 10% of these deaths. In one district in Zambia medical staff reviewed deaths occurring in the labour ward, but no related recommendations were documented nor was there evidence of actions taken to avert further deaths. The Investigate Maternal Deaths and Act (IMDA) approach was designed to address these deficiencies and comprises four components: identification of maternal deaths; investigation of factors contributing to the deaths; recommendations for action drawn up by multiple stakeholders; and monitoring of progress through existing systems. Methods A pilot was conducted in one district of Zambia. Maternal deaths occurring over a period of twelve months were identified and investigated. Data were collected through in-depth interviews with family, focus group discussions and hospital records. The information was summarized and presented at eleven data sharing meetings to key decision makers, during which recommendations for action were drawn up. An output indicator to monitor progress was included in the routine performance assessment tool. High impact interventions were identified using frequency analysis. Results A total of 56 maternal deaths were investigated. Poor communication, existing risk factors, a lack of resources and case management issues were the broad categories under which contributing factors were assigned. Sixty-three recommendations were drawn up by key decision-makers, of which two thirds were implemented by the end of the pilot period. Potential high impact actions were related to management of AIDS and pregnancy, human resources, referral mechanisms, birth planning at household level and availability of safe blood. Conclusion In resource constrained settings the IMDA

  15. Intraoperative methods to stage and localize pancreatic and duodenal tumors.

    PubMed

    Norton, J A

    1999-01-01

    Intraoperative methods to stage and localize tumors have dramatically improved. Advances include less invasive methods to obtain comparable results and precise localization of previously occult tumors. The use of new technology, including laparoscopy and ultrasound, has provided some of these advances, while improved operative techniques have provided others. Laparoscopy with ultrasound has allowed for improved staging of patients with pancreatic cancer and exclusion of patients who are not resectable for cure. We performed laparoscopy with ultrasound on 50 consecutive patients with adenocarcinoma of the pancreas or liver who appeared to have resectable tumors based on preoperative computed tomography. Twenty-two patients (44%) were found to be unresectable because of tumor nodules on the liver and/or peritoneal surfaces or unsuspected distant nodal or liver metastases. The site of disease making the patient unresectable was confirmed by biopsy in each case. Of the 28 remaining patients predicted by laparoscopic ultrasound to be resectable for cure, 26 (93%) had all tumor removed. Thus laparoscopy with ultrasound was the best method to select patients for curative surgery. Intraoperative ultrasound (IOUS) has been a critical method to identify insulinomas that are not palpable. Nonpalpable tumors are most commonly in the pancreatic head. Because the pancreatic head is thick and insulinomas are small, of 9 pancreatic head insulinomas only 3 (33%) were palpable. However, IOUS precisely identified each (100%). Others have recommended blind distal pancreatectomy for individuals with insulinoma in whom no tumor can be identified. However, our data suggest that this procedure is contraindicated, as these occult tumors are usually within the pancreatic head. Recent series suggest that previously missed gastrinomas are commonly in the duodenum. IOUS is not able to identify these tumors, but other methods can. Of 27 patients with 31 duodenal gastrinomas, palpation identified 19

  16. RFMix: a discriminative modeling approach for rapid and robust local-ancestry inference.

    PubMed

    Maples, Brian K; Gravel, Simon; Kenny, Eimear E; Bustamante, Carlos D

    2013-08-08

    Local-ancestry inference is an important step in the genetic analysis of fully sequenced human genomes. Current methods can only detect continental-level ancestry (i.e., European versus African versus Asian) accurately even when using millions of markers. Here, we present RFMix, a powerful discriminative modeling approach that is faster (~30×) and more accurate than existing methods. We accomplish this by using a conditional random field parameterized by random forests trained on reference panels. RFMix is capable of learning from the admixed samples themselves to boost performance and autocorrect phasing errors. RFMix shows high sensitivity and specificity in simulated Hispanics/Latinos and African Americans and admixed Europeans, Africans, and Asians. Finally, we demonstrate that African Americans in HapMap contain modest (but nonzero) levels of Native American ancestry (~0.4%).
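    RFMix couples per-window random forests with a conditional random field and can also learn from the admixed samples themselves; the toy sketch below (not the RFMix implementation) only illustrates the window-level random-forest classification step on a simulated two-population reference panel, using scikit-learn.

```python
"""Hedged sketch: window-by-window random-forest ancestry classification (toy panel)."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_ref, n_snp, win = 200, 600, 50          # reference haplotypes, SNPs, SNPs per window

# Two reference populations with well-separated allele frequencies (toy panel).
freqs = np.vstack([rng.uniform(0.1, 0.4, n_snp),
                   rng.uniform(0.6, 0.9, n_snp)])
labels = rng.integers(0, 2, n_ref)                       # 0 = pop A, 1 = pop B
ref = (rng.uniform(size=(n_ref, n_snp)) < freqs[labels]).astype(int)

# An admixed query haplotype: first half from pop A, second half from pop B.
half = n_snp // 2
query = np.concatenate([
    (rng.uniform(size=half) < freqs[0, :half]).astype(int),
    (rng.uniform(size=n_snp - half) < freqs[1, half:]).astype(int)])

calls = []
for start in range(0, n_snp, win):
    sl = slice(start, start + win)
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(ref[:, sl], labels)                           # train on the reference panel window
    calls.append(int(rf.predict(query[sl].reshape(1, -1))[0]))

print("per-window ancestry calls:", calls)               # expect mostly 0s then mostly 1s
```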

  17. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed combining the strength of both KDE approaches. The proposed approach is universal and only needs one parameter (α) which slightly depends on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5 .
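    The sketch below contrasts the two KDE families discussed in the record: a fixed (global) Gaussian bandwidth versus a simple locally adaptive bandwidth that widens kernels where few particles arrive. It does not reproduce the paper's combined estimator or its α parameter; the arrival-time data are synthetic.

```python
"""Hedged sketch: global vs. locally adaptive KDE reconstruction of a breakthrough curve."""
import numpy as np

rng = np.random.default_rng(3)
arrivals = rng.lognormal(mean=1.0, sigma=0.8, size=500)   # heavy-tailed arrival times (toy)
t = np.linspace(0.01, 30, 400)

def fixed_kde(x, data, h):
    """Fixed-bandwidth Gaussian KDE evaluated at points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# Global bandwidth: Silverman's rule of thumb.
h_global = 1.06 * arrivals.std() * len(arrivals) ** (-1 / 5)
btc_global = fixed_kde(t, arrivals, h_global)

# Locally adaptive: widen the kernel where the pilot density is low (few particles),
# following the usual lambda_i = (pilot(x_i)/g)^(-1/2) prescription (g = geometric mean).
pilot = fixed_kde(arrivals, arrivals, h_global)
lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5
u = (t[:, None] - arrivals[None, :]) / (h_global * lam[None, :])
btc_adaptive = (np.exp(-0.5 * u**2) / (h_global * lam[None, :])).sum(axis=1) \
               / (len(arrivals) * np.sqrt(2 * np.pi))

print("late-time density estimates near t=25:", btc_global[-70], btc_adaptive[-70])
```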

  18. Evaluation of geospatial methods to generate subnational HIV prevalence estimates for local level planning

    PubMed Central

    2016-01-01

    Objective: There is evidence of substantial subnational variation in the HIV epidemic. However, robust spatial HIV data are often only available at high levels of geographic aggregation and not at the finer resolution needed for decision making. Therefore, spatial analysis methods that leverage available data to provide local estimates of HIV prevalence may be useful. Such methods exist but have not been formally compared when applied to HIV. Design/methods: Six candidate methods – including those used by the Joint United Nations Programme on HIV/AIDS to generate maps and a Bayesian geostatistical approach applied to other diseases – were used to generate maps and subnational estimates of HIV prevalence across three countries using cluster level data from household surveys. Two approaches were used to assess the accuracy of predictions: internal validation, whereby a proportion of input data is held back (test dataset) to challenge predictions; and comparison with location-specific data from household surveys in earlier years. Results: Each of the methods can generate usefully accurate predictions of prevalence at unsampled locations, with the magnitude of the error in predictions similar across approaches. However, the Bayesian geostatistical approach consistently gave marginally the strongest statistical performance across countries and validation procedures. Conclusions: Available methods may be able to furnish estimates of HIV prevalence at finer spatial scales than the data currently allow. The subnational variation revealed can be integrated into planning to ensure responsiveness to the spatial features of the epidemic. The Bayesian geostatistical approach is a promising strategy for integrating HIV data to generate robust local estimates. PMID:26919737

  19. Local Bathymetry Estimation Using Variational Inverse Modeling: A Nested Approach

    NASA Astrophysics Data System (ADS)

    Almeida, T. G.; Walker, D. T.; Farquharson, G.

    2014-12-01

    Estimation of subreach river bathymetry from remotely-sensed surface velocity data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs). A nested approach is adopted to focus on obtaining an accurate estimate of bathymetry over a small region of interest within a larger complex hydrodynamic system. This approach reduces computational cost significantly. We begin by constructing a minimization problem with a cost function defined by the error between observed and estimated surface velocities, and then apply the SWEs as a constraint on the velocity field. An adjoint SWE model is developed through the use of Lagrange multipliers, converting the unconstrained minimization problem into a constrained one. The adjoint model solution is used to calculate the gradient of the cost function with respect to bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In this application of the algorithm, the 2D depth-averaged flow is computed within a nested framework using Delft3D-FLOW as the forward computational model. First, an outer simulation is generated using discharge rate and other measurements from USGS and NOAA, assuming a uniform bottom-friction coefficient. Then a nested, higher resolution inner model is constructed using open boundary condition data interpolated from the outer model (see figure). Riemann boundary conditions with specified tangential velocities are utilized to ensure a near seamless transition between outer and inner model results. The initial guess bathymetry matches the outer model bathymetry, and the iterative assimilation procedure is used to adjust the bathymetry only for the inner model. The observation data was collected during the ONR Rivet II field exercise for the mouth of the Columbia River near Hammond, OR. A dual beam squinted along-track-interferometric, synthetic

  20. A Challenging Surgical Approach to Locally Advanced Primary Urethral Carcinoma

    PubMed Central

    Lucarelli, Giuseppe; Spilotros, Marco; Vavallo, Antonio; Palazzo, Silvano; Miacola, Carlos; Forte, Saverio; Matera, Matteo; Campagna, Marcello; Colamonico, Ottavio; Schiralli, Francesco; Sebastiani, Francesco; Di Cosmo, Federica; Bettocchi, Carlo; Di Lorenzo, Giuseppe; Buonerba, Carlo; Vincenti, Leonardo; Ludovico, Giuseppe; Ditonno, Pasquale; Battaglia, Michele

    2016-01-01

    Abstract Primary urethral carcinoma (PUC) is a rare and aggressive cancer, often underdetected and consequently unsatisfactorily treated. We report a case of advanced PUC, surgically treated with combined approaches. A 47-year-old man underwent transurethral resection of a urethral lesion with histological evidence of a poorly differentiated squamous cancer of the bulbomembranous urethra. Computed tomography (CT) and bone scans excluded metastatic spread of the disease but showed involvement of both corpora cavernosa (cT3N0M0). A radical surgical approach was advised, but the patient refused this and opted for chemotherapy. After 17 months the patient was referred to our department due to the evidence of a fistula in the scrotal area. CT scan showed bilateral metastatic disease in the inguinal, external iliac, and obturator lymph nodes as well as the involvement of both corpora cavernosa. Additionally, a fistula originating from the right corpus cavernosum extended to the scrotal skin. At this stage, the patient accepted the surgical treatment, consisting of different phases. Phase I: Radical extraperitoneal cystoprostatectomy with iliac-obturator lymph nodes dissection. Phase II: Creation of a urinary diversion through a Bricker ileal conduit. Phase III: Repositioning of the patient in lithotomic position for an overturned Y skin incision, total penectomy, fistula excision, and “en bloc” removal of surgical specimens including the bladder, through the perineal breach. Phase IV: Right inguinal lymphadenectomy. The procedure lasted 9-and-a-half hours, was complication-free, and intraoperative blood loss was 600 mL. The patient was discharged 8 days after surgery. Pathological examination documented a T4N2M0 tumor. The clinical situation was stable during the first 3 months postoperatively but then metastatic spread occurred, not responsive to adjuvant chemotherapy, which led to the patient's death 6 months after surgery. Patients with advanced stage tumors of

  1. A hybrid approach to protein folding problem integrating constraint programming with local search

    PubMed Central

    2010-01-01

    Background The protein folding problem remains one of the most challenging open problems in computational biology. Simplified models in terms of lattice structure and energy function have been proposed to ease the computational hardness of this optimization problem. Heuristic search algorithms and constraint programming are two common techniques to approach this problem. The present study introduces a novel hybrid approach to simulate the protein folding problem using a constraint programming technique integrated within local search. Results Using the face-centered-cubic lattice model and the 20 amino acid pairwise interaction energy function for the protein folding problem, a constraint programming technique has been applied to generate the neighbourhood conformations that are to be used in a generic local search procedure. Experiments have been conducted for a few small and medium sized proteins. Results have been compared with both a pure constraint programming approach and local search using a well-established local move set. Substantial improvements have been observed in terms of final energy values within acceptable runtime using the hybrid approach. Conclusion Constraint programming approaches usually provide optimal results but become slow as the problem size grows. Local search approaches are usually faster but do not guarantee optimal solutions and tend to get stuck in local minima. The encouraging results obtained on the small proteins show that these two approaches can be combined efficiently to obtain better quality solutions within acceptable time. It also encourages future researchers to adopt hybrid techniques to solve other hard optimization problems. PMID:20122212

  2. Characterization of peak flow events with local singularity method

    NASA Astrophysics Data System (ADS)

    Cheng, Q.; Li, L.; Wang, L.

    2009-07-01

    Three methods, return period, power-law frequency plot (concentration-area) and local singularity index, are introduced in the paper for characterizing peak flow events from river flow data for the past 100 years from 1900 to 2000, recorded at 25 selected gauging stations on rivers in the Oak Ridges Moraine (ORM) area, Canada. First, a traditional method, return period, was applied to the maximum annual river flow data. Whereas the Pearson III distribution generally fits the values, a power-law frequency plot (C-A) on the basis of the self-similarity principle provides an effective means for distinguishing "extremely" large flow events from the regular flow events. While the latter show a power-law distribution, about 10 large flow events manifest departure from the power-law distribution and can be classified into a separate group, most of which are related to flood events. It is shown that the relation between the average water release over a time period after a flow peak and the time duration may follow a power-law distribution. The exponent of the power-law, or singularity index, estimated from this power-law relation may be used to characterize the non-linearity of peak flow recessions. Viewing large peak flow events or floods as singular processes allows the application of power-law models not only for characterizing the frequency distribution of peak flow events, for example, the power-law relation between the number and size of floods, but also for describing the local singularity of processes, such as the power-law relation between the amount of water released and the releasing time. With the introduction and validation of the singularity of peak flow events, alternative power-law models can be used to depict the recession property as well as other types of non-linear properties.
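    As a minimal illustration of the number-size (concentration-area-style) analysis described above, the sketch below fits a power-law exponent to the exceedance counts of synthetic annual peak flows and flags the events that depart most from the fitted line; the flow values are not the ORM records.

```python
"""Hedged sketch: power-law (number-size) frequency analysis of annual peak flows."""
import numpy as np

rng = np.random.default_rng(4)
peaks = rng.pareto(a=2.0, size=100) * 50 + 20     # 100 years of synthetic annual peak flows

q = np.sort(peaks)
n_exceed = np.arange(len(q), 0, -1)               # N(q) = number of events >= q

# Fit log N against log q over the regular (non-extreme) range; the largest events
# typically fall off this line and flag candidate flood outliers.
mask = q < np.quantile(q, 0.9)
slope, intercept = np.polyfit(np.log(q[mask]), np.log(n_exceed[mask]), 1)
print(f"fitted power-law slope: {slope:.2f}")

departure = np.log(n_exceed) - (intercept + slope * np.log(q))
print("largest negative departures (candidate extreme events):",
      q[np.argsort(departure)[:5]])
```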

  3. Noninvasive localization of electromagnetic epileptic activity. I. Method descriptions and simulations.

    PubMed

    Grave de Peralta Menendez, R; Gonzalez Andino, S; Lantz, G; Michel, C M; Landis, T

    2001-01-01

    This paper considers the solution of the bioelectromagnetic inverse problem with particular emphasis on focal compact sources that are likely to arise in epileptic data. Two linear inverse methods are proposed and evaluated in simulations. The first method belongs to the class of distributed inverse solutions, capable of dealing with multiple simultaneously active sources. This solution is based on a Local Auto Regressive Average (LAURA) model. Since no assumption is made about the number of activated sources, this approach can be applied to data with multiple sources. The second method, EPIFOCUS, assumes that there is only a single focal source. However, in contrast to the single dipole model, it allows the source to have a spatial extent beyond a single point and avoids the non-linear optimization process required by dipole fitting. The performance of both methods is evaluated with synthetic data in noisy and noise free conditions. The simulation results demonstrate that LAURA and EPIFOCUS increase the number of sources retrieved with zero dipole localization error and produce lower maximum error and lower average error compared to Minimum Norm, Weighted Minimum Norm and Minimum Laplacian (LORETA). The results show that EPIFOCUS is a robust and powerful tool to localize focal sources. Alternatives to localize data generated by multiple sources are discussed. A companion paper (Lantz et al. 2001, this issue) illustrates the application of LAURA and EPIFOCUS to the analysis of interictal data in epileptic patients.

  4. Periodic local MP2 method employing orbital specific virtuals

    NASA Astrophysics Data System (ADS)

    Usvyat, Denis; Maschio, Lorenzo; Schütz, Martin

    2015-09-01

    We introduce orbital specific virtuals (OSVs) to represent the truncated pair-specific virtual space in periodic local Møller-Plesset perturbation theory of second order (LMP2). The OSVs are constructed by diagonalization of the LMP2 amplitude matrices which correspond to diagonal Wannier-function (WF) pairs. Only a subset of these OSVs is adopted for the subsequent OSV-LMP2 calculation, namely, those with largest contribution to the diagonal pair correlation energy and with the accumulated value of these contributions reaching a certain accuracy. The virtual space for a general (non diagonal) pair is spanned by the union of the two OSV sets related to the individual WFs of the pair. In the periodic LMP2 method, the diagonal LMP2 amplitude matrices needed for the construction of the OSVs are calculated in the basis of projected atomic orbitals (PAOs), employing very large PAO domains. It turns out that the OSVs are excellent to describe short range correlation, yet less appropriate for long range van der Waals correlation. In order to compensate for this bias towards short range correlation, we augment the virtual space spanned by the OSVs by the most diffuse PAOs of the corresponding minimal PAO domain. The Fock and overlap matrices in OSV basis are constructed in the reciprocal space. The 4-index electron repulsion integrals are calculated by local density fitting and, for distant pairs, via multipole approximation. New procedures for determining the fit-domains and the distant-pair lists, leading to higher efficiency in the 4-index integral evaluation, have been implemented. Generally, and in contrast to our previous PAO based periodic LMP2 method, the OSV-LMP2 method does not require anymore great care in the specification of the individual domains (to get a balanced description when calculating energy differences) and is in that sense a black box procedure. Discontinuities in potential energy surfaces, which may occur for PAO-based calculations if one is not

  5. Periodic local MP2 method employing orbital specific virtuals

    SciTech Connect

    Usvyat, Denis; Schütz, Martin; Maschio, Lorenzo

    2015-09-14

    We introduce orbital specific virtuals (OSVs) to represent the truncated pair-specific virtual space in periodic local Møller-Plesset perturbation theory of second order (LMP2). The OSVs are constructed by diagonalization of the LMP2 amplitude matrices which correspond to diagonal Wannier-function (WF) pairs. Only a subset of these OSVs is adopted for the subsequent OSV-LMP2 calculation, namely, those with largest contribution to the diagonal pair correlation energy and with the accumulated value of these contributions reaching a certain accuracy. The virtual space for a general (non diagonal) pair is spanned by the union of the two OSV sets related to the individual WFs of the pair. In the periodic LMP2 method, the diagonal LMP2 amplitude matrices needed for the construction of the OSVs are calculated in the basis of projected atomic orbitals (PAOs), employing very large PAO domains. It turns out that the OSVs are excellent to describe short range correlation, yet less appropriate for long range van der Waals correlation. In order to compensate for this bias towards short range correlation, we augment the virtual space spanned by the OSVs by the most diffuse PAOs of the corresponding minimal PAO domain. The Fock and overlap matrices in OSV basis are constructed in the reciprocal space. The 4-index electron repulsion integrals are calculated by local density fitting and, for distant pairs, via multipole approximation. New procedures for determining the fit-domains and the distant-pair lists, leading to higher efficiency in the 4-index integral evaluation, have been implemented. Generally, and in contrast to our previous PAO based periodic LMP2 method, the OSV-LMP2 method does not require anymore great care in the specification of the individual domains (to get a balanced description when calculating energy differences) and is in that sense a black box procedure. Discontinuities in potential energy surfaces, which may occur for PAO-based calculations if one is not

  6. [Contemporary methods of treatment in locally advanced prostate cancer].

    PubMed

    Brzozowska, Anna; Mazurkiewicz, Maria; Starosławska, Elzbieta; Stasiewicz, Dominika; Mocarska, Agnieszka; Burdan, Franciszek

    2012-10-01

    Prostate cancer is one of the most common cancers among males, and its frequency increases with age. Thanks to the widespread use of screening based on determination of prostate-specific antigen (PSA), ultrasonography including transrectal ultrasound (TRUS), computed tomography, magnetic resonance imaging and, especially, greater public awareness, the number of patients diagnosed with limited local advancement of the disease is increasing. The basic method of treatment in such cases is still surgical removal of the prostate with the seminal vesicles, or radiotherapy. For this purpose teleradiotherapy (IMRT, VMAT) or brachytherapy (I125, Ir192, Pd103) is used. In patients with a higher risk of progression, radiotherapy may be combined with hormone therapy (total androgen blockade: an LH-RH analogue plus an antiandrogen). Despite numerous clinical studies, no optimal sequence of the particular methods has been established, nor has their relative effectiveness been clearly determined. The general rule of treatment in patients suffering from prostate cancer remains the individual selection of therapy depending on the patient's age, general condition and, especially, the patient's own preferences. In elderly patients and patients with a low risk of progression, direct observation, including regular PSA determination, clinical rectal examination, TRUS, MRI of the pelvis or whole-skeleton scintigraphy, may be considered.

  7. Comparing passive source localization and tracking approaches with a towed horizontal receiver array in an ocean waveguide.

    PubMed

    Gong, Zheng; Tran, Duong D; Ratilal, Purnima

    2013-11-01

    Approaches for instantaneous passive source localization using a towed horizontal receiver array in a random range-dependent ocean waveguide are examined. They include: (1) Moving array triangulation, (2) array invariant, (3) bearings-only target motion analysis in modified polar coordinates via the extended Kalman filter, and (4) bearings-migration minimum mean-square error. These methods are applied to localize and track a vertical source array deployed in the far-field of a towed horizontal receiver array during the Gulf of Maine 2006 Experiment. The source transmitted intermittent broadband pulses in the 300 to 1200 Hz frequency range. A nonlinear matched-filter kernel designed to replicate the acoustic signal measured by the receiver array is applied to enhance the signal-to-noise ratio. The source localization accuracy is found to be highly dependent on source-receiver geometry and the localization approach. For a relatively stationary source drifting at speeds much slower than the receiver array tow-speed, the mean source position can be estimated by moving array triangulation with less than 3% error near broadside direction. For a moving source, the Kalman filter method gives the best performance with 5.5% error. The array invariant is the best approach for localizing sources within the endfire beam of the receiver array with 7% error.
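    As a toy illustration of the moving-array-triangulation idea, the sketch below intersects bearings measured from two successive receiver positions to estimate a stationary source position in 2-D; the geometry and bearing noise are assumptions, and none of the experiment's processing (matched filtering, waveguide effects) is represented.

```python
"""Hedged sketch: 2-D triangulation of a stationary source from two bearing measurements."""
import numpy as np

rng = np.random.default_rng(5)
source = np.array([4000.0, 2500.0])               # true source position (m)
rcv = np.array([[0.0, 0.0], [1500.0, 0.0]])       # two receiver positions along the tow track

# Measured bearings (radians from the x-axis) with a small random error.
bearings = []
for p in rcv:
    dx, dy = source - p
    bearings.append(np.arctan2(dy, dx) + rng.normal(0.0, np.radians(0.5)))

# Each bearing defines a line through the receiver; intersect the lines by least squares.
# Line through p with direction (cos b, sin b) has normal n = (-sin b, cos b) and n.x = n.p.
N = np.array([[-np.sin(b), np.cos(b)] for b in bearings])
d = np.array([n @ p for n, p in zip(N, rcv)])
est, *_ = np.linalg.lstsq(N, d, rcond=None)

print("estimated source position:", est)
print("position error (m):", np.linalg.norm(est - source))
```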

  8. Nonlinear optical methods for cellular imaging and localization.

    PubMed

    McVey, A; Crain, J

    2014-07-01

    Of all the ways in which complex materials (including many biological systems) can be explored, imaging is perhaps the most powerful because it delivers high information content directly. This is particularly relevant in aspects of cellular localization, where the physical proximity of molecules is crucial in biochemical processes. A great deal of effort in imaging has been spent on enabling chemically selective imaging so that only specific features are revealed. This is almost always achieved by adding fluorescent chemical labels to specific molecules. Under appropriate illumination conditions only the labelled molecules (via their labels) will be visible. The technique is simple and elegant but does suffer from fundamental limitations: (1) the fluorescent labels may fade when illuminated (a phenomenon called photobleaching), thereby constantly decreasing signal contrast over the course of image acquisition; to combat photobleaching one must reduce observation times or apply unfavourably low excitation levels, all of which reduce the information content of images; (2) the fluorescent species may be deactivated by various environmental factors (the general term is fluorescence quenching); (3) the presence of fluorescent labels may introduce unexpected complications or may interfere with processes of interest; (4) some molecules of interest cannot be labelled. In these circumstances we require a fundamentally different strategy. One of the most promising alternatives is based on a technique called Coherent Anti-Stokes Raman scattering (CARS). CARS is a fundamentally more complex process than fluorescence, and the experimental procedures and optical systems required to deliver high quality CARS images are intricate. However, the rewards are correspondingly very high: CARS probes the chemically distinct vibrations of the constituent molecules in a complex system and is therefore also chemically selective, as are fluorescence-based methods. Moreover, the potentially severe problems of

  9. Qualitative Approaches to Mixed Methods Practice

    ERIC Educational Resources Information Center

    Hesse-Biber, Sharlene

    2010-01-01

    This article discusses how methodological practices can shape and limit how mixed methods is practiced and makes visible the current methodological assumptions embedded in mixed methods practice that can shut down a range of social inquiry. The article argues that there is a "methodological orthodoxy" in how mixed methods is practiced…

  10. In situ localization of epidermal stem cells using a novel multi epitope ligand cartography approach.

    PubMed

    Ruetze, Martin; Gallinat, Stefan; Wenck, Horst; Deppert, Wolfgang; Knott, Anja

    2010-06-01

    Precise knowledge of the frequency and localization of epidermal stem cells within skin tissue would further our understanding of their role in maintaining skin homeostasis. As a novel approach we used the recently developed method of multi epitope ligand cartography, applying a set of described putative epidermal stem cell markers. Bioinformatic evaluation of the data led to the identification of several discrete basal keratinocyte populations, but none of them displayed the complete stem cell marker set. The distribution of the keratinocyte populations within the tissue was remarkably heterogeneous, but determination of distance relationships revealed a population of quiescent cells highly expressing p63 and the integrins alpha(6)/beta(1) that represent origins of a gradual differentiation lineage. This population comprises about 6% of all basal cells, shows a scattered distribution pattern and could also be found in keratinocyte holoclone colonies. The data suggest that this population identifies interfollicular epidermal stem cells.

  11. Strategy for the Development of a DNB Local Predictive Approach Based on Neptune CFD Software

    SciTech Connect

    Haynes, Pierre-Antoine; Peturaud, Pierre; Montout, Michael; Hervieu, Eric

    2006-07-01

    The NEPTUNE project constitutes the thermal-hydraulics part of a long-term joint development program for the next generation of nuclear reactor simulation tools. This project is being carried out by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique), with the co-sponsorship of IRSN (Institut de Radioprotection et de Surete Nucleaire) and AREVA NP. NEPTUNE is a multi-phase flow software platform that includes advanced physical models and numerical methods for each simulation scale (CFD, component, system). NEPTUNE also provides new multi-scale and multi-disciplinary coupling functionalities. This new generation of two-phase flow simulation tools aims at meeting major industrial needs. DNB (Departure from Nucleate Boiling) prediction in PWRs is one of the high priority needs, and this paper focuses on its anticipated improvement by means of a so-called 'Local Predictive Approach' using the NEPTUNE CFD code. Firstly, we present the ambitious 'Local Predictive Approach' anticipated for a better prediction of DNB, i.e. an approach that intends to result in CHF correlations based on relevant local parameters as provided by the CFD modeling. The associated requirements for the two-phase flow modeling are underlined, as well as those for the good level of performance of the NEPTUNE CFD code; hence, the code validation strategy based on different experimental database types (including separated-effect and integral-type test data) is depicted. Secondly, we present comparisons between low pressure adiabatic bubbly flow experimental data obtained on the DEDALE experiment and the associated numerical simulation results. This study again shows the high potential of the NEPTUNE CFD code, even if, with respect to the aforementioned DNB-related aim, there is still a need for some modeling improvements involving new validation data obtained in thermal-hydraulics conditions representative of PWR ones. Finally, we deal with one of these new experimental data needs

  12. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy.

    PubMed

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-04-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°.
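    The record's method relies on finite-element field solutions and a closed-form Jacobian; in the hedged sketch below a point-dipole field and a numerical Jacobian stand in for both, purely to illustrate the Jacobian-based (Gauss-Newton) iteration for the capsule position. Sensor layout, field model and units are illustrative assumptions, and only position (not orientation) is estimated.

```python
"""Hedged sketch: Jacobian-based iterative (Gauss-Newton) magnetic position estimation."""
import numpy as np

def dipole_field(sensor, pos, m=np.array([0.0, 0.0, 1.0])):
    """Toy point-dipole flux density at `sensor` for a magnet at `pos` (unit constants)."""
    r = sensor - pos
    rn = np.linalg.norm(r)
    return 3 * r * (m @ r) / rn**5 - m / rn**3

sensors = np.array([[x, y, 0.0] for x in (-0.1, 0.0, 0.1) for y in (-0.1, 0.1)])  # six sensors
true_pos = np.array([0.02, -0.03, 0.08])
meas = np.concatenate([dipole_field(s, true_pos) for s in sensors])

def residual(pos):
    return np.concatenate([dipole_field(s, pos) for s in sensors]) - meas

pos = np.array([0.0, 0.0, 0.05])            # initial guess, e.g. from a coarse absolute localizer
for _ in range(20):
    r0 = residual(pos)
    # Numerical Jacobian of the stacked field measurements with respect to position.
    J = np.column_stack([(residual(pos + dp) - r0) / 1e-6 for dp in np.eye(3) * 1e-6])
    step, *_ = np.linalg.lstsq(J, -r0, rcond=None)
    pos = pos + step

print("estimated position:", pos, " error:", np.linalg.norm(pos - true_pos))
```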

  13. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy

    PubMed Central

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-01-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799

  14. Training NOAA Staff on Effective Communication Methods with Local Climate Users

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Mayes, B.

    2011-12-01

    Since 2002, the NOAA National Weather Service (NWS) Climate Services Division (CSD) has offered training opportunities to NWS staff. As a result of the eight-year development of the training program, NWS offers three training courses and about 25 online distance learning modules covering various climate topics: climate data and observations, climate variability and change, and NWS national and local climate products, their tools, skill, and interpretation. Leveraging climate information and expertise available at all NOAA line offices and partners allows delivery of the most advanced knowledge and is a very critical aspect of the training program. NWS challenges in providing local climate services include effective communication techniques for presenting highly technical scientific information to local users. Addressing this challenge requires a well-trained, climate-literate workforce at the local level, capable of communicating NOAA climate products and services as well as providing climate-sensitive decision support. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically accurate messages and amiable communication techniques, such as a storytelling approach, are important in developing an engaged dialog between climate service providers and users. Several pilot projects NWS CSD conducted in the past year applied the NWS climate services training program to training events for NOAA technical user groups. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training of professional user groups required tailoring the instructions to the potential applications of each group of users. Training technical user groups identified the following critical issues: (1) knowledge of target audience expectations, initial knowledge status, and potential use of climate

  15. OCT-based approach to local relaxations discrimination from translational relaxation motions

    NASA Astrophysics Data System (ADS)

    Matveev, Lev A.; Matveyev, Alexandr L.; Gubarkova, Ekaterina V.; Gelikonov, Grigory V.; Sirotkina, Marina A.; Kiseleva, Elena B.; Gelikonov, Valentin M.; Gladkova, Natalia D.; Vitkin, Alex; Zaitsev, Vladimir Y.

    2016-04-01

    Multimodal optical coherence tomography (OCT) is an emerging tool for tissue state characterization. Optical coherence elastography (OCE) is an approach to mapping mechanical properties of tissue based on OCT. One of the challenging problems in OCE is the elimination of the influence of residual local tissue relaxation, which complicates obtaining information on the elastic properties of the tissue. Alternatively, parameters of the local relaxation itself can be used as an additional informative characteristic for distinguishing tissue in normal and pathological states over the OCT image area. Here we briefly present an OCT-based approach to the evaluation of local relaxation processes in the tissue bulk after sudden unloading of its initial pre-compression. For extracting the local relaxation rate we evaluate the temporal dependence of local strains that are mapped using our recently developed hybrid phase resolved/displacement-tracking (HPRDT) approach. This approach allows one to subtract the contribution of global displacements of scatterers in OCT scans and separate the temporal evolution of local strains. Using a sample excised from a coronary artery, we demonstrate that the observed relaxation of local strains can be reasonably fitted by an exponential law, which opens the possibility to characterize the tissue by a single relaxation time. The estimated local relaxation times are assumed to be related to local biologically-relevant processes inside the tissue, such as diffusion, leaking/draining of the fluids, local folding/unfolding of the fibers, etc. In general, studies of the evolution of such features can provide new metrics for biologically-relevant changes in tissue, e.g., in the problems of treatment monitoring.
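    Following the record's observation that the post-unloading decay of local strains is well fitted by an exponential law, the sketch below fits a single relaxation time to synthetic strain-versus-time data with scipy.optimize.curve_fit; the numbers are illustrative, not OCE measurements.

```python
"""Hedged sketch: extracting a single relaxation time from local-strain decay."""
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
t = np.linspace(0, 5.0, 60)                           # seconds after unloading
true_amp, true_tau, true_offset = 0.04, 1.2, 0.005
strain = true_amp * np.exp(-t / true_tau) + true_offset + rng.normal(0, 0.001, t.size)

def relax(t, amp, tau, offset):
    """Single-exponential relaxation of local strain."""
    return amp * np.exp(-t / tau) + offset

popt, _ = curve_fit(relax, t, strain, p0=(0.03, 1.0, 0.0))
print("estimated relaxation time tau = {:.2f} s".format(popt[1]))
```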

  16. Approach of high density coal preparation method

    SciTech Connect

    Yang, Y.; Chen, Q.

    1996-12-31

    The density difference between high-density aged anthracite coal and its discard is smaller than that between ordinary coal and discard, so conventional separation methods are difficult to apply. For this special coal, dry beneficiation with an air-dense medium fluidized bed has a clear advantage over other separation methods.

  17. A generalized finite element method with global-local enrichment functions for confined plasticity problems

    NASA Astrophysics Data System (ADS)

    Kim, D.-J.; Duarte, C. A.; Proenca, S. P.

    2012-11-01

    The main feature of partition of unity methods such as the generalized or extended finite element method is their ability of utilizing a priori knowledge about the solution of a problem in the form of enrichment functions. However, analytical derivation of enrichment functions with good approximation properties is mostly limited to two-dimensional linear problems. This paper presents a procedure to numerically generate proper enrichment functions for three-dimensional problems with confined plasticity where plastic evolution is gradual. This procedure involves the solution of boundary value problems around local regions exhibiting nonlinear behavior and the enrichment of the global solution space with the local solutions through the partition of unity method framework. This approach can produce accurate nonlinear solutions with a reduced computational cost compared to standard finite element methods since computationally intensive nonlinear iterations can be performed on coarse global meshes after the creation of enrichment functions properly describing localized nonlinear behavior. Several three-dimensional nonlinear problems based on the rate-independent J 2 plasticity theory with isotropic hardening are solved using the proposed procedure to demonstrate its robustness, accuracy and computational efficiency.

  18. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

    This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-less signal values using the observed MR data samples within local neighborhoods. This is not an efficient procedure to deal with this issue while the 3D MR data intrinsically includes many similar samples that can be used to improve the estimation results. To overcome this problem, we model MR data as random fields and establish a principled way which is capable of choosing the samples not only from a local neighborhood but also from a large portion of the given data. To follow the similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are automatically chosen from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also addressed. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing the noise and preserving the anatomical structures of MR images.
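    For context, the sketch below shows a conventional local-neighbourhood LMMSE step for Rician magnitude data, the baseline that the record generalizes with non-local similar samples and local SNR estimation. It is not the paper's filter; the closed form follows a common Rician LMMSE formulation, and the noise level sigma is assumed known.

```python
"""Hedged sketch: local-neighbourhood LMMSE filtering of Rician magnitude data."""
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(7)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0        # toy 2-D "MR" slice
sigma = 10.0
noisy = np.hypot(clean + rng.normal(0, sigma, clean.shape),    # Rician magnitude data
                 rng.normal(0, sigma, clean.shape))

m2 = uniform_filter(noisy**2, size=5)                          # local moment E[M^2]
m4 = uniform_filter(noisy**4, size=5)                          # local moment E[M^4]

var_m2 = np.maximum(m4 - m2**2, 1e-12)
K = np.clip(1.0 - 4.0 * sigma**2 * (m2 - sigma**2) / var_m2, 0.0, 1.0)
a2 = np.maximum(m2 - 2.0 * sigma**2 + K * (noisy**2 - m2), 0.0)  # LMMSE estimate of A^2
denoised = np.sqrt(a2)

print("RMSE noisy vs. denoised:",
      np.sqrt(np.mean((noisy - clean)**2)), np.sqrt(np.mean((denoised - clean)**2)))
```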

  19. A new heuristic method for approximating the number of local minima in partial RNA energy landscapes.

    PubMed

    Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen

    2016-02-01

    The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces for the best approximations on average a deviation below 3.0% from the true number of local minima.

  20. Systematic and general method for quantifying localization in microscopy images

    PubMed Central

    Sheng, Huanjie; Stauffer, Weston

    2016-01-01

    ABSTRACT Quantifying the localization of molecules with respect to other molecules, cell structures and intracellular regions is essential to understanding their regulation and actions. However, measuring localization from microscopy images is often difficult with existing metrics. Here, we evaluate a metric for quantifying localization termed the threshold overlap score (TOS), and show it is simple to calculate, easy to interpret, able to be used to systematically characterize localization patterns, and generally applicable. TOS is calculated by: (i) measuring the overlap of pixels that are above the intensity thresholds for two signals; (ii) determining whether the overlap is more, less, or the same as expected by chance, i.e. colocalization, anti-colocalization, or non-colocalization; and (iii) rescaling to allow comparison at different thresholds. The above is repeated at multiple threshold combinations to generate a TOS matrix to systematically characterize the relationship between localization and signal intensities. TOS matrices were used to identify and distinguish localization patterns of different proteins in various simulations, cell types and organisms with greater specificity and sensitivity than common metrics. For all the above reasons, TOS is an excellent first line metric, particularly for cells with mixed localization patterns. PMID:27979831
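    A minimal sketch of a threshold overlap score for one pair of thresholds is given below: measure the overlap of above-threshold pixels, compare it with the overlap expected by chance, and rescale to the range [-1, 1]. The rescaling used here is one plausible choice and may differ in detail from the cited paper; evaluating it over a grid of threshold pairs would give the TOS matrix.

```python
"""Hedged sketch: a threshold-overlap score in the spirit of TOS."""
import numpy as np

def threshold_overlap_score(img1, img2, frac1=0.1, frac2=0.1):
    """Score the overlap of the brightest `frac` pixels of two registered images."""
    t1 = np.quantile(img1, 1 - frac1)
    t2 = np.quantile(img2, 1 - frac2)
    a, b = img1 >= t1, img2 >= t2
    f1, f2 = a.mean(), b.mean()
    observed = (a & b).mean()
    expected = f1 * f2                      # overlap expected by chance
    max_overlap = min(f1, f2)
    min_overlap = max(0.0, f1 + f2 - 1.0)
    if observed >= expected:                # colocalization: rescale towards +1
        return (observed - expected) / (max_overlap - expected + 1e-12)
    return (observed - expected) / (expected - min_overlap + 1e-12)  # anti-colocalization

rng = np.random.default_rng(8)
base = rng.random((128, 128))
colocalized = base + 0.1 * rng.random((128, 128))     # signal 2 tracks signal 1
print("TOS (colocalized):", threshold_overlap_score(base, colocalized))
print("TOS (independent):", threshold_overlap_score(base, rng.random((128, 128))))
```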

  1. A new approach for beam hardening correction based on the local spectrum distributions

    NASA Astrophysics Data System (ADS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-09-01

    Energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (or LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. Then, the linear attenuation coefficients with regard to the mean energy of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used for the conversion of the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction must be reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed in phantoms that involve less than two materials, but the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimations based on the noise-free transmission data was less than 1.5%. It also shows an acceptable value when random Gaussian noise is applied to the transmission data. The cupping artifact in the proposed reconstruction method is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.

  2. The active titration method for measuring local hydroxyl radical concentration

    NASA Technical Reports Server (NTRS)

    Sprengnether, Michele; Prinn, Ronald G.

    1994-01-01

    We are developing a method for measuring ambient OH by monitoring its rate of reaction with a chemical species. Our technique involves the local, instantaneous release of a mixture of saturated cyclic hydrocarbons (titrants) and perfluorocarbons (dispersants). These species must not normally be present in ambient air above the part per trillion concentration. We then track the mixture downwind using a real-time portable ECD tracer instrument. We collect air samples in canisters every few minutes for roughly one hour. We then return to the laboratory and analyze our air samples to determine the ratios of the titrant to dispersant concentrations. The trends in these ratios give us the ambient OH concentration from the relation: d(ln R)/dt = -k[OH]. A successful measurement of OH requires that the trends in these ratios be measurable. We must not perturb ambient OH concentrations. The titrant to dispersant ratio must be spatially invariant. Finally, heterogeneous reactions of our titrant and dispersant species must be negligible relative to the titrant reaction with OH. We have conducted laboratory studies of our ability to measure the titrant to dispersant ratios as a function of concentration down to the few part per trillion concentration. We have subsequently used these results in a Gaussian puff model to estimate our expected uncertainty in a field measurement of OH. Our results indicate that under a range of atmospheric conditions we expect to be able to measure OH with a sensitivity of 3x10^5 cm^-3. In our most optimistic scenarios, we obtain a sensitivity of 1x10^5 cm^-3. These sensitivity values reflect our anticipated ability to measure the ratio trends. However, because we are also using a rate constant to obtain our [OH] from this ratio trend, our accuracy cannot be better than that of the rate constant, which we expect to be about 20 percent.
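    Since the record reduces the measurement to the relation d(ln R)/dt = -k[OH], the sketch below recovers [OH] from a simulated ratio time series by a linear fit of ln R against time. The rate constant, sampling schedule and noise level are illustrative assumptions.

```python
"""Hedged sketch: recovering [OH] from the titrant/dispersant ratio trend."""
import numpy as np

k = 7.0e-12                 # cm^3 molecule^-1 s^-1, assumed OH + titrant rate constant
oh_true = 3.0e6             # molecule cm^-3, assumed ambient OH
t = np.arange(0, 3600, 300)                        # canister sampling times (s)

rng = np.random.default_rng(9)
ratio = np.exp(-k * oh_true * t) * (1 + rng.normal(0, 0.01, t.size))  # measured R with 1% noise

# d(ln R)/dt = -k[OH], so the slope of ln R versus t gives -k[OH].
slope, _ = np.polyfit(t, np.log(ratio), 1)
oh_est = -slope / k
print(f"estimated [OH] = {oh_est:.2e} molecule cm^-3")
```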

  3. Measuring Local Strain Rates In Ductile Shear Zones: A New Approach From Deformed Syntectonic Dykes

    NASA Astrophysics Data System (ADS)

    Sassier, C.; Leloup, P.; Rubatto, D.; Galland, O.; Yue, Y.; Ding, L.

    2006-12-01

    previous methods. From the least to the most deformed dykes, minimum γ values vary between 0.2 and ~10, respectively. Second, we determined the ages of emplacement of each dyke by ion microprobe U-Pb dating of monazites. We obtained three groups of ages at 22 Ma, 24-26 Ma and 30 Ma. Our geochronological data are in good agreement with our structural data, the most deformed dykes being the oldest. The strain rates deduced from these measurements are on the order of 10^-14 s^-1, which is slower than values previously deduced from indirect methods. However, this value only corresponds to a minimum local strain rate. This new method, developed to estimate local minimum strain rates in a major ductile shear zone, appears to be reliable and could be applied to other shear zones. Such an approach applied at several locations along a single shear zone could also provide new opportunities to understand the evolution of a whole shear system.

  4. Slant-hole collimator, dual mode stereotactic localization method

    DOEpatents

    Weisenberger, Andrew G.

    2002-01-01

    The use of a slant-hole collimator in the gamma camera of dual mode stereotactic localization apparatus allows the acquisition of a stereo pair of scintimammographic images without repositioning of the gamma camera between image acquisitions.

  5. Groundwater abstraction management in Sana'a Basin, Yemen: a local community approach

    NASA Astrophysics Data System (ADS)

    Taher, Taha M.

    2016-09-01

    Overexploitation of groundwater resources in Sana'a Basin, Yemen, is causing severe water shortages associated with water quality degradation. Groundwater abstraction is five times higher than natural recharge and the water-level decline is about 4-8 m/year. About 90 % of the groundwater resource is used for agricultural activities. The situation is further aggravated by the absence of a proper water-management approach for the Basin. Water scarcity in the Wadi As-Ssirr catchment, the study area, is the most severe, and this area has the highest well density (average 6.8 wells/km2) compared with other wadi catchments. A local scheme of groundwater abstraction redistribution is proposed, involving the retirement of a substantial number of wells. The scheme encourages participation of the local community via collective actions to reduce the groundwater overexploitation, and ultimately leads to a locally acceptable, manageable groundwater abstraction pattern. The proposed method suggests using 587 wells rather than 1,359, thus reducing the well density to 2.9 wells/km2. Three scenarios are suggested, involving different reductions to the well yields and/or the number of pumping hours for both dry and wet seasons. The third scenario is selected as a first trial for the communities to act on; the resulting predicted reduction of 2,371,999 m3 is about 6 % of the estimated annual demand. Initially, the groundwater abstraction volume should not be changed significantly until there are protective measures in place, such as improved irrigation efficiency, with the aim of increasing the income of farmers and reducing water use.

  6. An Observationally-Centred Method to Quantify the Changing Shape of Local Temperature Distributions

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    For climate-sensitive decisions and adaptation planning, guidance on how local climate is changing is needed at the specific thresholds relevant to particular impacts or policy endeavours. This requires quantifying how the distributions of variables, such as daily temperature, are changing at specific quantiles. These temperature distributions are non-normal and vary both geographically and in time. We present a method [1,2] for analysing local climatic time series to assess which quantiles of the local climatic distribution show the greatest and most robust changes. We have demonstrated this approach using the E-OBS gridded dataset [3], which consists of time series of local daily temperature across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. The change in temperature can be tracked at a temperature threshold, at a likelihood, or at a given return time, independently for each geographical location. Geographical correlations are thus an output of our method and reflect both climatic properties (local and synoptic) and spatial correlations inherent in the observation methodology. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. For instance, in a band from Northern France to Denmark the hottest days in the summer temperature distribution have seen changes of at least 2°C over a 43-year period, over four times the global mean change over the same period. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This approach also quantifies the level of detail at which one might wish to see agreement between climate models and
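
    A minimal sketch of tracking change at fixed quantiles of a local temperature distribution, assuming synthetic daily values for two periods in place of an E-OBS grid-cell series (the numbers are illustrative only, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily summer temperatures (deg C) for one grid cell in two periods.
period_a = rng.normal(loc=22.0, scale=4.0, size=43 * 90)   # earlier period
period_b = rng.normal(loc=23.0, scale=4.5, size=43 * 90)   # later period

# Change between the periods at selected quantiles of the local distribution.
quantiles = np.array([0.05, 0.50, 0.95, 0.99])
change = np.quantile(period_b, quantiles) - np.quantile(period_a, quantiles)

for q, dT in zip(quantiles, change):
    print(f"quantile {q:.2f}: change = {dT:+.2f} deg C")
```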

  7. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, and thus maintenance-free, and is based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi-only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453

  8. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments - some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, and thus maintenance-free, and is based on Wi-Fi only. We have employed two well-known propagation models - free space path loss and ITU models - which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi-only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.
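
    For orientation, the free-space path loss model named in this abstract has the standard closed form FSPL(dB) = 20 log10(d) + 20 log10(f) - 147.55, with d in metres and f in Hz. The sketch below is a generic illustration, not the authors' implementation; the assumed transmit power and channel frequency are placeholders, and in the paper's setting distances inferred from several access points would feed a separate positioning step.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for a distance in metres and frequency in Hz."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def distance_from_rssi(rssi_dbm: float, tx_power_dbm: float = 20.0,
                       freq_hz: float = 2.437e9) -> float:
    """Invert the free-space model to get a rough distance from a Wi-Fi RSSI.

    tx_power_dbm is an assumed access-point EIRP, not a measured value."""
    path_loss = tx_power_dbm - rssi_dbm
    exponent = (path_loss - 20 * math.log10(freq_hz) + 147.55) / 20.0
    return 10 ** exponent

print(f"FSPL at 10 m, 2.437 GHz: {fspl_db(10, 2.437e9):.1f} dB")
print(f"Rough distance for RSSI -60 dBm: {distance_from_rssi(-60.0):.1f} m")
```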

  9. General Approach to Quantum Channel Impossibility by Local Operations and Classical Communication

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2017-01-01

    We describe a general approach to proving the impossibility of implementing a quantum channel by local operations and classical communication (LOCC), even with an infinite number of rounds, and find that this can often be demonstrated by solving a set of linear equations. The method also allows one to design a LOCC protocol to implement the channel whenever such a protocol exists in any finite number of rounds. Perhaps surprisingly, the computational expense for analyzing LOCC channels is not much greater than that for LOCC measurements. We apply the method to several examples, two of which provide numerical evidence that the set of quantum channels that are not LOCC is not closed and that there exist channels that can be implemented by LOCC either in one round or in three rounds that are on the boundary of the set of all LOCC channels. Although every LOCC protocol must implement a separable quantum channel, it is a very difficult task to determine whether or not a given channel is separable. Fortunately, prior knowledge that the channel is separable is not required for application of our method.

  10. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
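
    A compact sketch of the block-wise DCT feature extraction described above. This is an illustration only; the 8x8 block size, the 10 retained coefficients per block, and the random test image are assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n: int):
    """(row, col) pairs of an n x n block in JPEG-style zigzag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def block_dct_features(roi: np.ndarray, block: int = 8, n_coeffs: int = 10) -> np.ndarray:
    """Split a palm ROI into tiles, apply a 2-D type-II DCT to each tile, and
    keep the first n_coeffs low/medium-frequency coefficients per tile."""
    order = zigzag_indices(block)[:n_coeffs]
    feats = []
    h, w = roi.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = dctn(roi[r:r + block, c:c + block], type=2, norm="ortho")
            feats.extend(coeffs[i, j] for i, j in order)
    return np.asarray(feats)

# Example on a random 128 x 128 "palm" region: 16 x 16 blocks x 10 coefficients.
roi = np.random.rand(128, 128)
print(block_dct_features(roi).shape)   # (2560,)
```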

  11. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphic cards. Both the explicit contact formulation and the parallel feature facilitate LISA's superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
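
    The penalty treatment of crack-face contact can be pictured with a toy per-node force update. This is a generic sketch, not the LISA/CUDA code itself; the penalty stiffnesses and friction coefficient are placeholder values.

```python
import numpy as np

def contact_forces(gap: float, tangential_slip: float,
                   k_n: float = 1.0e12, k_t: float = 5.0e11, mu: float = 0.3):
    """Penalty normal force and Coulomb-limited shear force at a crack interface.

    gap < 0 means the crack faces interpenetrate; k_n, k_t and mu are assumed."""
    if gap >= 0.0:                        # faces are open: no contact forces
        return 0.0, 0.0
    f_normal = -k_n * gap                 # penalty force pushes the faces apart
    f_shear_trial = k_t * tangential_slip
    f_shear_limit = mu * f_normal         # Coulomb friction limit
    f_shear = float(np.clip(f_shear_trial, -f_shear_limit, f_shear_limit))  # stick or slip
    return f_normal, f_shear

print(contact_forces(gap=-1.0e-9, tangential_slip=2.0e-9))   # -> (1000.0, 300.0)
```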

  12. An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi

    2014-10-01

    In this paper, an approach based on the estimation of signal parameters via rotational invariance technique (ESPRIT) is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of the ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation direction-of-arrivals and the angular spreads. The proposed estimator enjoys closed-form expressions and hence it bypasses searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than the conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for the 2-D localization is also derived. Numerical results demonstrate that although the proposed estimation method is comparable with the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.

  13. Guided wave interaction with hole damage using the local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Obenchain, Matthew B.; Cesnik, Carlos E. S.

    2014-12-01

    This paper considers the effects of hole damage on guided wave propagation in isotropic and composite plates using both the local interaction simulation approach (LISA) and experimental methods. Guided wave generation from piezoceramic wafers is modeled using the recently developed LISA hybrid approach. First, holes in isotropic plates are simulated to establish LISA's ability to capture the guided wave scattering effects of various hole sizes. Experimental results are compared with the simulations to aid in evaluating the LISA model. Next, hole damage in cross-ply composite laminates is modeled and compared with experimental results. Various hole sizes and azimuthal locations are simulated to determine the effects of varying those parameters. Results from both the isotropic and composite damage studies clearly display the ability of LISA to model hole damage. Both the simulation and experimental results illustrate the advantages and disadvantages of various sensor locations relative to the actuator and damage locations. Finally, the study shows the ability of the LISA model to capture mode conversions resulting from partial thickness holes.

  14. DNA methods: critical review of innovative approaches.

    PubMed

    Kok, Esther J; Aarts, Henk J M; Van Hoef, A M Angeline; Kuiper, Harry A

    2002-01-01

    The presence of ingredients derived from genetically modified organisms (GMOs) in food products in the market place is subject to a number of European regulations that stipulate which product consisting of or containing GMO-derived ingredients should be labeled as such. In order to maintain these labeling requirements, a variety of different GMO detection methods have been developed to screen for either the presence of DNA or protein derived from (approved) GM varieties. Recent incidents where unapproved GM varieties entered the European market show that more powerful GMO detection and identification methods will be needed to maintain European labeling requirements in an adequate, efficient, and cost-effective way. This report discusses the current state-of-the-art as well as future developments in GMO detection.

  15. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  16. Localized operator partitioning method for electronic excitation energies in the time-dependent density functional formalism.

    PubMed

    Nagesh, Jayashree; Frisch, Michael J; Brumer, Paul; Izmaylov, Artur F

    2016-12-28

    We extend the localized operator partitioning method (LOPM) [J. Nagesh, A. F. Izmaylov, and P. Brumer, J. Chem. Phys. 142, 084114 (2015)] to the time-dependent density functional theory framework to partition molecular electronic energies of excited states in a rigorous manner. A molecular fragment is defined as a collection of atoms using Becke's atomic partitioning. A numerically efficient scheme for evaluating the fragment excitation energy is derived employing a resolution of the identity to preserve standard one- and two-electron integrals in the final expressions. The utility of this partitioning approach is demonstrated by examining several excited states of two bichromophoric compounds: 9-((1-naphthyl)methyl)anthracene and 4-((2-naphthyl)methyl)benzaldehyde. The LOPM is found to provide nontrivial insights into the nature of electronic energy localization that are not accessible using a simple density difference analysis.

  17. Finding fossils in new ways: an artificial neural network approach to predicting the location of productive fossil localities.

    PubMed

    Anemone, Robert; Emerson, Charles; Conroy, Glenn

    2011-01-01

    Chance and serendipity have long played a role in the location of productive fossil localities by vertebrate paleontologists and paleoanthropologists. We offer an alternative approach, informed by methods borrowed from the geographic information sciences and using recent advances in computer science, to more efficiently predict where fossil localities might be found. Our model uses an artificial neural network (ANN) that is trained to recognize the spectral characteristics of known productive localities and other land cover classes, such as forest, wetlands, and scrubland, within a study area based on the analysis of remotely sensed (RS) imagery. Using these spectral signatures, the model then classifies other pixels throughout the study area. The results of the neural network classification can be examined and further manipulated within a geographic information systems (GIS) software package. While we have developed and tested this model on fossil mammal localities in deposits of Paleocene and Eocene age in the Great Divide Basin of southwestern Wyoming, a similar analytical approach can be easily applied to fossil-bearing sedimentary deposits of any age in any part of the world. We suggest that new analytical tools and methods of the geographic sciences, including remote sensing and geographic information systems, are poised to greatly enrich paleoanthropological investigations, and that these new methods should be embraced by field workers in the search for, and geospatial analysis of, fossil primates and hominins.
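
    A rough schematic of the classification step, assuming synthetic per-pixel spectral bands and class labels in place of real remotely sensed imagery; this is not the authors' trained network, only a structural illustration of training on labeled pixels and classifying the rest of a scene.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic training pixels: 6 spectral bands each, with labels such as
# 0 = productive locality, 1 = forest, 2 = wetland, 3 = scrubland.
X_train = rng.random((400, 6))
y_train = rng.integers(0, 4, size=400)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Classify every pixel of a (rows, cols, bands) scene.
scene = rng.random((100, 100, 6))
labels = clf.predict(scene.reshape(-1, 6)).reshape(100, 100)
print(np.bincount(labels.ravel()))   # pixel count per predicted class
```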

  18. [Spiritual themes in mental pathology. Methodical approach].

    PubMed

    Marchais, P; Randrup, A

    1994-10-01

    The meaning of themes with spiritual connotations poses complex problems for psychiatry, because these themes induce the observer to project his own convictions and frames of reference onto his investigations. A double detachment (objectivation) concerning both the object of study and the observer is implied. This makes it possible to study these phenomena by a more rigorous method, to investigate the conditions of their formation and to demonstrate objectifiable correlates (experienced space and time, the various levels of psychic experience, factors in the environment...). In consequence, the appropriate medical behaviour can be more precisely delineated.

  19. Bridging multi-scale approach to consider the effects of local deformations in the analysis of thin-walled members

    NASA Astrophysics Data System (ADS)

    Erkmen, R. Emre

    2013-07-01

    Thin-walled members that have one dimension relatively large in comparison to the cross-sectional dimensions are usually modelled by using beam-type one-dimensional finite elements. Beam-type elements, however, are based on the assumption of a rigid cross-section; thus they only allow considerations associated with the beam-axis behaviour, such as flexural, torsional or lateral buckling, and cannot consider the effects of local deformations such as flange local buckling or distortional buckling. To capture local effects of this type, shell-type finite element models can be used. Based on the bridging multi-scale approach, this study proposes a numerical technique that is able to separate the global analysis, which is performed by using simple beam-type elements, from the local analysis, which is based on more sophisticated shell-type elements. As a result, the proposed multi-scale method allows the usage of shell elements in a local region to incorporate the local deformation effects on the overall behaviour of thin-walled members without necessitating a shell-type model for the whole member. Comparisons with full shell-type analysis are provided in order to illustrate the efficiency of the method developed herein.

  20. Green technology approach towards herbal extraction method

    NASA Astrophysics Data System (ADS)

    Mutalib, Tengku Nur Atiqah Tengku Ab; Hamzah, Zainab; Hashim, Othman; Mat, Hishamudin Che

    2015-05-01

    The aim of the present study was to compare the maceration method for selected herbs using green and non-green solvents. Water and d-limonene are green solvents, while chloroform and ethanol are non-green solvents. The selected herbs were Clinacanthus nutans leaf and stem, Orthosiphon stamineus leaf and stem, Sesbania grandiflora leaf, Pluchea indica leaf, Morinda citrifolia leaf and Citrus hystrix leaf. The extracts were compared by determining their total phenolic content. Total phenols were analyzed using a spectrophotometric technique based on the Folin-Ciocalteu reagent. Gallic acid was used as the standard compound and the total phenols were expressed as mg/g gallic acid equivalent (GAE). The most suitable and effective solvent was water, which produced the highest total phenol contents compared to the other solvents. Among the selected herbs, Orthosiphon stamineus leaves contained the highest total phenols, at 9.087 mg/g.

  1. A local pseudo arc-length method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Ma, Tian-Bao; Ren, Hui-Lan; Ning, Jian-Guo

    2014-12-01

    A local pseudo arc-length method (LPALM) for solving hyperbolic conservation laws is presented in this paper. The key idea of this method comes from the original arc-length method, through which the critical points are bypassed by transforming the computational space. The method is based on local changes of physical variables to choose the discontinuous stencil and introduce the pseudo arc-length parameter, and then transform the governing equations from physical space to arc-length space. In order to solve these equations in the arc-length coordinate, it is necessary to combine them with the velocity of mesh points in the moving mesh method, and then convert the physical variables in arc-length space back to physical space. Numerical examples have proved the effectiveness and generality of the new approach for linear equations, nonlinear equations and systems of equations with discontinuous initial values. Non-oscillatory solutions can be obtained by adjusting the parameter and the mesh refinement number for problems containing both shock and rarefaction waves.

  2. Improved variation calling via an iterative backbone remapping and local assembly method for bacterial genomes

    PubMed Central

    Tae, Hongseok; Settlage, Robert E.; Shallom, Shamira; Bavarva, Jasmin H.; Preston, Dale; Hawkins, Gregory N.; Adams, L. Garry; Garner, Harold R.

    2012-01-01

    Sequencing data analysis remains limiting and problematic, especially for low complexity repeat sequences and transposon elements due to inherent sequencing errors and short sequence read lengths. We have developed a program, ReviSeq, which uses a hybrid method comprised of iterative remapping and local assembly upon a bacterial sequence backbone. Application of this method to six Brucella suis field isolates compared to the newly revised Brucella suis 1330 reference genome identified on average 13, 15, 19 and 9 more variants per sample than STAMPY/SAMtools, BWA/SAMtools, iCORN and BWA/PINDEL pipelines, and excluded on average 4, 2, 3 and 19 variants per sample, respectively. In total, using this iterative approach, we identified on average 87 variants including SNVs, short INDELs and long INDELs per strain when compared to the reference. Our program outperforms other methods especially for long INDEL calling. The program is available at http://reviseq.sourceforge.net. PMID:22967795

  3. Communication: Improved pair approximations in local coupled-cluster methods

    NASA Astrophysics Data System (ADS)

    Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim

    2015-03-01

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  4. Locally conservative groundwater flow in the continuous Galerkin method using 3-D prismatic patches

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Zhao, Yingwang; Lin, Yu-Feng F.; Xu, Hua

    2016-11-01

    A new procedure has been developed to improve the velocity field computed by the continuous Galerkin finite element method (CG). It enables extending the postprocessing algorithm proposed by Cordes and Kinzelbach (1992) to three-dimensional (3-D) models by using prismatic patches for saturated groundwater flow. This approach leverages a dual mesh to preserve local mass conservation and provides interpolated velocities based on consistent fluxes. To develop this 3-D approach, a triangular conservative patch is introduced by computing not only advection fluxes, but also vertical infiltrations, storage changes, and other sink or source terms. This triangular patch is then used to develop a prismatic patch, which consists of subprisms in two layers. By dividing a single two-layer patch into two separate one-layer patches, two dimensional (2-D) algorithms can be applied to compute velocities. As a consequence, each subelement is able to preserve local mass conservation. A hypothetical 3-D model is used to evaluate the precision of streamlines and flow rates generated by this approach and the FEFLOW simulation program.

  5. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Werner, Hans-Joachim

    2016-11-01

    The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.

  6. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals.

    PubMed

    Werner, Hans-Joachim

    2016-11-28

    The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.

  7. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    PubMed

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  8. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

    PubMed Central

    Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.

    2015-01-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814

  9. Modeling of nonlinear interactions between guided waves and fatigue cracks using local interaction simulation approach.

    PubMed

    Shen, Yanfeng; Cesnik, Carlos E S

    2017-02-01

    This article presents a parallel algorithm to model the nonlinear dynamic interactions between ultrasonic guided waves and fatigue cracks. The Local Interaction Simulation Approach (LISA) is further developed to capture the contact-impact clapping phenomena during the wave crack interactions based on the penalty method. Initial opening and closure distributions are considered to approximate the 3-D rough crack microscopic features. A Coulomb friction model is integrated to capture the stick-slip contact motions between the crack surfaces. The LISA procedure is parallelized via the Compute Unified Device Architecture (CUDA), which enables parallel computing on powerful graphic cards. The explicit contact formulation, the parallel algorithm, as well as the GPU-based implementation facilitate LISA's high computational efficiency over the conventional finite element method (FEM). This article starts with the theoretical formulation and numerical implementation of the proposed algorithm, followed by the solution behavior study and numerical verification against a commercial finite element code. Numerical case studies are conducted on Lamb wave interactions with fatigue cracks. Several nonlinear ultrasonic phenomena are addressed. The classical nonlinear higher harmonic and DC response are successfully captured. The nonlinear mode conversion at a through-thickness and a half-thickness fatigue crack is investigated. Threshold behaviors, induced by initial openings and closures of rough crack surfaces, are depicted by the proposed contact LISA model.

  10. Novel fluorescence-based approaches for the study of biogenic amine transporter localization, activity, and regulation.

    PubMed

    Mason, J N; Farmer, H; Tomlinson, I D; Schwartz, J W; Savchenko, V; DeFelice, L J; Rosenthal, S J; Blakely, R D

    2005-04-15

    Pre-synaptic norepinephrine (NE) and dopamine (DA) transporters (NET and DAT) terminate catecholamine synaptic transmission through reuptake of released neurotransmitter. Recent studies reveal that NET and DAT are tightly regulated by receptor and second messenger-linked signaling pathways. Common approaches for studying these transporters involve use of radiolabeled substrates or antagonists, methods that possess limited spatial resolution and offer limited opportunities for repeated monitoring of living preparations. To circumvent these issues, we have explored two novel assay platforms that permit temporally resolved quantitation of transport activity and transporter protein localization. To monitor the binding and transport function of NET and DAT in real time, we have investigated the uptake of the fluorescent organic compound 4-(4-diethylaminostyryl)-N-methylpyridinium iodide (ASP+). We have extended our previous single-cell level application of this substrate to monitor transport activity via high-throughput assay platforms. Compared to radiotracer uptake methods, acquisition of ASP+ fluorescence is non-isotopic and allows for continuous, repeated transport measurements on both transfected and native preparations. Secondly, we have extended our application of small-molecule-conjugated fluorescent CdSe/ZnS nanocrystals, or quantum dots (Qdots), to utilize antibody and peptide ligands that can identify surface-expressed transporters, receptors and other membrane proteins in living cell systems. Unlike typical organic fluorophores, Qdots are highly resistant to bleaching and can be conjugated to multiple ligands. They can also be illuminated by conventional light sources, yet produce narrow, Gaussian emission spectra compatible with multiple target visualization (multiplexing). Together, these approaches offer novel opportunities to investigate changes in transporter function and distribution in real time with superior spatial and temporal resolution.

  11. Locating Damage Using Integrated Global-Local Approach with Wireless Sensing System and Single-Chip Impedance Measurement Device

    PubMed Central

    Hung, Shih-Lin

    2014-01-01

    This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building. PMID:24672359

  12. The role of local observations as evidence to inform effective mitigation methods for flood risk management

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; ODonnell, Greg; Owen, Gareth

    2014-05-01

    This poster presents a case study that highlights two crucial aspects of a catchment-based flood management project that were used to encourage uptake of an effective flood management strategy. Specifically, (1) the role of detailed local scale observations and (2) a modelling method informed by these observations. Within a 6km2 study catchment, Belford UK, a number of Runoff Attenuation Features (RAFs) have been constructed (including ponds, wetlands and woody debris structures) to address flooding issues in the downstream village. The storage capacity of the RAFs is typically small (200 to 500m3), hence there was skepticism as to whether they would work during large flood events. Monitoring was performed using a dense network of water level recorders installed both within the RAFs and within the stream network. Using adjacent upstream and downstream water levels in the stream network and observations within the actual ponds, a detailed understanding of the local performance of the RAFs was gained. However, despite understanding the local impacts of the features, the impact on the downstream hydrograph at the catchment scale could still not be ascertained with any certainty. The local observations revealed that the RAFs typically filled on the rising limb of the hydrograph; hence there was no available storage at the time of arrival of a large flow peak. However, it was also clear that an impact on the rising limb of the hydrograph was being observed. This knowledge of the functioning of individual features was used to create a catchment model, in which a network of RAFs could then be configured to examine the aggregated impacts. This Pond Network Model (PNM) was based on the observed local physical relationships and allowed a user specified sequence of ponds to be configured into a cascade structure. It was found that there was a minimum number of RAFs needed before an impact on peak flow was achieved for a large flood event. The number of RAFs required in the

  13. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
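
    A heavily simplified caricature of per-pixel feedback on segmentation sensitivity; it is not the SuBSENSE algorithm itself, and the update rates and threshold bounds are invented for illustration only.

```python
import numpy as np

def update_thresholds(dist, R, flicker, r_down=0.01, r_up=0.05,
                      R_min=0.1, R_max=1.0):
    """Per-pixel feedback on a decision threshold R.

    dist:    normalized distance between each pixel and its background model (0..1)
    R:       current per-pixel decision threshold
    flicker: boolean map of pixels whose label keeps toggling (segmentation noise)
    """
    comfortable = R > dist                      # pixel matches its model with margin
    R = np.where(flicker, R + r_up, R - r_down * comfortable)
    return np.clip(R, R_min, R_max)

rng = np.random.default_rng(0)
dist = rng.random((4, 4)) * 0.5
R = np.full((4, 4), 0.5)
flicker = rng.random((4, 4)) > 0.8
print(update_thresholds(dist, R, flicker))
```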

  14. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    The use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain the information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and the segmentation performance suffers because the variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on the relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, with a specificity of 0.73, a sensitivity of 0.69, and an accuracy of 0.79, significantly better than the alternative methods.
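
    In its simplest form, a relative-intensity normalization can be sketched as rescaling each channel by the median intensity of a reference region; this is an illustration under assumed data, not the authors' iterative algorithm, and the presumed-normal mask here is hypothetical.

```python
import numpy as np

def relative_intensity(image: np.ndarray, reference_mask: np.ndarray) -> np.ndarray:
    """Express voxel intensities relative to the median of a reference region.

    image:          one MR channel (e.g., T2 or ADC) as a float array
    reference_mask: boolean mask of voxels assumed to be normal tissue
    """
    reference_level = np.median(image[reference_mask])
    return image / reference_level          # 1.0 == typical normal-tissue intensity

rng = np.random.default_rng(0)
t2 = rng.normal(loc=400.0, scale=50.0, size=(64, 64))
mask = np.zeros_like(t2, dtype=bool)
mask[40:60, 40:60] = True                   # hypothetical normal-tissue region
print(relative_intensity(t2, mask).mean())
```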

  15. DYNAMO: concurrent dynamic multi-model source localization method for EEG and/or MEG.

    PubMed

    Antelis, Javier M; Minguez, Javier

    2013-01-15

    This work presents a new dipolar method to estimate the neural sources from separate or combined EEG and MEG data. The novelty lies in the simultaneous estimation and integration of neural sources from different dynamic models with different parameters, leading to a dynamic multi-model solution for the EEG/MEG source localization problem. The first key aspect of this method is defining the source model as a dipolar dynamic system, which allows for the estimation of the probability distribution of the sources within the Bayesian filter estimation framework. A second important aspect is the consideration of several banks of filters that simultaneously estimate and integrate the neural sources of different models. A third relevant aspect is that the final probability estimate is the result of the probabilistic integration of the neural sources of numerous models. Such characteristics lead to a new approach that does not require a prior definition of either the number of sources or the underlying temporal dynamics, and that allows for the specification of multiple initial prior estimates. The method was validated with three sensor modalities using simulated data designed to impose difficult estimation situations, and with real EEG data recorded in a feedback error-related potential paradigm. On the basis of these evaluations, the method was able to localize the sources with high accuracy.

  16. Damage localization in a residential-sized wind turbine blade by use of the SDDLV method

    NASA Astrophysics Data System (ADS)

    Johansen, R. J.; Hansen, L. M.; Ulriksen, M. D.; Tcherniak, D.; Damkilde, L.

    2015-07-01

    The stochastic dynamic damage location vector (SDDLV) method has previously proved to facilitate effective damage localization in truss- and plate-like structures. The method is based on interrogating damage-induced changes in transfer function matrices in cases where these matrices cannot be derived explicitly due to unknown input. Instead, vectors from the kernel of the transfer function matrix change are utilized; vectors which are derived on the basis of the system and state-to-output mapping matrices from output-only state-space realizations. The idea is then to convert the kernel vectors associated with the lowest singular values into static pseudo-loads and apply these alternately to an undamaged reference model with known stiffness matrix. By doing so, the stresses in the potentially damaged elements will, theoretically, approach zero. The present paper demonstrates an application of the SDDLV method for localization of structural damage in a cantilevered residential-sized wind turbine blade. The blade was excited by an unmeasured multi-impulse load and the resulting dynamic response was captured through accelerometers mounted along the blade. The static pseudo-loads were applied to a finite element (FE) blade model, which was tuned against the modal parameters of the actual blade. In the experiments, an undamaged blade configuration was analysed along with different damage scenarios, thereby testing the applicability of the SDDLV method.
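
    The core linear-algebra step behind the pseudo-loads can be sketched as follows (an illustration with made-up matrices, not the study's code): the change in the transfer-matrix-like quantity between the reference and damaged states is decomposed by SVD, and the right singular vectors belonging to the smallest singular values serve as candidate static pseudo-loads. In practice these vectors would then be applied to the tuned FE blade model, and elements whose stresses approach zero are flagged as damage candidates.

```python
import numpy as np

def pseudo_loads(G_reference: np.ndarray, G_damaged: np.ndarray, n_vectors: int = 1):
    """Return singular values of the transfer-matrix change and load vectors
    spanning its (approximate) null space, taken from the smallest singular values."""
    dG = G_damaged - G_reference
    _, s, vh = np.linalg.svd(dG)
    return s, vh[-n_vectors:].conj().T      # columns are candidate pseudo-loads

# Toy example: a rank-1 change buried in a 4 x 4 matrix.
rng = np.random.default_rng(0)
G0 = rng.normal(size=(4, 4))
dG = np.outer(rng.normal(size=4), rng.normal(size=4))
s, loads = pseudo_loads(G0, G0 + dG, n_vectors=2)
print(np.round(s, 3))    # small trailing singular values indicate the null space
print(loads.shape)       # (4, 2)
```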

  17. A Discourse Based Approach to the Language Documentation of Local Ecological Knowledge

    ERIC Educational Resources Information Center

    Odango, Emerson Lopez

    2016-01-01

    This paper proposes a discourse-based approach to the language documentation of local ecological knowledge (LEK). The knowledge, skills, beliefs, cultural worldviews, and ideologies that shape the way a community interacts with its environment can be examined through the discourse in which LEK emerges. 'Discourse-based' refers to two components:…

  18. The Diverse Forms of Tech-Prep: Implementation Approaches in Ten Local Consortia.

    ERIC Educational Resources Information Center

    Hershey, Alan; And Others

    This document profiles the diverse approaches to tech-prep taken by 10 local districts across the United States. The tech-prep programs in the following cities are profiled: Dayton, Ohio; Dothan, Alabama; East Peoria, Illinois; Fresno, California; Gainesville, Florida; Hartford, Connecticut; Logan, West Virginia; Salem, Oregon; Springdale,…

  19. New hidden beauty molecules predicted by the local hidden gauge approach and heavy quark spin symmetry

    NASA Astrophysics Data System (ADS)

    Xiao, C. W.; Ozpineci, A.; Oset, E.

    2015-10-01

    Using a coupled channel unitary approach, combining the heavy quark spin symmetry and the dynamics of the local hidden gauge, we investigate the meson-meson interaction with hidden beauty. We obtain several new states of isospin I = 0: six bound states, and six more possible weakly bound states whose existence depends on the influence of the coupled channel effects.

  20. International Students' Motivation and Learning Approach: A Comparison with Local Students

    ERIC Educational Resources Information Center

    Chue, Kah Loong; Nie, Youyan

    2016-01-01

    Psychological factors contribute to motivation and learning for international students as much as teaching strategies. 254 international students and 144 local students enrolled in a private education institute were surveyed regarding their perception of psychological needs support, their motivation and learning approach. The results from this…

  1. Locally Conservative, Stabilized Finite Element Methods for Variably Saturated Flow

    DTIC Science & Technology

    2007-11-06

    mixed methods for Richards' equation. The effectiveness of the multiscale stabilization strategy varied somewhat. For a steady-state, variably...

  2. Water-sanitation-hygiene mapping: an improved approach for data collection at local level.

    PubMed

    Giné-Garriga, Ricard; de Palencia, Alejandro Jiménez-Fernández; Pérez-Foguet, Agustí

    2013-10-01

    Strategic planning and appropriate development and management of water and sanitation services are strongly supported by accurate and accessible data. If adequately exploited, these data might assist water managers with performance monitoring, benchmarking comparisons, policy progress evaluation, resource allocation, and decision making. A variety of tools and techniques are in place to collect such information. However, some methodological weaknesses arise when developing an instrument for routine data collection, particularly at the local level: i) comparability problems due to heterogeneity of indicators, ii) poor reliability of collected data, iii) inadequate combination of different information sources, and iv) statistical validity of produced estimates when disaggregated into small geographic subareas. This study proposes an improved approach for water, sanitation and hygiene (WASH) data collection at the decentralised level in low-income settings, as an attempt to overcome previous shortcomings. The ultimate aim is to provide local policymakers with strong evidence to inform their planning decisions. The survey design takes Water Point Mapping (WPM) as a starting point to record all available water sources at a particular location. This information is then linked to data produced by a household survey. Different survey instruments are implemented to collect reliable data by employing a variety of techniques, such as structured questionnaires, direct observation and water quality testing. The collected data are finally validated through simple statistical analysis, which in turn produces valuable outputs that might feed into the decision-making process. In order to demonstrate the applicability of the method, outcomes produced from three case studies (Homa Bay District, Kenya; Kibondo District, Tanzania; and the Municipality of Manhiça, Mozambique) are presented.

  3. Grid-Search Location Methods for Ground-Truth Collection From Local and Regional Seismic Networks

    SciTech Connect

    William Rodi; Craig A. Schultz; Gardar Johannesson; Stephen C. Myers

    2005-05-13

    This project investigated new techniques for improving seismic event locations derived from regional and local networks. The techniques include a new approach to empirical travel-time calibration that simultaneously fits data from multiple stations and events, using a generalization of the kriging method, and predicts travel-time corrections for arbitrary event-station paths. We combined this calibration approach with grid-search event location to produce a prototype new multiple-event location method that allows the use of spatially well-distributed events and takes into account correlations between the travel-time corrections from proximate event-station paths. Preliminary tests with a high-quality data set from Nevada Test Site explosions indicated that our new calibration/location method offers improvement over the conventional multiple-event location methods now in common use, and is applicable to more general event-station geometries than the conventional methods. The tests were limited, however, and further research is needed to fully evaluate, and improve, the approach. Our project also demonstrated the importance of using a realistic model for observational errors in an event location procedure. We took the initial steps in developing a new error model based on mixture-of-Gaussians probability distributions, which possess the properties necessary to characterize the complex arrival-time error processes that can occur when picking low signal-to-noise arrivals. We investigated various inference methods for fitting these distributions to observed travel-time residuals, including a Markov chain Monte Carlo technique for computing Bayesian estimates of the distribution parameters.
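
    As an illustration of the mixture-of-Gaussians error-model idea (a sketch with synthetic residuals, not the project's implementation), a two-component mixture lets a broad component absorb outlier picks while a narrow component represents well-picked arrivals:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic travel-time residuals: mostly well-picked arrivals plus a wide
# outlier population from low signal-to-noise picks (values in seconds).
good = rng.normal(0.0, 0.5, size=900)
bad = rng.normal(0.0, 3.0, size=100)
residuals = np.concatenate([good, bad]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
for w, m, c in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight {w:.2f}  mean {m:+.2f} s  std {np.sqrt(c):.2f} s")
```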

  4. A method to compute treatment suggestions from local order entry data.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; Downs, Stephen M

    2010-11-13

    Although clinical decision support systems can reduce costs and improve care, the challenges associated with manually maintaining content have led to low utilization. Here we pilot an alternative, more automatic approach to decision support content generation. We use local order entry data and Bayesian networks to automatically find multivariate associations and suggest treatments. We evaluated this on 5044 hospitalizations of pregnant women, choosing 70 frequent order and treatment variables comprising 20 treatable conditions. The method produced treatment suggestion lists for 15 of these conditions. The lists captured accurate and non-trivial clinical knowledge, and all contained the key treatment for the condition, often as the first suggestion (71% overall, 90% for non-labor-related conditions). Additionally, when run on a test set of patient data, it very accurately predicted treatments (average AUC 0.873) and predicted pregnancy-specific treatments with even higher accuracy (AUC above 0.9). This method is a starting point for harnessing the wisdom of the crowd for decision support.

  5. A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods

    SciTech Connect

    Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem; Morris, Joseph P.

    2016-09-29

    We develop a local, implicit crack-tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int. J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Several three-dimensional benchmark problems are studied and results are compared with the available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.

  6. A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods

    DOE PAGES

    Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem; ...

    2016-09-29

    We develop a local, implicit crack-tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int. J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Several three-dimensional benchmark problems are studied and results are compared with the available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.

  7. A locally implicit method for fluid flow problems

    NASA Technical Reports Server (NTRS)

    Reddy, K. C.

    1986-01-01

    The fluid flow inside the space shuttle main engine (SSME) traverses through a complex geometrical configuration. The flow is compressible, viscous, and turbulent with pockets of separated regions. Several computer codes are being developed to solve three dimensional Navier-Stokes equations with different turbulence models for analyzing the SSME internal flow. The locally implicit scheme is a computationally efficient scheme which converges rapidly in multi-grid modes for elliptic problems. It has the promise of providing a rapidly converging algorithm for steady-state viscous flow problems.

  8. A Local Parabolic Method for Long Distance Wave Propagation

    DTIC Science & Technology

    2005-09-21

    centroid motion and total integrated amplitude at each point along the pulse surface. The main issue in computing these cases is that conventional... This is necessary because the Lattice Confinement terms should not depend on the scale of the quantity being confined. Another important point is that...

  9. A novel evolutionary approach to image enhancement filter design: method and applications.

    PubMed

    Hong, Jin-Hyuk; Cho, Sung-Bae; Cho, Ung-Keun

    2009-12-01

    Image enhancement is an important issue in digital image processing. Various approaches have been developed to solve image enhancement problems, but most of them require deep expert knowledge to design appropriate image filters. To automatically design a filter, we propose a novel approach based on the genetic algorithm that optimizes a set of standard filters by determining their types and order. Moreover, the proposed method is able to manage various types of noise factors. We applied the proposed method to local and global image enhancement problems such as impulsive noise reduction, interpolation, and orientation enhancement. In terms of subjective and objective evaluations, the results show the superiority of the proposed method.
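
    To illustrate the kind of search the abstract describes, the sketch below evolves an ordered sequence of standard filters with a simple genetic algorithm, scoring candidates by PSNR against a clean reference image. The filter set, genome encoding, fitness measure, and GA parameters are illustrative assumptions, not the authors' actual design.

```python
"""Sketch: evolving an ordered sequence of standard filters (illustrative, not the paper's exact GA)."""
import numpy as np
from scipy import ndimage

# Candidate standard filters; the types and parameters are assumptions for illustration.
FILTERS = [
    lambda im: ndimage.median_filter(im, size=3),
    lambda im: ndimage.gaussian_filter(im, sigma=1.0),
    lambda im: ndimage.uniform_filter(im, size=3),
    lambda im: im,                                   # identity (filter slot unused)
]

def apply_sequence(genome, image):
    for idx in genome:
        image = FILTERS[idx](image)
    return image

def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(1.0 / mse)   # images scaled to [0, 1]

def evolve(clean, noisy, genome_len=4, pop=30, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.integers(0, len(FILTERS), size=(pop, genome_len))
    for _ in range(gens):
        fitness = np.array([psnr(clean, apply_sequence(g, noisy)) for g in population])
        parents = population[np.argsort(fitness)[::-1][: pop // 2]]   # truncation selection
        children = parents.copy()
        cut = rng.integers(1, genome_len, size=len(children))
        for child, k, mate in zip(children, cut, parents[rng.permutation(len(parents))]):
            child[k:] = mate[k:]                                      # one-point crossover
        mutate = rng.random(children.shape) < 0.1
        children[mutate] = rng.integers(0, len(FILTERS), size=mutate.sum())
        population = np.vstack([parents, children])
    scores = [psnr(clean, apply_sequence(g, noisy)) for g in population]
    return population[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = ndimage.gaussian_filter(rng.random((64, 64)), 3)
    noisy = np.clip(clean + (rng.random(clean.shape) < 0.05) * 0.8, 0, 1)  # impulsive noise
    print("best filter sequence:", evolve(clean, noisy))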

  10. An Improved Local Search Learning Method for Multiple-Valued Logic Network Minimization with Bi-objectives

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Cao, Qiping; Vairappan, Catherine; Zhang, Jianchen; Tang, Zheng

    This paper describes an improved local search method for synthesizing arbitrary Multiple-Valued Logic (MVL) functions. In our approach, the MVL function is mapped from its algebraic representation (sum-of-products form) onto a multiple-layered network based on the functional completeness property. The output of the network is evaluated using two metrics, correctness and optimality. A local search embedded with chaotic dynamics is utilized to train the network in order to minimize the MVL functions. With the characteristics of pseudo-randomness, ergodicity and irregularity, both the search sequence and the solution neighbourhood generated by chaotic variables enable the system to avoid settling into local minima and improve the solution quality. Simulation results on 2-variable 4-valued MVL functions and some other large instances show that the improved local search learning algorithm outperforms traditional methods in terms of correctness and the average number of product terms required to realize a given MVL function.

  11. An observationally centred method to quantify local climate change as a distribution

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2013-04-01

    For planning and adaptation, guidance on trends in local climate is needed at the specific thresholds relevant to particular impact or policy endeavours. This requires quantifying trends at specific quantiles in distributions of variables such as daily temperature or precipitation. These non-normal distributions vary both geographically and in time. The trends in the relevant quantiles may not simply follow the trend in the distribution mean. We present a method[1] for analysing local climatic timeseries data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily temperature from specific locations across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions to changing climate. Geographical location and temperature are treated as independent variables; we thus obtain as outputs how the trend or sensitivity varies with temperature (or occurrence likelihood) and with geographical location. These sensitivities are found to vary geographically across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impact. [1] S C
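
    The core of the idea, comparing the empirical distribution of a local variable between two periods at fixed quantiles rather than through the mean alone, can be sketched as below. The synthetic data, quantile grid, and interpolation-based quantile estimator are assumptions for illustration, not the authors' exact deconstruction.

```python
"""Sketch: change in daily-temperature quantiles between two observation periods (illustrative)."""
import numpy as np

def quantile_change(early, late, probs=np.linspace(0.05, 0.95, 19)):
    """Difference between the empirical quantiles of two observation periods.

    A large change at, say, the 0.95 quantile but not at the median indicates that
    the warm tail of the local distribution is more sensitive than the centre.
    """
    q_early = np.quantile(early, probs)
    q_late = np.quantile(late, probs)
    return probs, q_late - q_early

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic daily temperatures: the later period warms mostly in the upper tail.
    early = rng.gamma(shape=8.0, scale=1.5, size=30 * 365)
    late = early + 0.2 + 0.8 * (early > np.quantile(early, 0.8))
    for p, d in zip(*quantile_change(early, late)):
        print(f"q={p:.2f}  change={d:+.2f} degC")
```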

  12. A localized re-initialization equation for the conservative level set method

    NASA Astrophysics Data System (ADS)

    McCaslin, Jeremy O.; Desjardins, Olivier

    2014-04-01

    The conservative level set methodology for interface transport is modified to allow for localized level set re-initialization. This approach is suitable to applications in which there is a significant amount of spatial variability in level set transport. The steady-state solution of the modified re-initialization equation matches that of the original conservative level set provided an additional Eikonal equation is solved, which can be done efficiently through a fast marching method (FMM). Implemented within the context of the accurate conservative level set method (ACLS) (Desjardins et al., 2008, [6]), the FMM solution of this Eikonal equation comes at no additional cost. A metric for the appropriate amount of local re-initialization is proposed based on estimates of local flow deformation and numerical diffusion. The method is compared to standard global re-initialization for two test cases, yielding the expected results that minor differences are observed for Zalesak's disk, and improvements in both mass conservation and interface topology are seen for a drop deforming in a vortex. Finally, the method is applied to simulation of a viscously damped standing wave and a three-dimensional drop impacting on a shallow pool. Negligible differences are observed for the standing wave, as expected. For the last case, results suggest that spatially varying re-initialization provides a reduction in spurious interfacial corrugations, improvements in the prediction of radial growth of the splashing lamella, and a reduction in conservation errors, as well as a reduction in overall computational cost that comes from improved conditioning of the pressure Poisson equation due to the removal of spurious corrugations.

  13. Novel approach for talc pleurodesis by dedicated catheter through flexi-rigid thoracoscope under local anesthesia.

    PubMed

    Ishida, Atsuko; Nakamura, Miho; Miyazawa, Teruomi; Astoul, Philippe

    2011-05-01

    For pleurodesis, talc administered by poudrage is usually insufflated blindly from a single port of entry using the standard method with a small-diameter rigid thoracoscope. In order to visually perform talc poudrage from a single port, we introduced a catheter technique through a flexi-rigid thoracoscope. Patients with uncontrolled and symptomatic pleural effusion requiring pleurodesis underwent flexi-rigid thoracoscopy under local anesthesia for talc poudrage. A dedicated catheter with 2.1-mm inner diameter was connected to a talc atomizer and inserted through the working channel of the flexi-rigid thoracoscope to insufflate talc into the pleural cavity under visualization. Nine patients were included in this study. Three patients were >75 years old, and two had a Karnofsky performance status of 50. Three patients received propofol for sedation and six were not sedated. Mean operative time was 30.8 min for all patients, and 21.3 min for cases without sedation. All procedures were performed easily under clear visualization with no major complications or catheter obstructions. This novel approach for talc pleurodesis using a catheter was well-tolerated and seems feasible for patients with uncontrolled pleural effusion. We consider this technique useful even for difficult cases, such as elderly patients or those with relatively low performance status.

  14. Typicality approach to the optical conductivity in thermal and many-body localized phases

    NASA Astrophysics Data System (ADS)

    Steinigeweg, Robin; Herbrych, Jacek; Pollmann, Frank; Brenig, Wolfram

    2016-11-01

    We study the frequency dependence of the optical conductivity Re σ(ω) of the Heisenberg spin-1/2 chain in the thermal phase and near the transition to the many-body localized phase induced by the strength of a random z-directed magnetic field. Using the method of dynamical quantum typicality, we calculate the real-time dynamics of the spin-current autocorrelation function and obtain the Fourier transform Re σ(ω) for system sizes much larger than accessible to standard exact-diagonalization approaches. We find that the low-frequency behavior of Re σ(ω) is well described by Re σ(ω) ≈ σ_dc + a|ω|^α, with α ≈ 1 in a wide range within the thermal phase and close to the transition. We particularly detail the decrease of σ_dc in the thermal phase as a function of increasing disorder for strong exchange anisotropies. We further find that the temperature dependence of σ_dc is consistent with the existence of a mobility edge.

  15. Simplified approaches to some nonoverlapping domain decomposition methods

    SciTech Connect

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the “parallel subspace correction” or “additive Schwarz” method, and other simple technical tools include “local-global” and “global-local” techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the “substructuring method” and the other, based on local Neumann problems, is related to the “Neumann-Neumann method” and “balancing method”. All these methods will be presented in a systematic and coherent manner and the analysis for both two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.

  16. Locally-calibrated light transmission visualization methods to quantify nonaqueous phase liquid mass in porous media

    NASA Astrophysics Data System (ADS)

    Wang, Huaguo; Chen, Xiaosong; Jawitz, James W.

    2008-11-01

    Five locally-calibrated light transmission visualization (LTV) methods were tested to quantify nonaqueous phase liquid (NAPL) mass and mass reduction in porous media. Tetrachloroethylene (PCE) was released into a two-dimensional laboratory flow chamber packed with water-saturated sand, which was then flushed with a surfactant solution (2% Tween 80) until all of the PCE had been dissolved. In all the LTV methods employed here, the water phase was dyed, rather than the more common approach of dyeing the NAPL phase, such that the light absorption characteristics of the NAPL did not change as dissolution progressed. Also, none of the methods used here required the use of external calibration chambers. The five visualization approaches evaluated included three methods developed from previously published models, a binary method, and a novel multiple wavelength method that has the advantage of not requiring any assumptions about the intra-pore interface structure between the various phases (sand/water/NAPL). The new multiple wavelength method is also expected to be applicable to any translucent porous media containing two immiscible fluids (e.g., water-air, water-NAPL). Results from the sand-water-PCE system evaluated here showed that the model that assumes wetting media of uniform pore size (Model C of Niemet and Selker, 2001) and the multiple wavelength model with no interface structure assumptions were able to accurately quantify PCE mass reduction during surfactant flushing. The average mass recoveries from these two imaging methods were greater than 95% for domain-average NAPL saturations of approximately 2.6 × 10⁻², and were approximately 90% during seven cycles of surfactant flushing that sequentially reduced the average NAPL saturation to 7.5 × 10⁻⁴.

  17. Local Hamiltonians for quantitative Green's function embedding methods

    NASA Astrophysics Data System (ADS)

    Rusakov, Alexander A.; Phillips, Jordan J.; Zgid, Dominika

    2014-11-01

    Embedding calculations that find approximate solutions to the Schrödinger equation for large molecules and realistic solids are performed commonly in a three step procedure involving (i) construction of a model system with effective interactions approximating the low energy physics of the initial realistic system, (ii) mapping the model system onto an impurity Hamiltonian, and (iii) solving the impurity problem. We have developed a novel procedure for parametrizing the impurity Hamiltonian that avoids the mathematically uncontrolled step of constructing the low energy model system. Instead, the impurity Hamiltonian is immediately parametrized to recover the self-energy of the realistic system in the limit of high frequencies or short time. The effective interactions parametrizing the fictitious impurity Hamiltonian are local to the embedded regions, and include all the non-local interactions present in the original realistic Hamiltonian in an implicit way. We show that this impurity Hamiltonian can lead to excellent total energies and self-energies that approximate the quantities of the initial realistic system very well. Moreover, we show that as long as the effective impurity Hamiltonian parametrization is designed to recover the self-energy of the initial realistic system for high frequencies, we can expect a good total energy and self-energy. Finally, we propose two practical ways of evaluating effective integrals for parametrizing impurity models.

  18. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    PubMed Central

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
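
    A minimal sketch of the local-calibration idea follows, assuming a correlation-based similarity in place of the grey relation coefficient and a simple variance ranking in place of the SIMPLISMA wavelength selection; the data, parameters, and function names are illustrative.

```python
"""Sketch: local calibration -- pick the most similar calibration spectra, select
informative wavelengths, then fit a PLS model (similarity and wavelength ranking
are simplified stand-ins for the grey-relation and SIMPLISMA steps of the paper)."""
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(X_cal, y_cal, x_new, n_local=30, n_wave=50, n_comp=5):
    # 1. Similarity between the unknown spectrum and every calibration spectrum.
    sim = np.array([np.corrcoef(x_new, xi)[0, 1] for xi in X_cal])
    local = np.argsort(sim)[::-1][:n_local]                   # most similar samples

    # 2. Rank wavelengths by their variance within the local calibration subset.
    informative = np.argsort(X_cal[local].var(axis=0))[::-1][:n_wave]

    # 3. Fit PLS on the local, wavelength-reduced calibration set and predict.
    pls = PLSRegression(n_components=n_comp)
    pls.fit(X_cal[np.ix_(local, informative)], y_cal[local].reshape(-1, 1))
    return float(pls.predict(x_new[informative].reshape(1, -1)).ravel()[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wave = np.linspace(0, 1, 200)
    conc = rng.uniform(20, 200, size=120)                      # µg/mL
    peak = np.exp(-((wave - 0.4) / 0.05) ** 2)
    X = conc[:, None] * peak[None, :] + rng.normal(0, 1.0, (120, 200))
    print(local_pls_predict(X[:-1], conc[:-1], X[-1]), "vs true", conc[-1])
```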

  19. Practical approaches for assessing local land use change and conservation priorities in the tropics

    NASA Astrophysics Data System (ADS)

    Rivas, Cassandra J.

    Tropical areas typically support high biological diversity; however, many are experiencing rapid land-use change. The resulting loss, fragmentation, and degradation of habitats place biodiversity at risk. For these reasons, the tropics are frequently identified as global conservation hotspots. Safeguarding tropical biodiversity necessitates successful and efficient conservation planning and implementation at local scales, where land use decisions are made and enforced. Yet, despite considerable agreement on the need for improved practices, planning may be difficult due to limited resources, such as funding, data, and expertise, especially for small conservation organizations in tropical developing countries. My thesis aims to assist small, non-governmental organizations (NGOs), operating in tropical developing countries, in overcoming resource limitations by providing recommendations for improved conservation planning. Following a brief introduction in Chapter 1, I present a literature review of systematic conservation planning (SCP) projects in the developing tropics. Although SCP is considered an efficient, effective approach, it requires substantial data and expertise to conduct the analysis and may present challenges for implementation. I reviewed and synthesized the methods and results of 14 case studies to identify practical ways to implement and overcome limitations for employing SCP. I found that SCP studies in the peer-reviewed literature were primarily implemented by researchers in large organizations or institutions, as opposed to on-the-ground conservation planners. A variety of data types were used in the SCP analyses, many of which data are freely available. Few case studies involved stakeholders and intended to implement the assessment; instead, the case studies were carried out in the context of research and development, limiting local involvement and implementation. Nonetheless, the studies provided valuable strategies for employing each step of

  20. How Nectar-Feeding Bats Localize their Food: Echolocation Behavior of Leptonycteris yerbabuenae Approaching Cactus Flowers

    PubMed Central

    Koblitz, Jens C.; Fleming, Theodore H.; Medellín, Rodrigo A.; Kalko, Elisabeth K. V.; Schnitzler, Hans-Ulrich; Tschapka, Marco

    2016-01-01

    Nectar-feeding bats show morphological, physiological, and behavioral adaptations for feeding on nectar. How they find and localize flowers is still poorly understood. While scent cues alone allow no precise localization of a floral target, the spatial properties of flower echoes are very precise and could play a major role, particularly at close range. The aim of this study is to understand the role of echolocation for classification and localization of flowers. We compared the approach behavior of Leptonycteris yerbabuenae to flowers of a columnar cactus, Pachycereus pringlei, with that to an acrylic hollow hemisphere that is acoustically conspicuous to bats, but has different acoustic properties and, contrary to the cactus flower, presents no scent. For recording the flight and echolocation behaviour we used two infrared video cameras under stroboscopic illumination synchronized with ultrasound recordings. During search flights all individuals identified both targets as a possible food source and initiated an approach flight; however, they visited only the cactus flower. In experiments with the acrylic hemisphere bats aborted the approach at ca. 40–50 cm. In the last instant before the flower visit the bats emitted a long terminal group of 10–20 calls. This is the first report of this behaviour for a nectar-feeding bat. Our findings suggest that L. yerbabuenae use echolocation for classification and localization of cactus flowers and that the echo-acoustic characteristics of the flower guide the bats directly to the flower opening. PMID:27684373

  1. A hybrid passive localization method under strong interference with a preliminary experimental demonstration

    NASA Astrophysics Data System (ADS)

    Lei, Bo; Yang, Yixin; Yang, Kunde; Wang, Yong; Shi, Yang

    2016-12-01

    Strong interference exists in many passive localization problems and may lead to the inefficacy of traditional localization methods. In this study, a hybrid passive localization method is proposed to address strong interference. This method combines generalized cross-correlation and interference cancellation for time-difference-of-arrival (TDOA) measurement, followed by a time-delay-based iterative localization method. The proposed method is applied to a preliminary experiment using three hydrophones. The TDOAs estimated by the proposed method are compared with those obtained by the particle filtering method. Results show that the positions are in agreement when the TDOAs are accurately obtained. Furthermore, the proposed method is more capable of localization in the presence of a strong moving jamming source.
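
    The time-difference-of-arrival stage can be sketched with a standard generalized cross-correlation using a phase transform (GCC-PHAT) weighting, as below; the interference-cancellation step and the iterative localization of the paper are not reproduced, and the signal model is an assumption for illustration.

```python
"""Sketch: TDOA between two channels via generalized cross-correlation (GCC-PHAT)."""
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12                            # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    cc = np.concatenate((cc[-(n // 2):], cc[: n // 2 + 1]))   # centre the zero lag
    lag = np.argmax(np.abs(cc)) - n // 2
    return lag / fs

if __name__ == "__main__":
    fs = 48_000
    rng = np.random.default_rng(0)
    src = rng.normal(size=fs)                                 # broadband source, 1 s
    delay = 37                                                # samples (~0.77 ms)
    ch1 = src + 0.05 * rng.normal(size=fs)
    ch2 = np.roll(src, delay) + 0.05 * rng.normal(size=fs)
    print("estimated TDOA:", gcc_phat_tdoa(ch2, ch1, fs), "s, true:", delay / fs, "s")
```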

  2. An Evaluation of the New Approach Method--Final Report.

    ERIC Educational Resources Information Center

    Powers, Donald E.

    The New Approach Method (NAM) is an innovative reading program relying heavily on a phonics approach. The mode of presentation is a cassette tape recorder, which the child is taught to operate at the beginning of the program. The NAM lessons were administered to children at four NAM mini centers; a group of parents administered the NAM lessons to…

  3. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a ferromagnetic target localization method with high real-time performance. Only one measurement point is required in the STAR method and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and an inaccurate position value leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate which meets the requirement of real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those estimated by the traditional STAR method. PMID:27999322

  4. An improved method for Daugman's iris localization algorithm.

    PubMed

    Ren, Xinying; Peng, Zhiyong; Zeng, Qingning; Peng, Chaonan; Zhang, Jianhua; Wu, Shuicai; Zeng, Yanjun

    2008-01-01

    Computer-based automatic recognition of persons for security reasons is highly desirable. Iris patterns provide an opportunity for separation of individuals to an extent that would avoid false positives and negatives. The current standard for this science is Daugman's iris localization algorithm. Part of the time required for analysis and comparison with other images relates to eyelid and eyelash positioning and length. We sought to remove the upper and lower eyelids and eyelashes to determine if separation of individuals could still be attained. Our experiments suggest separation can be achieved as effectively and more quickly by removing distracting and variable features while retaining enough stable factors in the iris to enable accurate identification.

  5. 3D handheld laser scanner based approach for automatic identification and localization of EEG sensors.

    PubMed

    Koessler, Laurent; Cecchin, Thierry; Ternisien, Eric; Maillard, Louis

    2010-01-01

    This paper describes and assesses for the first time the use of a handheld 3D laser scanner for scalp EEG sensor localization and co-registration with magnetic resonance images. Study on five subjects showed that the scanner had an equivalent accuracy, a better repeatability, and was faster than the reference electromagnetic digitizer. According to electrical source imaging, somatosensory evoked potentials experiments validated its ability to give precise sensor localization. With our automatic labeling method, the data provided by the scanner could be directly introduced in the source localization studies.

  6. Method for optical coherence tomography image classification using local features and earth mover's distance

    NASA Astrophysics Data System (ADS)

    Sun, Yankui; Lei, Ming

    2009-09-01

    Optical coherence tomography (OCT) is a recent imaging method that allows high-resolution, cross-sectional imaging through tissues and materials. Over the past 18 years, OCT has been successfully used in disease diagnosis, biomedical research, material evaluation, and many other domains. As OCT is a recent imaging method, surgeons have so far had limited experience using it. In addition, the number of images obtained from the imaging device is too large, so we need an automated method to analyze them. We propose a novel method for automated classification of OCT images based on local features and earth mover's distance (EMD). We evaluated our algorithm using an OCT image set which contains two kinds of skin images, normal skin and nevus flammeus. Experimental results demonstrate the effectiveness of our method, which achieved a classification accuracy of 0.97 for an EMD+KNN scheme and 0.99 for an EMD+SVM (support vector machine) scheme, much higher than that of the previous method. Our approach is especially suitable for nonhomogeneous images and could be applied to a wide range of OCT images.
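
    A minimal sketch of an EMD-plus-nearest-neighbour classifier in the spirit of the abstract is given below, assuming simple patch intensity histograms as the local signatures; the paper's actual local features and matching scheme are not reproduced.

```python
"""Sketch: nearest-neighbour image classification with an earth mover's distance
on local intensity histograms (signatures here are plain patch histograms,
not the paper's actual local features)."""
import numpy as np
from scipy.stats import wasserstein_distance

def patch_histograms(image, patch=16, bins=32):
    """Split the image into patches and return one normalized histogram per patch."""
    h, w = image.shape
    hists = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            hist, _ = np.histogram(image[i:i + patch, j:j + patch],
                                   bins=bins, range=(0.0, 1.0), density=True)
            hists.append(hist)
    return np.array(hists)

def image_distance(hists_a, hists_b, bins=32):
    """Average 1-D EMD between corresponding patch histograms."""
    centers = (np.arange(bins) + 0.5) / bins
    return np.mean([wasserstein_distance(centers, centers, a + 1e-12, b + 1e-12)
                    for a, b in zip(hists_a, hists_b)])

def knn_classify(train_images, train_labels, test_image, k=3):
    test_h = patch_histograms(test_image)
    d = [image_distance(test_h, patch_histograms(im)) for im in train_images]
    nearest = np.argsort(d)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normals = [rng.beta(2, 5, (64, 64)) for _ in range(5)]   # darker synthetic texture
    lesions = [rng.beta(5, 2, (64, 64)) for _ in range(5)]   # brighter synthetic texture
    labels = ["normal"] * 5 + ["nevus"] * 5
    print(knn_classify(normals + lesions, labels, rng.beta(5, 2, (64, 64))))
```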

  7. Localized surface plasmon resonance mercury detection system and methods

    DOEpatents

    James, Jay; Lucas, Donald; Crosby, Jeffrey Scott; Koshland, Catherine P.

    2016-03-22

    A mercury detection system that includes a flow cell having a mercury sensor, a light source and a light detector is provided. The mercury sensor includes a transparent substrate and a submonolayer of mercury absorbing nanoparticles, e.g., gold nanoparticles, on a surface of the substrate. Methods of determining whether mercury is present in a sample using the mercury sensors are also provided. The subject mercury detection systems and methods find use in a variety of different applications, including mercury detecting applications.

  8. Stable and accurate difference methods for seismic wave propagation on locally refined meshes

    NASA Astrophysics Data System (ADS)

    Petersson, A.; Rodgers, A.; Nilsson, S.; Sjogreen, B.; McCandless, K.

    2006-12-01

    To overcome some of the shortcomings of previous numerical methods for the elastic wave equation subject to stress-free boundary conditions, we are incorporating recent results from numerical analysis to develop a new finite difference method which discretizes the governing equations in second order displacement formulation. The most challenging aspect of finite difference methods for time dependent hyperbolic problems is clearly stability and some previous methods are known to be unstable when the material has a compressional velocity which exceeds about three times the shear velocity. Since the material properties in seismic applications often vary rapidly on the computational grid, the most straight forward approach for guaranteeing stability is through an energy estimate. For a hyperbolic system in second order formulation, the key to an energy estimate is a spatial discretization which is self-adjoint, i.e. corresponds to a symmetric or symmetrizable matrix. At the same time we want the scheme to be efficient and fully explicit, so only local operations are necessary to evolve the solution in the interior of the domain as well as on the free-surface boundary. Furthermore, we want the solution to be accurate when the data is smooth. Using these specifications, we developed an explicit second order accurate discretization where stability is guaranteed through an energy estimate for all ratios Cp/Cs. An implementation of our finite difference method was used to simulate ground motions during the 1906 San Francisco earthquake on a uniform grid with grid sizes down to 100 meters corresponding to over 4 Billion grid points. These simulations were run on 1024 processors on one of the supercomputers at Lawrence Livermore National Lab. To reduce the computational requirements for these simulations, we are currently extending the numerical method to use a locally refined mesh where the mesh size approximately follows the velocity structure in the domain. Some
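
    A one-dimensional analogue of the second-order displacement formulation with a self-adjoint (symmetric) spatial operator is sketched below; it only illustrates the energy-stable construction and is not the authors' three-dimensional elastic scheme. Grid size, material jump, and CFL factor are assumptions.

```python
"""Sketch: explicit second-order scheme for the 1-D wave equation in displacement
form, rho * u_tt = d/dx( mu * u_x ), with a self-adjoint spatial operator
(a 1-D analogue of the energy-stable construction, not the authors' 3-D code)."""
import numpy as np

def step_wave(u_prev, u_curr, rho, mu, dx, dt):
    """One explicit time step; mu is sampled at cell faces (len = len(u) - 1)."""
    flux = mu * np.diff(u_curr) / dx               # mu * u_x at cell faces
    div = np.zeros_like(u_curr)
    div[1:-1] = np.diff(flux) / dx                 # d/dx(mu * u_x) at interior nodes
    u_next = 2 * u_curr - u_prev + dt**2 * div / rho
    u_next[0] = u_next[-1] = 0.0                   # fixed ends (free surface omitted)
    return u_next

if __name__ == "__main__":
    n, L = 401, 1.0
    x = np.linspace(0, L, n)
    dx = x[1] - x[0]
    rho = np.ones(n)
    mu = np.where((x[:-1] + x[1:]) / 2 < 0.5, 1.0, 4.0)   # material jump at x = 0.5
    c_max = np.sqrt(mu.max() / rho.min())
    dt = 0.5 * dx / c_max                                  # CFL-limited time step
    u_prev = np.exp(-((x - 0.25) / 0.02) ** 2)             # Gaussian displacement pulse
    u_curr = u_prev.copy()
    for _ in range(800):
        u_prev, u_curr = u_curr, step_wave(u_prev, u_curr, rho, mu, dx, dt)
    print("max |u| after 800 steps:", np.abs(u_curr).max())
```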

  9. Grid-Search Location Methods for Ground-Truth Collection from Local and Regional Seismic Networks

    SciTech Connect

    Schultz, C A; Rodi, W; Myers, S C

    2003-07-24

    The objective of this project is to develop improved seismic event location techniques that can be used to generate more and better quality reference events using data from local and regional seismic networks. Their approach is to extend existing methods of multiple-event location with more general models of the errors affecting seismic arrival time data, including picking errors and errors in model-based travel-times (path corrections). Toward this end, they are integrating a grid-search based algorithm for multiple-event location (GMEL) with a new parameterization of travel-time corrections and new kriging method for estimating the correction parameters from observed travel-time residuals. Like several other multiple-event location algorithms, GMEL currently assumes event-independent path corrections and is thus restricted to small event clusters. The new parameterization assumes that travel-time corrections are a function of both the event and station location, and builds in source-receiver reciprocity and correlation between the corrections from proximate paths as constraints. The new kriging method simultaneously interpolates travel-time residuals from multiple stations and events to estimate the correction parameters as functions of position. They are currently developing the algorithmic extensions to GMEL needed to combine the new parameterization and kriging method with the simultaneous location of events. The result will be a multiple-event location method which is applicable to non-clustered, spatially well-distributed events. They are applying the existing components of the new multiple-event location method to a data set of regional and local arrival times from Nevada Test Site (NTS) explosions with known origin parameters. Preliminary results show the feasibility and potential benefits of combining the location and kriging techniques. They also show some preliminary work on generalizing of the error model used in GMEL with the use of mixture
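
    The single-event core of a grid-search location can be sketched as below, assuming a homogeneous velocity model and analytic elimination of the origin time; the multiple-event coupling, path corrections, and kriging of GMEL are omitted.

```python
"""Sketch: single-event grid-search location with a homogeneous velocity model
(the multiple-event coupling, path corrections, and kriging of GMEL are omitted)."""
import numpy as np

def grid_search_locate(stations, arrivals, v=6.0, x_grid=None, y_grid=None):
    """Minimize the RMS arrival-time residual over a 2-D epicentre grid.

    The origin time is eliminated analytically: for a trial epicentre the best
    origin time is the mean of (observed arrival - predicted travel time).
    """
    if x_grid is None:
        x_grid = np.linspace(-50, 50, 201)
    if y_grid is None:
        y_grid = np.linspace(-50, 50, 201)
    best = (np.inf, None)
    for x in x_grid:
        for y in y_grid:
            tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
            t0 = np.mean(arrivals - tt)
            rms = np.sqrt(np.mean((arrivals - (t0 + tt)) ** 2))
            if rms < best[0]:
                best = (rms, (x, y, t0))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stations = rng.uniform(-40, 40, size=(8, 2))              # km
    true_xy, true_t0, v = np.array([12.0, -7.0]), 3.0, 6.0    # km, s, km/s
    arrivals = true_t0 + np.hypot(*(stations - true_xy).T) / v
    arrivals += rng.normal(0, 0.05, size=len(arrivals))       # picking errors
    rms, (x, y, t0) = grid_search_locate(stations, arrivals, v=v)
    print(f"located at ({x:.1f}, {y:.1f}) km, t0={t0:.2f} s, rms={rms:.3f} s")
```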

  10. An efficient implementation of the localized operator partitioning method for electronic energy transfer

    SciTech Connect

    Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F.

    2015-02-28

    The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH₂-naphthyl groups as subsystems and examine their electronic energies and populations for several excited states using the configuration interaction singles method. The implemented approach shows a wide variety of different behaviors amongst the excited electronic states.

  11. A novel approach to decoy set generation: designing a physical energy function having local minima with native structure characteristics.

    PubMed

    Keasar, Chen; Levitt, Michael

    2003-05-23

    We suggest a new approach to the generation of candidate structures (decoys) for ab initio prediction of protein structures. Our method is based on random sampling of conformation space and subsequent local energy minimization. At the core of this approach lies the design of a novel type of energy function. This energy function has local minima with native structure characteristics and wide basins of attraction. The current work presents our motivation for deriving such an energy function and also tests the derived energy function. Our approach is novel in that it takes advantage of the inherently rough energy landscape of proteins, which is generally considered a major obstacle for protein structure prediction. When local minima have wide basins of attraction, the protein's conformation space can be greatly reduced by the convergence of large regions of the space into single points, namely the local minima corresponding to these funnels. We have implemented this concept by an iterative process. The potential is first used to generate decoy sets and then we study these sets of decoys to guide further development of the potential. A key feature of our potential is the use of cooperative multi-body interactions that mimic the role of the entropic and solvent contributions to the free energy. The validity and value of our approach is demonstrated by applying it to 14 diverse, small proteins. We show that, for these proteins, the size of conformation space is considerably reduced by the new energy function. In fact, the reduction is so substantial as to allow efficient conformational sampling. As a result we are able to find a significant number of near-native conformations in random searches performed with limited computational resources.

  12. An integrated lean-methods approach to hospital facilities redesign.

    PubMed

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  13. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is

  14. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
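
    For reference, the classical Otsu threshold that the improved TSM builds on can be computed directly from an image histogram, as in the sketch below; the synthetic sonar-like data are an assumption for illustration.

```python
"""Sketch: the classical Otsu threshold from an image histogram (the baseline
that the improved TSM of the paper modifies)."""
import numpy as np

def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # class probability below the threshold
    w1 = 1.0 - w0
    m0 = np.cumsum(p * centers)                # cumulative class mean (unnormalized)
    mt = m0[-1]                                # global mean
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(bins)
    between[valid] = (mt * w0[valid] - m0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]         # maximize the between-class variance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.normal(0.2, 0.05, 5000)
    target = rng.normal(0.7, 0.05, 1000)       # bright sonar return
    image = np.concatenate([background, target]).reshape(100, 60)
    t = otsu_threshold(image)
    print("threshold:", round(t, 3), "segmented pixels:", int((image > t).sum()))
```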

  15. Percutaneous Irreversible Electroporation of Locally Advanced Pancreatic Carcinoma Using the Dorsal Approach: A Case Report

    SciTech Connect

    Scheffer, Hester J. Melenhorst, Marleen C. A. M.; Vogel, Jantien A.; Tilborg, Aukje A. J. M. van; Nielsen, Karin Kazemier, Geert; Meijerink, Martijn R.

    2015-06-15

    Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to the vicinity of the tumor to collateral vessels and the duodenum, the dorsal approach was chosen. Under CT guidance, six electrodes were advanced into the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case shows that when ventral electrode placement for pancreatic IRE is impaired, the dorsal approach can be considered as an alternative.

  16. A Cross-Layer User Centric Vertical Handover Decision Approach Based on MIH Local Triggers

    NASA Astrophysics Data System (ADS)

    Rehan, Maaz; Yousaf, Muhammad; Qayyum, Amir; Malik, Shahzad

    Vertical handover decision algorithms that are based on user preferences and coupled with Media Independent Handover (MIH) local triggers have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts, viz. a mechanism for collecting and storing user preferences, the Vertical Handover Decision (VHOD) algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH User and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in a WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.

  17. The local maxima method for enhancement of time-frequency map and its application to local damage detection in rotating machines

    NASA Astrophysics Data System (ADS)

    Obuchowski, Jakub; Wyłomańska, Agnieszka; Zimroz, Radosław

    2014-06-01

    In this paper a new method of fault detection in rotating machinery is presented. It is based on vibration time series analysis in the time-frequency domain. A raw vibration signal is decomposed via the short-time Fourier transform (STFT). The time-frequency map is treated as an M×N matrix consisting of N sub-signals of length M. Each sub-signal is considered as a time series and may be interpreted as the energy variation in a narrow frequency bin. Each sub-signal is processed using a novel approach called the local maxima method. Basically, we search for local maxima because they should appear in the signal if local damage in bearings or a gearbox exists. Finally, information from all sub-signals is combined in order to validate the impulsive behavior of the energy. Due to the random character of the obtained time series, each maximum occurrence has to be checked for its significance. If there are time points for which the average number of local maxima over all sub-signals is significantly higher than for the other time instances, then the location of these maxima is “weighted” as more important (at such a time instance, the local maxima across a set of frequency bins Δf create a pattern on the time-frequency map). This information, called the vector of weights, is used for enhancement of the spectrogram. When the vector of weights is applied to the spectrogram, non-informative energy is suppressed while informative features are enhanced. If the distribution of local maxima on the spectrogram creates a pattern of wide-band cyclic energy growth, the machine is suspected of being damaged. For the healthy condition, the vector of the average number of maxima for each time point should not have outliers; the aggregation of information from all sub-signals is rather random and does not create any pattern. The method is illustrated by analysis of very noisy real and simulated signals.
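
    A minimal sketch of the weighting step is given below: an STFT is computed, local maxima are counted along time for each frequency bin, and the resulting vector of weights rescales the spectrogram. The significance test of the paper is replaced here by a simple mean threshold, and the simulated signal is an illustrative assumption.

```python
"""Sketch: enhancing a spectrogram with a local-maxima weight vector (the paper's
significance test is replaced here by a simple mean threshold)."""
import numpy as np
from scipy.signal import stft, argrelmax

def local_maxima_weights(x, fs, nperseg=256):
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    S = np.abs(Z)                                   # time-frequency map, shape (N_f, N_t)
    counts = np.zeros(S.shape[1])
    for sub in S:                                   # each row = energy of one frequency bin
        idx = argrelmax(sub)[0]                     # local maxima along time
        counts[idx] += 1
    weights = np.where(counts > counts.mean(), counts / counts.max(), 0.0)
    return f, t, S * weights[None, :], weights      # enhanced map + vector of weights

if __name__ == "__main__":
    fs, T = 10_000, 2.0
    time = np.arange(int(fs * T)) / fs
    x = 0.5 * np.random.default_rng(0).normal(size=time.size)                    # background noise
    x += (np.sin(2 * np.pi * np.arange(time.size) / (fs / 20)) > 0.999) * 3.0    # cyclic impulses
    f, t, enhanced, w = local_maxima_weights(x, fs)
    print("time instants flagged as informative:", int((w > 0).sum()), "of", w.size)
```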

  18. An Ensemble Method for Predicting Subnuclear Localizations from Primary Protein Structures

    PubMed Central

    Han, Guo Sheng; Yu, Zu Guo; Anh, Vo; Krishnajith, Anaththa P. D.; Tian, Yu-Chu

    2013-01-01

    Background: Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information including Gene Ontology annotations and kernel fusion have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; another is to develop an ensemble method to improve prediction performance using comprehensive information represented in the form of high dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: Lei dataset, multi-localization dataset, SNL9 dataset and a new independent dataset. The overall accuracy of prediction for 6 localizations on Lei dataset is 75.2% and that for 9 localizations on SNL9 dataset is 72.1% in the leave-one-out cross validation, 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with those existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method. It is freely available

  19. Localized dynamic light scattering: a new approach to dynamic measurements in optical microscopy.

    PubMed

    Meller, A; Bar-Ziv, R; Tlusty, T; Moses, E; Stavans, J; Safran, S A

    1998-03-01

    We present a new approach to probing single-particle dynamics that uses dynamic light scattering from a localized region. By scattering a focused laser beam from a micron-size particle, we measure its spatial fluctuations via the temporal autocorrelation of the scattered intensity. We demonstrate the applicability of this approach by measuring the three-dimensional force constants of a single bead and a pair of beads trapped by laser tweezers. The scattering equations that relate the scattered intensity autocorrelation to the particle position correlation function are derived. This technique has potential applications for measurement of biomolecular force constants and probing viscoelastic properties of complex media.

  20. Gaussian Process Regression Plus Method for Localization Reliability Improvement.

    PubMed

    Liu, Kehan; Meng, Zhaopeng; Own, Chung-Ming

    2016-07-29

    Location data are among the most widely used context data in context-aware and ubiquitous computing applications. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The most useful method is focused on the received signal strength and provides a set of signal transmission access points. However, compiling a manual measuring Received Signal Strength (RSS) fingerprint database involves high costs and thus is impractical in an online prediction environment. The system used in this study relied on the Gaussian process method, which is a nonparametric model that can be characterized completely by using the mean function and the covariance matrix. In addition, the Naive Bayes method was used to verify and simplify the computation of precise predictions. The authors conducted several experiments on simulated and real environments at Tianjin University. The experiments examined distinct data size, different kernels, and accuracy. The results showed that the proposed method not only can retain positioning accuracy but also can save computation time in location predictions.
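
    A minimal sketch of the regression stage is given below, mapping an RSS fingerprint vector to 2-D coordinates with scikit-learn's Gaussian process regressor; the simulated path-loss model, kernel choice, and access-point layout are assumptions, and the Naive Bayes verification stage of the paper is omitted.

```python
"""Sketch: indoor positioning by Gaussian process regression on RSS fingerprints
(the Naive Bayes verification stage of the paper is omitted)."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Simulated training fingerprints: RSS from 4 access points at assumed positions.
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
train_xy = rng.uniform(0, 10, size=(80, 2))

def rss(points):
    d = np.linalg.norm(points[:, None, :] - aps[None, :, :], axis=2)
    return -40.0 - 20.0 * np.log10(d + 0.1)                  # simple log-distance path loss

X_train = rss(train_xy) + rng.normal(0, 1.0, (80, 4))        # measured fingerprints (dBm)

# One multi-output GP: RSS vector -> (x, y) coordinates.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, train_xy)

test_xy = np.array([[3.0, 7.0]])
pred = gp.predict(rss(test_xy) + rng.normal(0, 1.0, (1, 4)))
print("true:", test_xy[0], "predicted:", np.round(pred[0], 2))
```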

  1. Development of acoustic sniper localization methods and models

    NASA Astrophysics Data System (ADS)

    Grasing, David; Ellwood, Benjamin

    2010-04-01

    A novel examination of a method capable of providing situational awareness of sniper fire from small arms fire is presented. Situational Awareness (SA) information is extracted by exploiting two distinct sounds created by small arms discharge: the muzzle blast (created when the bullet leaves the barrel of the gun) and the shockwave (sound created by a supersonic bullet). The direction of arrival associated with the muzzle blast will always point in the direction of the shooter. Range can be estimated from the muzzle blast alone; however, at greater distances geometric dilution of precision will make obtaining accurate range estimates difficult. To address this issue, additional information obtained from the shockwave is utilized in order to estimate the range to the shooter. The focus of the paper is the development of a shockwave propagation model, the development of ballistics models (based on empirical measurements), and the subsequent application towards methods of determining shooter position. Knowledge of the round's ballistics is required to estimate the range to the shooter. Many existing methods rely on extracting information from the shockwave in an attempt to identify the round type and thus the ballistic model to use ([1]). It has been our experience that this information becomes unreliable at greater distances or in high noise environments. Our method differs from existing solutions in that classification of the round type is not required, thus making the proposed solution more robust. Additionally, we demonstrate that sufficient accuracy can be achieved without the need to classify the round.

  2. Gaussian Process Regression Plus Method for Localization Reliability Improvement

    PubMed Central

    Liu, Kehan; Meng, Zhaopeng; Own, Chung-Ming

    2016-01-01

    Location data are among the most widely used context data in context-aware and ubiquitous computing applications. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The most useful method is focused on the received signal strength and provides a set of signal transmission access points. However, compiling a manual measuring Received Signal Strength (RSS) fingerprint database involves high costs and thus is impractical in an online prediction environment. The system used in this study relied on the Gaussian process method, which is a nonparametric model that can be characterized completely by using the mean function and the covariance matrix. In addition, the Naive Bayes method was used to verify and simplify the computation of precise predictions. The authors conducted several experiments on simulated and real environments at Tianjin University. The experiments examined distinct data size, different kernels, and accuracy. The results showed that the proposed method not only can retain positioning accuracy but also can save computation time in location predictions. PMID:27483276

  3. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from the 2D image and the depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by the corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
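
    The decision-level fusion step can be sketched in a few lines, as below; the weights and distances are illustrative, and the CLDP features themselves are not implemented.

```python
"""Sketch: decision-level fusion of two modality distances (the CLDP-Gabor and
CLDP-Depth features themselves are not implemented here)."""
import numpy as np

def fuse_and_identify(dist_2d, dist_depth, w_2d=0.6, w_depth=0.4):
    """dist_2d / dist_depth: distances from the probe to each gallery identity."""
    total = w_2d * np.asarray(dist_2d) + w_depth * np.asarray(dist_depth)
    return int(np.argmin(total)), total            # identity with the smallest fused distance

if __name__ == "__main__":
    # Distances of one probe face to 5 enrolled identities in the two modalities.
    d2d = [0.8, 0.3, 0.9, 0.7, 0.6]
    ddepth = [0.7, 0.5, 0.2, 0.9, 0.8]
    identity, scores = fuse_and_identify(d2d, ddepth)
    print("assigned identity:", identity, "fused distances:", np.round(scores, 2))
```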

  4. Flow equation approach to one-body and many-body localization

    NASA Astrophysics Data System (ADS)

    Quito, Victor; Bhattacharjee, Paraj; Pekker, David; Refael, Gil

    2014-03-01

    We study one-body and many-body localization using the flow equation technique applied to spin-1/2 Hamiltonians. This technique, first introduced by Wegner, allows us to exactly diagonalize interacting systems by solving a set of first-order differential equations for the coupling constants. In addition, from the flow of individual operators we also compute physical properties, such as correlation and localization lengths, by looking at the flow of the probability distributions of couplings in the Hilbert space. As a first example, we analyze the one-body localization problem written in terms of spins, the disordered XY model with a random transverse field. We compare the results obtained in the flow equation approach with the diagonalization in the fermionic language. For the many-body problem, we investigate the physical properties of the disordered XXZ Hamiltonian with a random transverse field in the z-direction.

  5. The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment.

    PubMed

    Margaria, Davide; Falletti, Emanuela

    2016-01-26

    A novel cooperative integrity monitoring concept, called "local integrity", suitable to automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called "Local Protection Levels", taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting effectiveness and performance of the proposed approach.

  6. A transfer matrix approach to vibration localization in mistuned blade assemblies

    NASA Technical Reports Server (NTRS)

    Ottarson, Gisli; Pierre, Christophe

    1993-01-01

    A study of mode localization in mistuned bladed disks is performed using transfer matrices. The transfer matrix approach yields the free response of a general, mono-coupled, perfectly cyclic assembly in closed form. A mistuned structure is represented by random transfer matrices, and the expansion of these matrices in terms of the small mistuning parameter leads to the definition of a measure of sensitivity to mistuning. An approximation of the localization factor, the spatially averaged rate of exponential attenuation per blade-disk sector, is obtained through perturbation techniques in the limits of high and low sensitivity. The methodology is applied to a common model of a bladed disk and the results verified by Monte Carlo simulations. The easily calculated sensitivity measure may prove to be a valuable design tool due to its system-independent quantification of mistuning effects such as mode localization.
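
    The localization factor can be estimated by Monte Carlo simulation as the average exponential growth rate of a product of random 2×2 transfer matrices. The sketch below uses a generic mistuned mono-coupled spring-mass chain as a stand-in for the bladed-disk sector model; the stiffness values, mistuning level, and frequency are assumptions.

```python
"""Sketch: localization factor of a mistuned mono-coupled chain, estimated as the
Lyapunov exponent of a product of random 2x2 transfer matrices (a generic chain
model, not the specific bladed-disk model of the paper)."""
import numpy as np

def localization_factor(omega, n=100_000, k=1.0, kc=0.1, m=1.0, sigma=0.02, seed=0):
    """Average exponential attenuation rate per bay at frequency omega.

    Each bay j satisfies  x_{j+1} = a_j * x_j - x_{j-1}  with
    a_j = (k_j + 2*kc - m*omega**2) / kc  and mistuned stiffness k_j = k*(1 + eps_j).
    """
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for eps in rng.normal(0.0, sigma, size=n):
        a = (k * (1 + eps) + 2 * kc - m * omega**2) / kc
        v = np.array([a * v[0] - v[1], v[0]])      # apply transfer matrix [[a, -1], [1, 0]]
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)                 # accumulate growth, then renormalize
        v /= norm
    return log_growth / n

if __name__ == "__main__":
    omega = np.sqrt(1.0 + 2 * 0.1)                 # frequency inside the tuned passband
    for sigma in (0.0, 0.01, 0.05):
        print(f"sigma={sigma:.2f}  localization factor={localization_factor(omega, sigma=sigma):.4f}")
```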

  7. The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment

    PubMed Central

    Margaria, Davide; Falletti, Emanuela

    2016-01-01

    A novel cooperative integrity monitoring concept, called “local integrity”, suitable to automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called “Local Protection Levels”, taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting effectiveness and performance of the proposed approach. PMID:26821028

  8. Total System Performance Assessment - License Application Methods and Approach

    SciTech Connect

    J. McNeish

    2003-12-08

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  9. Total System Performance Assessment-License Application Methods and Approach

    SciTech Connect

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  10. A RF time domain approach for electric arcs detection and localization systems

    NASA Astrophysics Data System (ADS)

    Deacu, Daniela; Tamas, Razvan; Petrescu, Teodor; Paun, Mirel; Anchidin, Liliana; Algiu, Madalina

    2016-12-01

    In this paper we propose a new method for the detection and localization of electric arcs using two ultra-wide band (UWB) antennas together with time-domain data processing. The source of the electric arcs is localized by averaging the cross-correlation functions of the signals received on the two channels. From the resulting path-length difference to the antennas, the direction of the electric arcs is then found. The novelty of the method consists in the spatial averaging used to reduce the uncertainty caused by the finite sampling rate.
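
    A minimal sketch of the underlying time-difference-of-arrival idea, assuming two digitized channels: the cross-correlation of several signal segments is averaged (mirroring the averaging described above) and its peak lag gives the arrival-time difference, hence the path-length difference. The waveform model, sampling rate, and pulse shape are illustrative assumptions, not the paper's measurement setup.

        import numpy as np

        def tdoa_from_xcorr(ch1, ch2, fs, n_segments=8):
            """Estimate how much later the signal arrives on channel 2 than on channel 1
            by averaging the cross-correlation of several segments and picking the peak lag."""
            seg = len(ch1) // n_segments
            acc = None
            for i in range(n_segments):
                a = ch1[i*seg:(i+1)*seg]
                b = ch2[i*seg:(i+1)*seg]
                xc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
                acc = xc if acc is None else acc + xc    # average over segments
            lags = np.arange(-(seg - 1), seg)            # lag of ch1 relative to ch2
            return -lags[np.argmax(acc)] / fs            # positive = ch2 arrives later

        # Synthetic example: an impulsive "arc" signature arriving 12 samples later on channel 2.
        fs = 5e9                                         # 5 GS/s, illustrative sampling rate
        rng = np.random.default_rng(0)
        pulse = np.exp(-np.arange(64) / 6.0)
        ch1 = rng.normal(0, 0.05, 4096)
        ch2 = rng.normal(0, 0.05, 4096)
        for start in (100, 612, 1124, 1636, 2148, 2660):
            ch1[start:start+64] += pulse
            ch2[start+12:start+12+64] += pulse

        dt = tdoa_from_xcorr(ch1, ch2, fs)
        print("delay:", dt, "s  -> path-length difference:", dt * 3e8, "m")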

  11. Explaining Andean Potato Weevils in Relation to Local and Landscape Features: A Facilitated Ecoinformatics Approach

    PubMed Central

    Parsa, Soroush; Ccanto, Raúl; Olivera, Edgar; Scurrah, María; Alcázar, Jesús; Rosenheim, Jay A.

    2012-01-01

    Background Pest impact on an agricultural field is jointly influenced by local and landscape features. Rarely, however, are these features studied together. The present study applies a “facilitated ecoinformatics” approach to jointly screen many local and landscape features of suspected importance to Andean potato weevils (Premnotrypes spp.), the most serious pests of potatoes in the high Andes. Methodology/Principal Findings We generated a comprehensive list of predictors of weevil damage, including both local and landscape features deemed important by farmers and researchers. To test their importance, we assembled an observational dataset measuring these features across 138 randomly-selected potato fields in Huancavelica, Peru. Data for local features were generated primarily by participating farmers who were trained to maintain records of their management operations. An information theoretic approach to modeling the data resulted in 131,071 models, the best of which explained 40.2–46.4% of the observed variance in infestations. The best model considering both local and landscape features strongly outperformed the best models considering them in isolation. Multi-model inferences confirmed many, but not all of the expected patterns, and suggested gaps in local knowledge for Andean potato weevils. The most important predictors were the field's perimeter-to-area ratio, the number of nearby potato storage units, the amount of potatoes planted in close proximity to the field, and the number of insecticide treatments made early in the season. Conclusions/Significance Results underscored the need to refine the timing of insecticide applications and to explore adjustments in potato hilling as potential control tactics for Andean weevils. We believe our study illustrates the potential of ecoinformatics research to help streamline IPM learning in agricultural learning collaboratives. PMID:22693551

  12. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory for a thin plate is used. A simulation is conducted and its results show that the localization accuracy of beamforming depends on mode multiplicity, dispersion, velocity and array dimension. In order to reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method is introduced that combines plate wave theory and the wavelet packet transform, and a revised localization velocity is presented to reduce the effect of array size. The accuracy of rubbing localization based on standard beamforming and on the improved method of the present paper is compared in a rubbing test carried out on a rotating machinery test table. The results indicate that the improved method can localize the rub fault effectively.
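
    A bare-bones delay-and-sum beamforming sketch for point-source localization with a small sensor array, assuming a known, constant wave speed in the plate. The array geometry, wave speed, and synthetic signals are illustrative stand-ins, and the multi-mode and dispersion effects that the paper's pre-processing step is designed to handle are ignored.

        import numpy as np

        def delay_and_sum_map(signals, sensors, grid, c, fs):
            """Steer the array to every grid point: advance each channel by the expected
            travel time from that point, sum the aligned traces, and use the summed
            energy as the beamforming output."""
            n_s, n_t = signals.shape
            out = np.zeros(len(grid))
            for g, p in enumerate(grid):
                delays = np.linalg.norm(sensors - p, axis=1) / c      # seconds
                shifts = np.round(delays * fs).astype(int)
                aligned = np.zeros(n_t)
                for s in range(n_s):
                    aligned[:n_t - shifts[s]] += signals[s, shifts[s]:]
                out[g] = np.sum(aligned**2)
            return out

        # Synthetic test: 4 sensors on a plate, one impulsive source, c = 3000 m/s.
        fs, c = 1e6, 3000.0
        sensors = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4], [0.4, 0.4]])
        src = np.array([0.27, 0.13])
        t = np.arange(2000) / fs
        signals = np.zeros((4, len(t)))
        for s, pos in enumerate(sensors):
            t0 = np.linalg.norm(pos - src) / c
            signals[s] += np.exp(-((t - 3e-4 - t0) / 5e-6) ** 2)       # delayed Gaussian burst

        xs = np.linspace(0, 0.4, 41)
        grid = np.array([[x, y] for x in xs for y in xs])
        power = delay_and_sum_map(signals, sensors, grid, c, fs)
        print("estimated source:", grid[np.argmax(power)], " true:", src)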

  13. Local search methods based on variable focusing for random K -satisfiability

    NASA Astrophysics Data System (ADS)

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly, or with a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables that appear in several unsatisfied clauses. Consequences for algorithmic design are discussed.
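
    A compact sketch of a variable-focused local search for random 3-SAT in the spirit described above: at each step an unsatisfied clause is chosen, one of its variables is picked uniformly at random, and the flip is accepted either as a noisy random-walk move or greedily. The instance generator, noise parameter, and acceptance rule are illustrative simplifications, not the exact V-FMS algorithm of the paper.

        import random

        def random_3sat(n_vars, n_clauses, rng):
            """Random 3-SAT instance: each clause has 3 distinct variables with random signs."""
            return [[v * rng.choice((-1, 1)) for v in rng.sample(range(1, n_vars + 1), 3)]
                    for _ in range(n_clauses)]

        def unsat_clauses(clauses, assign):
            return [c for c in clauses
                    if not any((lit > 0) == assign[abs(lit)] for lit in c)]

        def focused_search(clauses, n_vars, noise=0.3, max_flips=100000, seed=0):
            """Variable-focused walk: repeatedly flip a variable from an unsatisfied clause."""
            rng = random.Random(seed)
            assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
            for step in range(max_flips):
                unsat = unsat_clauses(clauses, assign)
                if not unsat:
                    return assign, step                  # all clauses satisfied
                clause = rng.choice(unsat)               # focus on an unsatisfied clause
                var = abs(rng.choice(clause))            # pick one of its variables uniformly
                if rng.random() < noise:
                    assign[var] = not assign[var]        # noisy (random-walk) move
                else:
                    before = len(unsat)
                    assign[var] = not assign[var]
                    if len(unsat_clauses(clauses, assign)) > before:
                        assign[var] = not assign[var]    # greedy: undo worsening flips
            return None, max_flips

        rng = random.Random(1)
        clauses = random_3sat(100, 350, rng)             # clause-to-variable ratio 3.5
        solution, flips = focused_search(clauses, 100)
        print("solved" if solution else "not solved", "after", flips, "flips")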

  14. Local treatment of electron excitations in the EOM-CCSD method

    NASA Astrophysics Data System (ADS)

    Korona, Tatiana; Werner, Hans-Joachim

    2003-02-01

    The Equation-of-Motion coupled cluster method restricted to single and double excitations (EOM-CCSD) and singlet excited states is formulated in a basis of nonorthogonal local orbitals. In the calculation of excited states only electron promotions from localized molecular orbitals into subspaces (excitation domains) of the local basis are allowed, which strongly reduces the number of EOM-CCSD amplitudes to be optimized. Furthermore, double excitations are neglected unless the excitation domains of the corresponding localized occupied orbitals are close to each other. Unlike in the local methods for the ground state, the excitation domains cannot be simply restricted to the atomic orbitals that are spatially close to the localized occupied orbitals. In the present paper the choice of the excitation domains is based on the analysis of wave functions computed by more approximate (and cheaper) methods like, e.g., configuration-interaction singles. The effect of various local approximations is investigated in detail, and it is found that a balanced description of the local configuration spaces describing the ground and excited states is essential to obtain accurate results. Using a single set of parameters for a given basis set, test calculations with the local EOM-CCSD method were performed for 14 molecules and 49 electronically excited states. The excitation energies computed by the local EOM-CCSD method reproduce the conventional EOM-CCSD excitation energies with an average error of 0.06 eV.

  15. Deep Voting: A Robust Approach Toward Nucleus Localization in Microscopy Images.

    PubMed

    Xie, Yuanpu; Kong, Xiangfei; Xing, Fuyong; Liu, Fujun; Su, Hai; Yang, Lin

    2015-10-01

    Robust and accurate nucleus localization in microscopy images can provide crucial clues for accurate computer-aided diagnosis. In this paper, we propose a convolutional neural network (CNN) based Hough voting method to localize nucleus centroids under heavy clutter and morphologic variation in microscopy images. Our method, which we name deep voting, mainly consists of two steps. (1) Given an input image, our method assigns each local patch several pairs of voting offset vectors, which indicate the positions it votes for, and corresponding voting confidences (used to weight each vote); the model can thus be viewed as an implicit Hough-voting codebook. (2) We collect the weighted votes from all the testing patches and compute the final voting density map in a way similar to Parzen-window estimation. The final nucleus positions are identified by searching for the local maxima of the density map. Our method requires only minimal annotation effort (a single click near each nucleus center). Experimental results on neuroendocrine tumor (NET) microscopy images show the proposed method to be state-of-the-art.
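
    A small sketch of the vote-accumulation stage only: given hypothetical per-patch offset vectors and confidences (which in the paper come from the CNN), the weighted votes are splatted into a density map with a Gaussian kernel, Parzen-window style, and nucleus candidates are read off as local maxima. All inputs here are synthetic placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        def vote_density(shape, patch_centers, offsets, confidences, sigma=3.0):
            """Accumulate weighted votes at (patch_center + offset) and smooth them
            into a Parzen-window-like density map."""
            acc = np.zeros(shape)
            for (py, px), offs, confs in zip(patch_centers, offsets, confidences):
                for (dy, dx), w in zip(offs, confs):
                    y, x = int(round(py + dy)), int(round(px + dx))
                    if 0 <= y < shape[0] and 0 <= x < shape[1]:
                        acc[y, x] += w
            return gaussian_filter(acc, sigma)

        def local_maxima(density, min_value, size=7):
            """Candidate nucleus centers = local maxima of the density above a threshold."""
            peaks = (density == maximum_filter(density, size)) & (density > min_value)
            return np.argwhere(peaks)

        # Synthetic example: patches around two "nuclei" vote toward their centers.
        rng = np.random.default_rng(0)
        true_centers = [(30, 40), (70, 75)]
        centers, offsets, confs = [], [], []
        for cy, cx in true_centers:
            for _ in range(50):
                py, px = cy + rng.integers(-10, 11), cx + rng.integers(-10, 11)
                centers.append((py, px))
                offsets.append([(cy - py + rng.normal(0, 1), cx - px + rng.normal(0, 1))])
                confs.append([1.0])

        density = vote_density((100, 100), centers, offsets, confs)
        print("detected centers:\n", local_maxima(density, min_value=0.5 * density.max()))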

  16. A field theoretical approach to the quasi-continuum method

    NASA Astrophysics Data System (ADS)

    Iyer, Mrinal; Gavini, Vikram

    2011-08-01

    The quasi-continuum method has provided many insights into the behavior of lattice defects in the past decade. However, recent numerical analysis suggests that the approximations introduced in various formulations of the quasi-continuum method lead to inconsistencies—namely, appearance of ghost forces or residual forces, non-conservative nature of approximate forces, etc.—which affect the numerical accuracy and stability of the method. In this work, we identify the source of these errors to be the incompatibility of using quadrature rules, which is a local notion, on a non-local representation of energy. We eliminate these errors by first reformulating the extended interatomic interactions into a local variational problem that describes the energy of a system via potential fields. We subsequently introduce the quasi-continuum reduction of these potential fields using an adaptive finite-element discretization of the formulation. We demonstrate that the present formulation resolves the inconsistencies present in previous formulations of the quasi-continuum method, and show using numerical examples the remarkable improvement in the accuracy of solutions. Further, this field theoretic formulation of quasi-continuum method makes mathematical analysis of the method more amenable using functional analysis and homogenization theories.

  17. A new optimization approach for the calibration of an ultrasound probe using a 3D optical localizer.

    PubMed

    Dardenne, G; Cano, J D Gil; Hamitouche, C; Stindel, E; Roux, C

    2007-01-01

    This paper describes a fast procedure for the calibration of an ultrasound (US) probe using a 3D optical localizer. This calibration step allows us to obtain the 3D position of any point located on the 2D ultrasound (US) image. To carry out this procedure correctly, a phantom with known geometric properties is probed and these geometric features are identified in the US images. A segmentation step is applied to automatically extract the needed information from the US images, and an optimization approach is then performed to find the optimal calibration parameters. A new optimization method to estimate the calibration parameters of an ultrasound (US) probe is thus developed.

  18. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N{sub 4}

    SciTech Connect

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V. E-mail: candler@aem.umn.edu; Truhlar, Donald G. E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N{sub 4}. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
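
    A toy one-dimensional sketch of the local interpolating moving least squares idea that the paper builds on: for each evaluation point, a low-order polynomial is fit to the data with distance-based weights that diverge at the data points, so the fit interpolates them. The weight function, polynomial degree, and test data are illustrative choices, not the paper's six-dimensional, permutationally invariant formulation.

        import numpy as np

        def imls_1d(x_eval, x_data, y_data, degree=2, eps=1e-12):
            """Interpolating moving least squares: weighted polynomial fit around each
            evaluation point, with weights w = 1/(d^2 + eps) so data points are matched."""
            y_eval = np.zeros_like(x_eval)
            for i, x0 in enumerate(x_eval):
                w = 1.0 / ((x_data - x0) ** 2 + eps)     # singular (interpolating) weights
                # Weighted least-squares fit of a local polynomial in (x - x0).
                A = np.vander(x_data - x0, degree + 1, increasing=True)
                W = np.diag(w)
                coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_data)
                y_eval[i] = coef[0]                      # polynomial value at x0
            return y_eval

        # Example: recover a Morse-like 1D curve from scattered samples.
        x_data = np.sort(np.random.default_rng(0).uniform(0.8, 4.0, 25))
        y_data = (1 - np.exp(-(x_data - 1.5))) ** 2
        x_eval = np.linspace(0.9, 3.9, 200)
        y_fit = imls_1d(x_eval, x_data, y_data, degree=2)
        print("max abs error on a smooth test curve:",
              np.max(np.abs(y_fit - (1 - np.exp(-(x_eval - 1.5))) ** 2)))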

  19. Global/local methods research using a common structural analysis framework

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.

    1991-01-01

    Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  20. A Local Discontinuous Galerkin Method for KdV-type Equations

    DTIC Science & Technology

    2001-06-01

    convection alone. This is a big advantage of the scheme over the traditional "mixed methods", and it is the reason that the scheme is termed local... same order of accuracy, thus matching the advantage of traditional "mixed methods" on this. The purpose of this paper is to develop a similar local

  1. A Coproduction Community Based Approach to Reducing Smoking Prevalence in a Local Community Setting

    PubMed Central

    McGeechan, G. J.; Woodall, D.; Anderson, L.; Wilson, L.; O'Neill, G.; Newbury-Birch, D.

    2016-01-01

    Research highlights that asset-based community development where local residents become equal partners in service development may help promote health and well-being. This paper outlines baseline results of a coproduction evaluation of an asset-based approach to improving health and well-being within a small community through promoting tobacco control. Local residents were recruited and trained as community researchers to deliver a smoking prevalence survey within their local community and became local health champions, promoting health and well-being. The results of the survey will be used to inform health promotion activities within the community. The local smoking prevalence was higher than the regional and national averages. Half of the households surveyed had at least one smoker, and 63.1% of children lived in a smoking household. Nonsmokers reported higher well-being than smokers; however, the differences were not significant. Whilst the community has a high smoking prevalence, more than half of the smokers surveyed would consider quitting. Providing smoking cessation advice in GP surgeries may help reduce smoking prevalence in this community. Work in the area could be done to reduce children's exposure to smoking in the home. PMID:27446219

  2. Assessment of Social Vulnerability Identification at Local Level around Merapi Volcano - A Self Organizing Map Approach

    NASA Astrophysics Data System (ADS)

    Lee, S.; Maharani, Y. N.; Ki, S. J.

    2015-12-01

    The application of the Self-Organizing Map (SOM) to analyze social vulnerability and to recognize the resilience within sites is a challenging task. The aim of this study is to propose a computational method to identify sites according to their similarity and to determine the most relevant variables that characterize social vulnerability in each cluster. For this purpose, SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of social vulnerability of the identified sites can be fully understood. In this study, the social vulnerability dataset is constructed from 17 variables, i.e., 12 independent variables representing socio-economic concepts and 5 dependent variables representing the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both independent and dependent variables, we can identify whether the social vulnerability is reflected in the actual situation, in this case, the 2010 Merapi eruption. However, social vulnerability analysis in local communities involves a number of variables that represent their socio-economic condition, and some of the variables employed in this study might be more or less redundant. Therefore, SOM is used to reduce the redundant variable(s) by selecting representative variables using the component planes and the correlation coefficients between variables, in order to find an effective sample size. The selected dataset is then clustered according to similarity. Finally, this approach can produce reliable estimates of clustering, recognize the most significant variables, and could be useful for social vulnerability assessment, especially for stakeholders acting as decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering
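
    A minimal self-organizing map training loop in plain NumPy, assuming a standardized feature matrix as input; the grid size, learning rate, neighbourhood width, and random data are illustrative placeholders for the 17-variable survey data described above, and no claim is made that this matches the authors' SOM configuration.

        import numpy as np

        def train_som(data, grid=(6, 6), epochs=2000, lr0=0.5, sigma0=2.0, seed=0):
            """Classic SOM: find the best-matching unit for a random sample, then pull
            the BMU and its grid neighbours toward that sample with a Gaussian kernel."""
            rng = np.random.default_rng(seed)
            coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
            weights = rng.normal(size=(len(coords), data.shape[1]))
            for t in range(epochs):
                x = data[rng.integers(len(data))]
                bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
                frac = t / epochs
                lr = lr0 * (1 - frac)
                sigma = sigma0 * (1 - frac) + 0.5
                dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
                h = np.exp(-dist2 / (2 * sigma ** 2))                  # neighbourhood kernel
                weights += lr * h[:, None] * (x - weights)
            return weights, coords

        # Illustrative use: standardized random "survey" data with 17 variables.
        rng = np.random.default_rng(1)
        data = rng.normal(size=(138, 17))
        data = (data - data.mean(0)) / data.std(0)
        weights, coords = train_som(data)
        clusters = np.array([np.argmin(np.linalg.norm(weights - x, axis=1)) for x in data])
        print("sites per map unit:", np.bincount(clusters, minlength=len(weights)))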

  3. Exploration of barriers and facilitators to publishing local public health findings: A mixed methods protocol

    PubMed Central

    Smith, Selina A.; Webb, Nancy C.; Blumenthal, Daniel S.; Willcox, Bobbie; Ballance, Darra; Kinard, Faith; Gates, Madison L.

    2016-01-01

    Background Worldwide, the US accounts for a large proportion of journals related to public health. Although the American Public Health Association (APHA) includes 54 affiliated regional and state associations, little is known about their capacity to support public health scholarship. The aim of this study is to assess barriers and facilitators to operation of state journals for the dissemination of local public health research and practices. Methods A mixed methods approach will be used to complete the 12-month study. Affiliate websites will be accessed through the APHA membership portal to evaluate organizational infrastructure and ascertain the presence/absence of a journal. The leader of each affiliate will be contacted via email containing a link to a 12-question on-line survey to collect his/her perceptions of scholarly journals and the publication of local health data. To determine barriers and facilitators to publication of local public health findings, 30-minute semi-structured telephone interviews will focus on the infrastructure of the association, perceptions of the leader about the journal (if in place), and its operation. Anticipated Results We anticipate that 54 affiliate websites will be reviewed to complete the extraction checklist, that 74% of affiliate leaders will respond to the survey, and that 11 semi-structured interviews will be conducted. A limited number of state/regional public health associations will operate journals and a small percentage of those without journals may express an interest in implementing them. Barriers to operation of journals may include lack of resources (i.e., personnel, funding), and low prioritization of publication of state and local public health findings. Facilitators may include strong affiliate-academic relationships, affiliate leadership with experience in publications, and affiliate relationships with state and local departments of health. Conclusions The research proposed in this protocol may stimulate other

  4. Multidimensional Programming Methods for Energy Facility Siting: Alternative Approaches

    NASA Technical Reports Server (NTRS)

    Solomon, B. D.; Haynes, K. E.

    1982-01-01

    The use of multidimensional optimization methods in solving power plant siting problems, which are characterized by several conflicting, noncommensurable objectives, is addressed. After a discussion of data requirements and exclusionary site screening methods for bounding the decision space, classes of multiobjective and goal programming models are discussed in the context of finite site selection. Advantages and limitations of these approaches are highlighted, and the linkage of multidimensional methods with the subjective, behavioral components of the power plant siting process is emphasized.

  5. Localizing true brain interactions from EEG and MEG data with subspace methods and modified beamformers.

    PubMed

    Shahbazi Avarvand, Forooz; Ewald, Arne; Nolte, Guido

    2012-01-01

    To address the problem of mixing in EEG or MEG connectivity analysis we exploit that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method "RAP-MUSIC" to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel is determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas.
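
    A schematic sketch of the first idea above: estimate the sensor cross-spectrum at a frequency of interest, keep only its imaginary part (which suppresses non-interacting, zero-lag contributions), and take its dominant singular vectors as the signal subspace that a scanning method such as RAP-MUSIC would then search with a forward model. The data, frequency, and subspace dimension are synthetic placeholders, and the actual RAP-MUSIC scan (which requires a leadfield) is omitted.

        import numpy as np

        def imag_cross_spectrum_subspace(data, fs, f0, n_interacting=2, nfft=256):
            """Cross-spectrum at f0 via Welch-style segment averaging; return the dominant
            singular vectors of its imaginary part as the interacting-source subspace."""
            n_ch, n_t = data.shape
            n_seg = n_t // nfft
            k = int(round(f0 * nfft / fs))               # FFT bin closest to f0
            csd = np.zeros((n_ch, n_ch), complex)
            for s in range(n_seg):
                seg = data[:, s * nfft:(s + 1) * nfft] * np.hanning(nfft)
                X = np.fft.rfft(seg, axis=1)[:, k]
                csd += np.outer(X, X.conj())
            csd /= n_seg
            u, svals, _ = np.linalg.svd(np.imag(csd))
            return u[:, :n_interacting], svals

        # Synthetic example: two sources with a 90-degree phase lag mixed into 8 channels;
        # an antisymmetric Im(CSD) of rank 2 reflects the "even number of sources" remark.
        rng = np.random.default_rng(0)
        fs, f0, n_t = 200.0, 10.0, 200 * 60
        t = np.arange(n_t) / fs
        s1 = np.sin(2 * np.pi * f0 * t)
        s2 = np.sin(2 * np.pi * f0 * t + np.pi / 2)
        mix = rng.normal(size=(8, 2))
        data = mix @ np.vstack([s1, s2]) + 0.5 * rng.normal(size=(8, n_t))

        subspace, svals = imag_cross_spectrum_subspace(data, fs, f0)
        print("singular values of Im(CSD):", np.round(svals, 3))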

  6. Mixed meshless local Petrov-Galerkin collocation method for modeling of material discontinuity

    NASA Astrophysics Data System (ADS)

    Jalušić, Boris; Sorić, Jurica; Jarak, Tomislav

    2017-01-01

    A mixed meshless local Petrov-Galerkin (MLPG) collocation method is proposed for solving the two-dimensional boundary value problem of heterogeneous structures. The heterogeneous structures are defined by partitioning the total material domain into subdomains with different linear-elastic isotropic properties, which define homogeneous materials. The discretization and approximation of the unknown field variables is done for each homogeneous material independently, wherein the interface between the homogeneous materials is discretized with overlapping nodes. For the approximation, the moving least squares method with an imposed interpolation condition is utilized. The solution for the entire heterogeneous structure is obtained by enforcing displacement continuity and traction reciprocity conditions at the nodes representing the interface boundary. The accuracy and numerical efficiency of the proposed mixed MLPG collocation method are demonstrated by numerical examples. The obtained results are compared with a standard fully displacement-based (primal) meshless approach as well as with available analytical and numerical solutions. Excellent agreement of the solutions is exhibited, and a more robust, superior and stable modeling of material discontinuity is achieved using the mixed method.

  7. A finite Reynolds number approach for the prediction of boundary layer receptivity in localized regions

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Street, Craig L.

    1991-01-01

    Previous theoretical work on the boundary layer receptivity problem has utilized large Reynolds number asymptotic theories, thus being limited to a narrow part of the frequency - Reynolds number domain. An alternative approach is presented for the prediction of localized instability generation which has a general applicability, and also accounts for finite Reynolds number effects. This approach is illustrated for the case of Tollmien-Schlichting wave generation in a Blasius boundary layer due to the interaction of a free stream acoustic wave with a region of short scale variation in the surface boundary condition. The specific types of wall inhomogeneities studied are: regions of short scale variations in wall suction, wall admittance, and wall geometry (roughness). Extensive comparison is made between the results of the finite Reynolds number approach and previous asymptotic predictions, which also suggests an alternative way of using the latter at Reynolds numbers of interest in practice.

  8. Millimeter-Wave Localizers for Aircraft-to-Aircraft Approach Navigation

    NASA Technical Reports Server (NTRS)

    Tang, Adrian J.

    2013-01-01

    Aerial refueling technology for both manned and unmanned aircraft is critical for operations where extended aircraft flight time is required. Existing refueling assets are typically manned aircraft, which couple to a second aircraft through the use of a refueling boom. Alignment and mating of the two aircraft continue to rely on human control with the use of high-resolution cameras. With the recent advances in unmanned aircraft, it would be highly advantageous to remove or reduce human control from the refueling process, reducing the amount of remote mission management and enabling new operational scenarios. Existing aerial refueling uses a camera, making it non-autonomous and prone to human error. Existing commercial localizer technology has proven robust and reliable, but it is not suited for aircraft-to-aircraft approaches like those in aerial refueling scenarios, since the resolution is too coarse (approximately one meter). A localizer approach system for aircraft-to-aircraft docking can be constructed using the same modulation with a millimeter-wave carrier to provide high resolution. One technology used to remotely align commercial aircraft on approach to a runway is the ILS (instrument landing system). ILS has been in service within the U.S. for almost 50 years. In a commercial ILS, two partially overlapping beams of UHF (109 to 126 MHz) are broadcast from an antenna array so that their overlapping region defines the centerline of the runway. This is called a localizer system and is responsible for horizontal alignment of the approach. One beam is modulated with a 150-Hz tone, the other with a 90-Hz tone. Through comparison of the modulation depths of both tones, an autopilot system aligns the approaching aircraft with the runway centerline. A similar system called a glide slope (GS) exists in the 320-to-330-MHz band for vertical alignment of the approach. While this technology has been proven reliable for millions of commercial flights annually, its UHF nature limits
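
    A toy illustration of the tone-comparison principle described above: a localizer-style signal amplitude-modulated by 90 Hz and 150 Hz tones is envelope-detected, the depth of each modulation is measured, and their difference (the difference in depth of modulation, DDM) indicates which side of the centerline the receiver is on. The carrier frequency, modulation depths, and sign convention are illustrative assumptions.

        import numpy as np

        def ddm(signal, fs, f_lo=90.0, f_hi=150.0):
            """Difference in depth of modulation between the 90 Hz and 150 Hz tones
            of an AM localizer-style signal (crude envelope detection + single-bin DFT)."""
            envelope = np.abs(signal)                    # crude AM envelope detector
            envelope -= envelope.mean()
            t = np.arange(len(signal)) / fs
            def tone_amplitude(f):
                return 2 * abs(np.mean(envelope * np.exp(-2j * np.pi * f * t)))
            carrier_level = np.abs(signal).mean()
            return (tone_amplitude(f_hi) - tone_amplitude(f_lo)) / carrier_level

        # Synthetic receiver slightly on the "150 Hz side" of the beam pair.
        fs, dur, fc = 200e3, 0.5, 20e3                   # low-frequency stand-in for the carrier
        t = np.arange(int(fs * dur)) / fs
        m90, m150 = 0.18, 0.22                           # unequal depths = off-centerline
        sig = (1 + m90 * np.sin(2*np.pi*90*t) + m150 * np.sin(2*np.pi*150*t)) * np.sin(2*np.pi*fc*t)

        print("DDM (positive -> deviation toward the 150 Hz beam):", round(ddm(sig, fs), 3))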

  9. The regional approach and regional studies method in the process of geography teaching

    NASA Astrophysics Data System (ADS)

    Dermendzhieva, Stela; Doikov, Martin

    2017-03-01

    We define the regional approach as a way of relating the global trends of development of the "Society-man-nature" system to the local, differentiating level of knowledge. These interactions interlace under the influence of the character of Geography as a science, of education, and of its approaches, goals and teaching methods. Global, national and local development differentiates into three concentric circles at the level of knowledge. The regional approach is conceived as a modern, complex and effective mechanism for young people, through which knowledge develops within a regional historical and cultural perspective, and self-consciousness of socio-economic and cultural integration is formed as part of the historical-geographical image of the native land. In this way an attitude to the native land is formed as a connecting construct between patriotism towards the motherland and the same in a global perspective. The possibility for integration and cooperation of the educative geographical content with all the local historical-geographical, regional, profession-orienting, artistic, municipal and district institutions is outlined. Contemporary geographical education appears to be a powerful and indispensable mechanism for the organization of the human sciences, while the regional approach and the application of the regional studies method stimulate and motivate the development and realization of optimal capacities for direct connection with the local structures and environments.

  10. Immuno- and correlative light microscopy-electron tomography methods for 3D protein localization in yeast.

    PubMed

    Mari, Muriel; Geerts, Willie J C; Reggiori, Fulvio

    2014-10-01

    Compartmentalization of eukaryotic cells is created and maintained through membrane rearrangements that include membrane transport and organelle biogenesis. Three-dimensional reconstructions with nanoscale resolution in combination with protein localization are essential for an accurate molecular dissection of these processes. The yeast Saccharomyces cerevisiae is a key model system for identifying genes and characterizing pathways essential for the organization of cellular ultrastructures. Electron microscopy studies of yeast, however, have been hampered by the presence of a cell wall that obstructs penetration of resins and cryoprotectants, and by the protein dense cytoplasm, which obscures the membrane details. Here we present an immuno-electron tomography (IET) method, which allows the determination of protein distribution patterns on reconstructed organelles from yeast. In addition, we extend this IET approach into a correlative light microscopy-electron tomography procedure where structures positive for a specific protein localized through a fluorescent signal are resolved in 3D. These new investigative tools for yeast will help to advance our understanding of the endomembrane system organization in eukaryotic cells.

  11. Methods for measuring denitrification: Diverse approaches to a difficult problem

    USGS Publications Warehouse

    Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.

    2006-01-01

    Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

  12. New approaches for automatic three-dimensional source localization of acoustic emissions--Applications to concrete specimens.

    PubMed

    Kurz, Jochen H

    2015-12-01

    The task of locating a source in space by measuring travel time differences of elastic or electromagnetic waves from the source to several sensors arises in a variety of fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given, with the focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy, extended by a statistical approach, is described which allows a stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen leading to travel paths of up to 1 m and therefore to noisy data with errors in the determined onsets. The adaptation of the algorithms from geodesy to the localization of sources of elastic waves offers new possibilities concerning the stability, automation and performance of localization results. Fracture processes can thus be assessed more accurately.
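
    A simple iterative least-squares sketch of the generic travel-time localization problem described above: given sensor positions, a known wave speed, and picked onset times, the source coordinates and origin time are found by minimizing the misfit between observed and predicted arrival times. The geometry, wave speed, and noise level are illustrative, and this is a generic nonlinear solver rather than the geodetic direct solver of the paper.

        import numpy as np
        from scipy.optimize import least_squares

        def locate_source(sensors, t_obs, c, x0):
            """Solve for the source position (x, y, z) and origin time t0 from onset times
            by nonlinear least squares on t_obs - (t0 + |sensor - source| / c)."""
            def residuals(p):
                src, t0 = p[:3], p[3]
                return t_obs - (t0 + np.linalg.norm(sensors - src, axis=1) / c)
            return least_squares(residuals, x0).x

        # Synthetic test: 8 sensors on a 1 m concrete cube, P-wave speed ~4000 m/s.
        rng = np.random.default_rng(0)
        sensors = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
        true_src, true_t0, c = np.array([0.62, 0.31, 0.74]), 1.0e-4, 4000.0
        t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / c
        t_obs += rng.normal(0, 2e-6, len(t_obs))         # 2 microsecond onset-picking error

        est = locate_source(sensors, t_obs, c, x0=np.array([0.5, 0.5, 0.5, 0.0]))
        print("estimated source [m]:", np.round(est[:3], 3), " origin time [s]:", est[3])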

  13. Mapping Patterns of Local Recurrence After Pancreaticoduodenectomy for Pancreatic Adenocarcinoma: A New Approach to Adjuvant Radiation Field Design

    SciTech Connect

    Dholakia, Avani S.; Kumar, Rachit; Raman, Siva P.; Moore, Joseph A.; Ellsworth, Susannah; McNutt, Todd; Laheru, Daniel A.; Jaffee, Elizabeth; Cameron, John L.; Tran, Phuoc T.; Hobbs, Robert F.; Wolfgang, Christopher L.; and others

    2013-12-01

    Purpose: To generate a map of local recurrences after pancreaticoduodenectomy (PD) for patients with resectable pancreatic ductal adenocarcinoma (PDA) and to model an adjuvant radiation therapy planning treatment volume (PTV) that encompasses a majority of local recurrences. Methods and Materials: Consecutive patients with resectable PDA undergoing PD and 1 or more computed tomography (CT) scans more than 60 days after PD at our institution were reviewed. Patients were divided into 3 groups: no adjuvant treatment (NA), chemotherapy alone (CTA), or chemoradiation (CRT). Cross-sectional scans were centrally reviewed, and local recurrences were plotted to scale with respect to the celiac axis (CA), superior mesenteric artery (SMA), and renal veins on 1 CT scan of a template post-PD patient. An adjuvant clinical treatment volume comprising 90% of local failures based on standard expansions of the CA and SMA was created and simulated on 3 post-PD CT scans to assess the feasibility of this planning approach. Results: Of the 202 patients in the study, 40 (20%), 34 (17%), and 128 (63%) received NA, CTA, and CRT adjuvant therapy, respectively. The rate of margin-positive resections was greater in CRT patients than in CTA patients (28% vs 9%, P=.023). Local recurrence occurred in 90 of the 202 patients overall (45%) and in 19 (48%), 22 (65%), and 49 (38%) in the NA, CTA, and CRT groups, respectively. Ninety percent of recurrences were within a 3.0-cm right-lateral, 2.0-cm left-lateral, 1.5-cm anterior, 1.0-cm posterior, 1.0-cm superior, and 2.0-cm inferior expansion of the combined CA and SMA contours. Three simulated radiation treatment plans using these expansions with adjustments to avoid nearby structures were created to demonstrate the use of this treatment volume. Conclusions: Modified PTVs targeting high-risk areas may improve local control while minimizing toxicities, allowing dose escalation with intensity-modulated or stereotactic body radiation therapy.

  14. Does super-resolution fluorescence microscopy obsolete previous microscopic approaches to protein co-localization?

    PubMed

    MacDonald, Laura; Baldini, Giulia; Storrie, Brian

    2015-01-01

    Conventional microscopy techniques, namely, the confocal microscope or deconvolution processes, are resolution limited to approximately 200-250 nm by the diffraction properties of light as developed by Ernst Abbe in 1873. This diffraction limit is appreciably above the size of most multi-protein complexes, which are typically 20-50 nm in diameter. In the mid-2000s, biophysicists moved beyond the diffraction barrier by structuring the illumination pattern and then applying mathematical principles and algorithms to allow a resolution of approximately 100 nm, sufficient to address protein subcellular co-localization questions. This "breaking" of the diffraction barrier, affording resolution beyond 200 nm, is termed super-resolution microscopy. More recent approaches include single-molecule localization (such as photoactivated localization microscopy (PALM)/stochastic optical reconstruction microscopy (STORM)) and point spread function engineering (such as stimulated emission depletion (STED) microscopy). In this review, we explain basic principles behind currently commercialized super-resolution setups and address advantages and considerations in applying these techniques to protein co-localization in biological systems.

  15. Service Areas of Local Urban Green Spaces: AN Explorative Approach in Arroios, Lisbon

    NASA Astrophysics Data System (ADS)

    Figueiredo, R.; Gonçalves, A. B.; Ramos, I. L.

    2016-09-01

    The identification of the service areas of urban green spaces, and of areas lacking such spaces, is increasingly necessary within city planning and management, as it translates into important indicators for the assessment of quality of life. In this setting, it is important to evaluate the attractiveness and accessibility dynamics through a set of attributes, taking into account the local reality of the territory under study. This work presents an operational methodology associated with these dynamics in local urban green spaces, assisting in the planning and management of this type of facility. The methodology is based first on questionnaire surveys and then on network analysis, processing spatial data in a Geographic Information Systems (GIS) environment. In the case study, two local green spaces in Lisbon were selected, following an explorative, local-perspective approach. Through field data, it was possible to identify service areas for both spaces and to compare the results with references in the literature. It was also possible to recognise areas lacking these spaces. The difficulty of evaluating the dynamics of real individuals in their choices of urban green spaces, and of the respective routes, is a major challenge for the application of the methodology. In this sense it becomes imperative to develop different instruments and adapt them to other types of urban green spaces.

  16. Global, national, and local approaches to mental health: examples from India.

    PubMed

    Weiss, M G; Isaac, M; Parkar, S R; Chowdhury, A N; Raguram, R

    2001-01-01

    Neuropsychiatric disorders and suicide amount to 12.7% of the global burden of disease and related conditions (GBD) according to World Health Organization (WHO) estimates for 1999, and recognition of the enormous component of mental illness in the GBD has attracted unprecedented attention in the field of international health. Focusing on low- and middle-income countries with high adult mortality, this article discusses essential functions of international agencies concerned with mental health. A review of the history and development of national mental health policy in India follows, and local case studies consider the approach to planning in a rural mental health programme in West Bengal and the experience in an established urban mental health programme in a low-income community of Mumbai. Local programmes must be attentive to the needs of the communities they serve, and they require the support of global and national policy for resources and the conceptual tools to formulate strategies to meet those needs. National programmes retain major responsibilities for the health of their country's population: they are the portals through which global and local interests, ideas, and policies formally interact. International priorities should be responsive to a wide range of national interests, which in turn should be sensitive to diverse local experiences. Mental health actions thereby benefit from the synergy of informed and effective policy at each level.

  17. Modified approach for extraperitoneal laparoscopic staging for locally advanced cervical cancer.

    PubMed

    Gil-Moreno, A; Maffuz, A; Díaz-Feijoo, B; Puig, O; Martínez-Palones, J M; Pérez, A; García, A; Xercavins, J

    2007-12-01

    To describe a modified approach to the technique of laparoscopic extraperitoneal aortic and common iliac lymph node dissection for staging locally advanced cervical cancer. Retrospective, nonrandomized clinical study (Canadian Task Force classification II-2), set in an acute-care teaching hospital. Thirty-six patients with locally advanced cervical cancer underwent laparoscopic surgical staging via an extraperitoneal approach with the conventional or the modified technique from August 2001 through September 2004. Clinical outcomes were compared between 23 patients operated on with the conventional technique, using the index finger for first trocar entry, and 12 patients operated on with the modified technique, using direct trocar entry. One patient was excluded due to peritoneal carcinomatosis. Technique, baseline characteristics, histopathologic variables and surgical outcome were recorded. There were no significant differences in the patients' baseline characteristics between the conventional and modified techniques. With the proposed modified technique, we obtained a reduced surgical procedure duration and reduced blood loss. The proposed modified surgical technique offers several advantages: it is an easier approach because the parietal pelvic peritoneum is elastic, which helps to avoid its disruption at the time of trocar insertion; the incision is shorter; no CO2 leak occurred through the trocar orifice; and wound suturing is fast and simple.

  18. Stress and strain localization in stretched collagenous tissues via a multiscale modelling approach.

    PubMed

    Marino, Michele; Vairo, Giuseppe

    2014-01-01

    Mechanobiology of cells in soft collagenous tissues is highly affected by both tissue response at the macroscale and stress/strain localization mechanisms due to features at lower scales. In this paper, the macroscale mechanical behaviour of soft collagenous tissues is modelled by a three-level multiscale approach, based on a multi-step homogenisation technique from nanoscale up to the macroscale. Nanoscale effects, related to both intermolecular cross-links and collagen mechanics, are accounted for, together with geometric nonlinearities at the microscale. Moreover, an effective submodelling procedure is conceived in order to evaluate the local stress and strain fields at the microscale, which is around and within cells. Numerical results, obtained by using an incremental finite element formulation and addressing stretched tendinous tissues, prove consistency and accuracy of the model at both macroscale and microscale, confirming also the effectiveness of the multiscale modelling concept for successfully analysing physiopathological processes in biological tissues.

  19. Vibration localization in mono- and bi-coupled bladed disks - A transfer matrix approach

    NASA Technical Reports Server (NTRS)

    Ottarsson, Gisli; Pierre, Christophe

    1993-01-01

    A transfer matrix approach to the analysis of the dynamics of mistuned bladed disks is presented. The study focuses on mono-coupled systems, in which each blade is coupled to its two neighboring blades, and bi-coupled systems, where each blade is coupled to its four nearest neighbors. Transfer matrices yield the free dynamics - both the characteristic free wave and the normal modes - in closed form for the tuned assemblies. Mistuned assemblies are represented by random transfer matrices and an examination of the effect of mistuning on harmonic wave propagation yields the localization factor - the average rate of spatial wave amplitude decay per blade - in the mono-coupled assembly. Based on a comparison of the wave propagation characteristics of the mono- and bi-coupled assemblies, important conclusions are drawn about the effect of the additional coupling coordinate on the sensitivity to mistuning and the strength of mode localization predicted by a mono-coupled analysis.

  20. Model-independent mean-field theory as a local method for approximate propagation of information.

    PubMed

    Haft, M; Hofmann, R; Tresp, V

    1999-02-01

    We present a systematic approach to mean-field theory (MFT) in a general probabilistic setting without assuming a particular model. The mean-field equations derived here may serve as a local, and thus very simple, method for approximate inference in probabilistic models such as Boltzmann machines or Bayesian networks. Our approach is 'model-independent' in the sense that we do not assume a particular type of dependences; in a Bayesian network, for example, we allow arbitrary tables to specify conditional dependences. In general, there are multiple solutions to the mean-field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean-field solutions. Simple approximate expressions for the mixture weights are given. The general formalism derived so far is evaluated for the special case of Bayesian networks. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small and in a very large Bayesian network. The results are compared with the exact results.
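
    A tiny sketch of the kind of local mean-field update the abstract refers to, for a Boltzmann machine with +-1 units: each mean activation is updated from its neighbours' means until a fixed point of the mean-field equations m_i = tanh(b_i + sum_j W_ij m_j) is reached. Different random initializations can land on different solutions, echoing the multiple-solution issue discussed above. The weights, sizes, and damping factor are illustrative.

        import numpy as np

        def mean_field(W, b, m0, n_iter=500, damping=0.5):
            """Iterate the naive mean-field equations m_i = tanh(b_i + sum_j W_ij m_j)."""
            m = m0.copy()
            for _ in range(n_iter):
                m_new = np.tanh(b + W @ m)
                m = damping * m + (1 - damping) * m_new   # damped update for stability
            return m

        # Small random Boltzmann machine; try several starts to expose multiple solutions.
        rng = np.random.default_rng(0)
        n = 10
        W = rng.normal(0, 1.2 / np.sqrt(n), (n, n))
        W = (W + W.T) / 2
        np.fill_diagonal(W, 0.0)                          # no self-couplings
        b = rng.normal(0, 0.1, n)

        for k in range(3):
            m = mean_field(W, b, rng.uniform(-1, 1, n))
            print("solution from start", k, ":", np.round(m, 3))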

  1. Optical coherence tomography as approach for the minimal invasive localization of the germinal disc in ovo before chicken sexing

    NASA Astrophysics Data System (ADS)

    Burkhardt, Anke; Geissler, Stefan; Koch, Edmund

    2010-04-01

    In most industrialized countries a huge number of newly hatched male layer chickens are killed immediately after hatching by maceration or gassing. The reason for killing most of the male chickens of egg-producing breeds is their slow growth rate compared to breeds specialized in meat production. When the egg has been laid, it already contains a small disc of cells on the surface of the yolk known as the blastoderm. This region is about 4 - 5 mm in diameter and contains the information on whether the chick will become male or female, and hence allows sexing of the chicks by spectroscopy and other methods in the unincubated state. Different imaging methods like sonography, 3D X-ray micro computed tomography and magnetic resonance imaging have been used for localization of the blastoderm until now, but were found to be impractical for different reasons. Optical coherence tomography (OCT) enables micrometer-scale, subsurface imaging of biological tissue and could therefore be a suitable technique for accurate localization. The intention of this study is to establish whether OCT can be an appropriate approach for precise in ovo localization.

  2. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
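
    A sketch of the fourth order exponential time differencing Runge-Kutta idea (in the Cox-Matthews form, with the Kassam-Trefethen contour-integral evaluation of the coefficient functions) applied to the focusing nonlinear Schrödinger equation. A Fourier pseudo-spectral treatment of the linear term is used here purely as a convenient stand-in for the paper's local discontinuous Galerkin space discretization; the domain, resolution, and step size are illustrative.

        import numpy as np

        def etdrk4_nls(u0, L_domain=32*np.pi, T=1.0, n=256, h=1e-3):
            """ETDRK4 for i u_t + u_xx + |u|^2 u = 0 with Fourier treatment of u_xx."""
            x = L_domain * np.arange(n) / n
            k = 2 * np.pi / L_domain * np.fft.fftfreq(n, d=1.0/n)
            L = -1j * k**2                               # linear operator in Fourier space
            E, E2 = np.exp(h * L), np.exp(h * L / 2)

            # phi-function coefficients via contour-integral averaging (safe near L = 0).
            M = 32
            r = np.exp(2j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
            LR = h * L[:, None] + r[None, :]
            Q  = h * np.mean((np.exp(LR/2) - 1) / LR, axis=1)
            f1 = h * np.mean((-4 - LR + np.exp(LR)*(4 - 3*LR + LR**2)) / LR**3, axis=1)
            f2 = h * np.mean((2 + LR + np.exp(LR)*(-2 + LR)) / LR**3, axis=1)
            f3 = h * np.mean((-4 - 3*LR - LR**2 + np.exp(LR)*(4 - LR)) / LR**3, axis=1)

            nonlin = lambda v: 1j * np.fft.fft(np.abs(np.fft.ifft(v))**2 * np.fft.ifft(v))
            v = np.fft.fft(u0(x))
            for _ in range(int(round(T / h))):
                Nv = nonlin(v)
                a = E2 * v + Q * Nv;              Na = nonlin(a)
                b = E2 * v + Q * Na;              Nb = nonlin(b)
                c = E2 * a + Q * (2 * Nb - Nv);   Nc = nonlin(c)
                v = E * v + f1 * Nv + 2 * f2 * (Na + Nb) + f3 * Nc
            return x, np.fft.ifft(v)

        # Single-soliton-like initial condition; the mass (L2 norm) should be well conserved.
        x, u = etdrk4_nls(lambda x: 1.0 / np.cosh(x - 16 * np.pi))
        mass0 = np.sum(1.0 / np.cosh(x - 16 * np.pi) ** 2)
        print("relative change in mass:", abs(np.sum(np.abs(u)**2) - mass0) / mass0)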

  3. Stellar mass functions: methods, systematics and results for the local Universe

    NASA Astrophysics Data System (ADS)

    Weigel, Anna K.; Schawinski, Kevin; Bruderer, Claudio

    2016-06-01

    We present a comprehensive method for determining stellar mass functions, and apply it to samples in the local Universe. We combine the classical 1/Vmax approach with STY, a parametric maximum likelihood method, and step-wise maximum likelihood, a non-parametric maximum likelihood technique. In the parametric approach, we assume that the stellar mass function can be modelled by either a single or a double Schechter function and use a likelihood ratio test to determine which model provides a better fit to the data. We discuss how the stellar mass completeness as a function of z biases the three estimators and how it can affect, in particular, the low-mass end of the stellar mass function. We apply our method to Sloan Digital Sky Survey DR7 data in the redshift range from 0.02 to 0.06. We find that the entire galaxy sample is best described by a double Schechter function with the following parameters: log(M*/M⊙) = 10.79 ± 0.01, log(Φ*_1/h^3 Mpc^-3) = -3.31 ± 0.20, α_1 = -1.69 ± 0.10, log(Φ*_2/h^3 Mpc^-3) = -2.01 ± 0.28 and α_2 = -0.79 ± 0.04. We also use morphological classifications from Galaxy Zoo and halo mass, overdensity, central/satellite, colour and specific star formation rate measurements to split the galaxy sample into over 130 subsamples. We determine and present the stellar mass functions and the best-fitting Schechter function parameters for each of these subsamples.
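
    A short helper that evaluates a standard double Schechter mass function (per dex) and plugs in the best-fitting parameters quoted above, which can be useful for sanity-checking the fit or comparing subsamples. Treating the quoted values as exact point estimates and ignoring their uncertainties is of course a simplification.

        import numpy as np

        def double_schechter(log_m, log_m_star, log_phi1, alpha1, log_phi2, alpha2):
            """Number density per dex: sum of two Schechter components sharing one M*."""
            mu = 10.0 ** (log_m - log_m_star)            # M / M*
            phi1, phi2 = 10.0 ** log_phi1, 10.0 ** log_phi2
            return np.log(10) * np.exp(-mu) * mu * (phi1 * mu ** alpha1 + phi2 * mu ** alpha2)

        # Best-fitting parameters for the full sample as quoted in the abstract
        # (M in units of M_sun, Phi* in h^3 Mpc^-3).
        params = dict(log_m_star=10.79, log_phi1=-3.31, alpha1=-1.69,
                      log_phi2=-2.01, alpha2=-0.79)

        for log_m in (9.0, 10.0, 11.0, 11.5):
            print(f"Phi(log M = {log_m}) = {double_schechter(log_m, **params):.3e}  h^3 Mpc^-3 dex^-1")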

  4. Anchor-free localization method for mobile targets in coal mine wireless sensor networks.

    PubMed

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. Firstly, a coal mine wireless sensor network is constructed in underground mines based on ZigBee technology. Then a non-metric MDS algorithm is applied to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 physical ZigBee nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines.
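
    A small illustration of the multi-dimensional scaling building block: classical (metric) MDS recovers relative node coordinates from a matrix of pairwise distances via double-centering and an eigendecomposition. This is a simplification of the non-metric, rank-based variant used in the paper, and the node layout and range noise are synthetic.

        import numpy as np

        def classical_mds(D, dim=2):
            """Recover coordinates (up to rotation/translation) from a pairwise distance matrix."""
            n = len(D)
            J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
            B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
            vals, vecs = np.linalg.eigh(B)
            idx = np.argsort(vals)[::-1][:dim]           # largest eigenvalues first
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

        # Synthetic node layout in a 100 m x 20 m gallery, with noisy range estimates.
        rng = np.random.default_rng(0)
        true_pos = np.column_stack([rng.uniform(0, 100, 15), rng.uniform(0, 20, 15)])
        D = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=2)
        D_noisy = D + rng.normal(0, 0.5, D.shape)
        D_noisy = (D_noisy + D_noisy.T) / 2
        np.fill_diagonal(D_noisy, 0.0)

        est = classical_mds(D_noisy)
        print("recovered coordinates (arbitrary frame):\n", np.round(est[:5], 2))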

  5. Major Approaches to Music Education: An Account of Method.

    ERIC Educational Resources Information Center

    Shehan, Patricia K.

    1986-01-01

    In a continuing effort to improve the music education of students in beginning stages, there is a need for the review of teaching techniques that motivate student learning behaviors. Historically, music methods actively engaged students in the music-making process. The approaches of Dalcroze, Orff, Suzuki, Kodaly, and Gordon continue that…

  6. The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.

    ERIC Educational Resources Information Center

    Buchanan, Patricia A.; Ulrich, Beverly D.

    2001-01-01

    Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…

  7. [Communicative approach of Situational Strategic Planning at the local level: health and equity in Venezuela].

    PubMed

    Heredia-Martínez, Henny Luz; Artmann, Elizabeth; Porto, Silvia Marta

    2010-06-01

    The article discusses the results of operationalizing Situational Strategic Planning adapted to the local level in health, considering the communicative approach and equity in a parish in Venezuela. Two innovative criteria were used: estimated health needs and analysis of the actors' potential for participation. The problems identified were compared to the corresponding article on rights in the Venezuelan Constitution. The study measured inequalities using health indicators associated with the selected problems; equity criteria were incorporated into the action proposals and communicative elements. Priority was assigned to the problem of "low case-resolving capacity in the health services network", and five critical points were selected for the action plan, which finally consisted of 6 operations and 21 actions. The article concludes that the combination of epidemiology and planning expands the situational explanation. Incorporation of the communicative approach and the equity dimension into Situational Strategic Planning allows empowering health management and helps decrease the gaps from inequality.

  8. Frequency-domain localization of alpha rhythm in humans via a maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Patel, Pankaj; Khosla, Deepak; Al-Dayeh, Louai; Singh, Manbir

    1997-05-01

    Generators of spontaneous human brain activity such as the alpha rhythm may be easier and more accurate to localize in the frequency domain than in the time domain, since these generators are characterized by a specific frequency range. We carried out a frequency-domain analysis of synchronous alpha sources by generating equivalent potential maps using the Fourier transform of each channel of electroencephalographic (EEG) recordings. Since the alpha rhythm recorded by EEG scalp measurements is probably produced by several independent generators, a distributed source imaging approach was considered more appropriate than a model based on a single equivalent current dipole. We used an imaging approach based on a Bayesian maximum entropy technique. Reconstructed sources were superimposed on the corresponding anatomy from magnetic resonance imaging. Results from human studies suggest that the reconstructed sources responsible for the alpha rhythm are mainly located in the occipital and parieto-occipital lobes.

  9. The Health Role of Local Area Coordinators in Scotland: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Brown, Michael; Karatzias, Thanos; O'Leary, Lisa

    2013-01-01

    The study set out to explore whether local area coordinators (LACs) and their managers view the health role of LACs as an essential component of their work and identify the health-related activities undertaken by LACs in Scotland. A mixed methods cross-sectional phenomenological study involving local authority service managers (n = 25) and LACs (n…

  10. Estimation of local scale dispersion from local breakthrough curves during a tracer test in a heterogeneous aquifer: the Lagrangian approach.

    PubMed

    Vanderborght, Jan; Vereecken, Harry

    2002-01-01

    The local scale dispersion tensor, D_d, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series or breakthrough curves, BTCs, of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection dispersion parameters: equivalent velocity, v_eq(x), and expected equivalent dispersivity, [λ_eq(x)]. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of the natural-log-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that [λ_eq(x)] increases with increasing travel distance and is much larger than the local scale dispersivity, λ_d. A sensitivity analysis indicates that [λ_eq(x)] is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse to flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing predicted [λ_eq(x)] for a range of D_d values with [λ_eq(x)] obtained from locally measured BTCs, the transverse component of D_d, D_dT, was estimated. The estimated transverse local scale dispersivity, λ_dT = D_dT/U_1 (U_1 = mean advection velocity), is in the order of 10^1-10^2 mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
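
    The first stage of such an analysis, characterizing a locally measured BTC by equivalent convection-dispersion parameters, can be illustrated with a small fitting sketch. It assumes the BTC is adequately described by the 1-D analytical solution for an instantaneous pulse; the travel distance, data arrays and starting guesses below are hypothetical and do not reproduce the Krauthausen analysis.

      import numpy as np
      from scipy.optimize import curve_fit

      def cde_pulse(t, v, D, m, x=2.0):
          """1-D convection-dispersion solution for an instantaneous pulse observed at distance x."""
          t = np.asarray(t, dtype=float)
          return m / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

      def equivalent_parameters(t_obs, c_obs, x):
          """Fit equivalent velocity v_eq and dispersion D_eq to one measured breakthrough curve."""
          f = lambda t, v, D, m: cde_pulse(t, v, D, m, x=x)
          p0 = [x / t_obs[np.argmax(c_obs)], 0.01, 1.0]      # crude starting guesses
          (v_eq, D_eq, m), _ = curve_fit(f, t_obs, c_obs, p0=p0, bounds=(1e-9, np.inf))
          return v_eq, D_eq, D_eq / v_eq                      # last value: equivalent dispersivity

      # Hypothetical example: synthetic BTC measured 2 m downstream, times in days.
      t = np.linspace(0.5, 40, 200)
      c = cde_pulse(t, v=0.1, D=0.02, m=1.0, x=2.0) \
          + 0.001 * np.random.default_rng(1).standard_normal(t.size)
      print(equivalent_parameters(t, c, x=2.0))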

  11. Localization of causal locus in the genome of the brown macroalga Ectocarpus: NGS-based mapping and positional cloning approaches

    PubMed Central

    Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte

    2015-01-01

    Mutagenesis is the only process by which unpredicted biological gene function can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental conditions could not be met at the time. Today, progress in macroalgal genomics and judicious choices of suitable genetic models make identification of the mutated gene possible. This article presents a comparative study of two methods aiming at identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were assembled, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how a narrower localization results from the combination of the two methods. Advantages and drawbacks of these two approaches, as well as their potential transfer to other macroalgae, are discussed. PMID:25745426

  12. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  13. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), "entropy" variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  14. A local quasicontinuum method for 3D multilattice crystalline materials: Application to shape-memory alloys

    NASA Astrophysics Data System (ADS)

    Sorkin, V.; Elliott, R. S.; Tadmor, E. B.

    2014-07-01

    The quasicontinuum (QC) method, in its local (continuum) limit, is applied to materials with a multilattice crystal structure. Cauchy-Born (CB) kinematics, which accounts for the shifts of the crystal motif, is used to relate atomic motions to continuum deformation gradients. To avoid failures of CB kinematics, QC is augmented with a phonon stability analysis that detects lattice period extensions and identifies the minimum required periodic cell size. This approach is referred to as Cascading Cauchy-Born kinematics (CCB). In this paper, the method is described and developed. It is then used, along with an effective interaction potential (EIP) model for shape-memory alloys, to simulate the shape-memory effect and pseudoelasticity in a finite specimen. The results of these simulations show that (i) the CCB methodology is an essential tool that is required in order for QC-type simulations to correctly capture the first-order phase transitions responsible for these material behaviors, and (ii) that the EIP model adopted in this work coupled with the QC/CCB methodology is capable of predicting the characteristic behavior found in shape-memory alloys.

  15. Finite Element approach for Density Functional Theory calculations on locally refined meshes

    SciTech Connect

    Fattebert, J; Hornung, R D; Wissink, A M

    2007-02-23

    We present a quadratic Finite Element approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to small nanocluster calculations are presented.

  16. Using archaeogenomic and computational approaches to unravel the history of local adaptation in crops

    PubMed Central

    Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver

    2015-01-01

    Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analysis that is now possible, and the complexity of the evolutionary process of local adaptation, mean that plant studies are set to move to the genome level and to account for the interaction of genes under selection in systems-level approaches. In this way we can understand how rapidly plants adapted as agriculture expanded across many latitudes. PMID:25487329

  17. A multilevel approach for minimum weight structural design including local and system buckling constraints

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Ramanathan, R. K.

    1977-01-01

    A rational multilevel approach for minimum weight structural design of truss and wing structures including local and system buckling constraints is presented. Overall proportioning of the structure is achieved at the system level subject to strength, displacement and system buckling constraints, while the detailed component designs are carried out separately at the component level satisfying local buckling constraints. Total structural weight is taken to be the objective function at the system level while employing the change in the equivalent system stiffness of the component as the component level objective function. Finite element analysis is used to predict static response while system buckling behavior is handled by incorporating a geometric stiffness matrix capability. Buckling load factors and the corresponding mode shapes are obtained by solving the eigenvalue problem associated with the assembled elastic stiffness and geometric stiffness matrices for the structural system. At the component level various local buckling failure modes are guarded against using semi-empirical formulas. Mathematical programming techniques are employed at both the system and component level.

  18. Comparative analysis of local and consensus quantitative structure-activity relationship approaches for the prediction of bioconcentration factor.

    PubMed

    Piir, G; Sild, S; Maran, U

    2013-01-01

    Quantitative structure-activity relationships (QSARs) are broadly classified as global or local, depending on their molecular constitution. Global models use large and diverse training sets covering a wide range of chemical space. Local models focus on smaller, structurally or chemically similar subsets that are conventionally selected by human experts or, alternatively, using clustering analysis. The current study focuses on the comparative analysis of different clustering algorithms (expectation-maximization, K-means and hierarchical) for seven different descriptor sets as structural characteristics and two rule-based approaches to select subsets for designing local QSAR models. A total of 111 local QSAR models are developed for predicting bioconcentration factor. Predictions from local models were compared with corresponding predictions from the global model. The comparison of coefficients of determination (r²) and standard deviations for local models with similar subsets from the global model shows improved prediction quality in 97% of cases. The descriptor content of the derived QSARs is discussed and analyzed. Local QSAR models were further consolidated within the framework of a consensus approach. All of the consensus approaches increased performance over the global and local models. The consensus approach reduced the number of strongly deviating predictions by evening out prediction errors produced by some local QSARs.
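
    The general workflow described above, clustering the training set, fitting one local model per cluster, and combining local and global predictions into a consensus, might be sketched as follows with scikit-learn. The synthetic descriptor matrix, the Ridge regressor, the cluster count and the equal-weight consensus are illustrative assumptions, not the descriptors, algorithms or weighting used in the study.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      X = rng.standard_normal((300, 10))                               # hypothetical molecular descriptors
      y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)     # hypothetical log BCF values

      # 1. Global model trained on the full, diverse training set.
      global_model = Ridge().fit(X, y)

      # 2. Local models: one per descriptor-space cluster.
      km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
      local_models = {c: Ridge().fit(X[km.labels_ == c], y[km.labels_ == c])
                      for c in range(km.n_clusters)}

      def predict_consensus(X_new):
          """Average the global prediction with the prediction of the matching local model."""
          local_pred = np.array([local_models[c].predict(x[None, :])[0]
                                 for x, c in zip(X_new, km.predict(X_new))])
          return 0.5 * (global_model.predict(X_new) + local_pred)

      print(predict_consensus(X[:5]))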

  19. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level-independent variables were defined, teaching approach (contextual and non-contextual…

  20. An Automated Approach for Localizing Retinal Blood Vessels in Confocal Scanning Laser Ophthalmoscopy Fundus Images.

    PubMed

    Kromer, Robert; Shafin, Rahman; Boelefahr, Sebastian; Klemm, Maren

    In this work, we present a rules-based method for localizing retinal blood vessels in confocal scanning laser ophthalmoscopy (cSLO) images and evaluate its feasibility. A total of 31 healthy participants (17 female; mean age: 64.0 ± 8.2 years) were studied using manual and automatic segmentation. High-resolution peripapillary cSLO scans were acquired. The automated segmentation method consisted of image pre-processing for gray-level homogenization and blood vessel enhancement (morphological opening operation, Gaussian filter, morphological Top-Hat transformation), binary thresholding (entropy-based thresholding operation), and removal of falsely detected isolated vessel pixels. The proposed algorithm was first tested on the publicly available dataset DRIVE, which contains color fundus photographs, and compared to performance results from the literature. Good results were obtained. Monochromatic cSLO images segmented using the proposed method were compared to those manually segmented by two independent observers. For the algorithm, a sensitivity of 0.7542, specificity of 0.8607, and accuracy of 0.8508 were obtained. For the two independent observers, a sensitivity of 0.6579, specificity of 0.9699, and accuracy of 0.9401 were obtained. The results demonstrate that it is possible to localize vessels in monochromatic cSLO images of the retina using a rules-based approach. The performance results are inferior to those obtained using fundus photography, which could be due to the nature of the technology.
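
    A rough sketch of such a rules-based pipeline (gray-level homogenization, morphological vessel enhancement, thresholding, removal of isolated pixels) is given below using scikit-image. The structuring-element sizes, the use of Li's minimum cross-entropy threshold as a stand-in for the paper's entropy-based threshold, and the minimum object size are illustrative assumptions.

      import numpy as np
      from skimage import filters, morphology

      def segment_vessels(img):
          """Rules-based vessel segmentation sketch on a 2-D float image in [0, 1].

          Enhances dark elongated structures, thresholds them, and removes isolated pixels.
          """
          # Gray-level homogenization / background smoothing.
          smoothed = filters.gaussian(morphology.opening(img, morphology.disk(3)), sigma=1.5)
          # Vessel enhancement: black top-hat highlights dark vessels against the background.
          enhanced = morphology.black_tophat(smoothed, morphology.disk(8))
          # Binary thresholding (Li's minimum cross-entropy used here as a stand-in).
          binary = enhanced > filters.threshold_li(enhanced)
          # Removal of falsely detected isolated vessel pixels.
          return morphology.remove_small_objects(binary, min_size=50)

      # Hypothetical usage:
      # from skimage import io, img_as_float
      # mask = segment_vessels(img_as_float(io.imread("fundus.png", as_gray=True)))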

  1. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  2. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  3. Detection of Localized Heat Damage in a Polymer Matrix Composite by Thermo-Elastic Method (Preprint)

    DTIC Science & Technology

    2007-02-01

    AFRL-ML-WP-TP-2007-437; Welter, John. [Only Standard Form 298 report documentation front matter is available for this record; no abstract text is recoverable.]

  4. An automated local and regional seismic event location method based on waveform stacking

    NASA Astrophysics Data System (ADS)

    Grigoli, F.; Cesca, S.; Dahm, T.

    2013-12-01

    Seismic event location using automated procedures is a very important task in microseismic monitoring as well as within early warning applications. Increasingly large datasets recorded by dense networks have recently favoured the development of different automated location methods. These methods must be robust to noise, since microseismic records are often characterized by low signal-to-noise ratios. Most of the standard automated location routines rely on automated phase picking and seismic phase identification (generally only P and S) and are generally based on the minimization of the residuals between the theoretical and observed arrival times of the main seismic phases. While various approaches can accurately pick P onsets, the automatic picking of S onsets is still challenging and poses a significant limit on location performance. We present here a picking-free location method based on the use of different characteristic functions able to identify P and S phases. Both characteristic functions are based on Short-Term-Average/Long-Term-Average (STA/LTA) traces. For P phases, we use as characteristic function the STA/LTA trace of the vertical energy function, whereas for the S phases we use the STA/LTA traces of a function obtained using the Principal Component Analysis (PCA) technique. In order to locate a seismic event, the space of possible locations is scanned and both P and S characteristic functions are stacked along travel time surfaces corresponding to the selected hypocenter. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the coordinates of the seismic event. We show the performance of our method with different applications, at different scales: 1) a set of low-magnitude events recorded by a local network in southern Italy and 2) a set of seismic events recorded by a regional seismic network in Turkey. This work has
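
    The two key ingredients, an STA/LTA characteristic function and its stacking along theoretical travel-time moveouts over a grid of candidate hypocentres, can be sketched as follows. For brevity the sketch uses only a vertical-component P characteristic function and a constant-velocity travel-time model, which are simplifying assumptions not made by the actual method.

      import numpy as np

      def sta_lta(x, fs, sta_win=0.5, lta_win=5.0):
          """Short-term / long-term average ratio of the signal energy."""
          energy = x ** 2
          sta = np.convolve(energy, np.ones(int(sta_win * fs)) / (sta_win * fs), mode="same")
          lta = np.convolve(energy, np.ones(int(lta_win * fs)) / (lta_win * fs), mode="same")
          return sta / (lta + 1e-12)

      def locate(traces, fs, stations, grid, v_p=5000.0):
          """Stack P characteristic functions along constant-velocity travel-time moveouts.

          traces   : (n_stations, n_samples) vertical-component records
          stations : (n_stations, 3) station coordinates in metres
          grid     : (n_nodes, 3) candidate hypocentre coordinates in metres
          """
          cf = np.array([sta_lta(tr, fs) for tr in traces])
          n_samples = traces.shape[1]
          best, best_score = None, -np.inf
          for node in grid:
              # Relative travel times to each station; origin time is scanned implicitly by the shift.
              tt = np.linalg.norm(stations - node, axis=1) / v_p
              shifts = np.round((tt - tt.min()) * fs).astype(int)
              stacked = np.zeros(n_samples)
              for i, s in enumerate(shifts):
                  stacked[:n_samples - s] += cf[i, s:]      # align each CF on the candidate moveout
              score = stacked.max()
              if score > best_score:
                  best, best_score = node, score
          return best, best_score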

  5. Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.

    PubMed

    Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen

    2016-12-20

    Improved ranging accuracy is obtained through a novel ultrasonic sensor ranging algorithm which, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for obtaining the localization of the ultrasonic sensor relative to the inclined plate reference frame. The ultrasonic sensor scanning method is then leveraged for the omni-directional localization of a mobile robot: the ultrasonic sensors are installed on a mobile robot and follow the spin of the robot, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front, up to ±6.21 mm for the lateral, and up to ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.

  6. Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors

    PubMed Central

    Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen

    2016-01-01

    Improved ranging accuracy is obtained through a novel ultrasonic sensor ranging algorithm which, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for obtaining the localization of the ultrasonic sensor relative to the inclined plate reference frame. The ultrasonic sensor scanning method is then leveraged for the omni-directional localization of a mobile robot: the ultrasonic sensors are installed on a mobile robot and follow the spin of the robot, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front, up to ±6.21 mm for the lateral, and up to ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method. PMID:27999396

  7. Comprehensive approach to breast cancer detection using light: photon localization by ultrasound modulation and tissue characterization by spectral discrimination

    NASA Astrophysics Data System (ADS)

    Marks, Fay A.; Tomlinson, Harold W.; Brooksby, Glen W.

    1993-09-01

    A new technique called Ultrasound Tagging of Light (UTL) for imaging breast tissue is described. In this approach, photon localization in turbid tissue is achieved by cross- modulating a laser beam with focussed, pulsed ultrasound. Light which passes through the ultrasound focal spot is `tagged' with the frequency of the ultrasound pulse. The experimental system uses an Argon-Ion laser, a single PIN photodetector, and a 1 MHz fixed-focus pulsed ultrasound transducer. The utility of UTL as a photon localization technique in scattering media is examined using tissue phantoms consisting of gelatin and intralipid. In a separate study, in vivo optical reflectance spectrophotometry was performed on human breast tumors implanted intramuscularly and subcutaneously in nineteen nude mice. The validity of applying a quadruple wavelength breast cancer discrimination metric (developed using breast biopsy specimens) to the in vivo condition was tested. A scatter diagram for the in vivo model tumors based on this metric is presented using as the `normal' controls the hands and fingers of volunteers. Tumors at different growth stages were studied; these tumors ranged in size from a few millimeters to two centimeters. It is expected that when coupled with a suitable photon localization technique like UTL, spectral discrimination methods like this one will prove useful in the detection of breast cancer by non-ionizing means.

  8. An approach for the control method's determination for an interplanetary mission with solar sail

    NASA Astrophysics Data System (ADS)

    Gorbunova, Irina; Starinova, Olga

    2017-01-01

    This article is devoted to the interplanetary motion of a solar sail spacecraft. The authors propose using locally optimal control laws for the solar sail control model. Locally optimal control laws previously derived by the authors are used for the chosen interplanetary missions; these laws can provide rapid changes of the Keplerian elements or stabilize their values. The authors offer an approach for combining these laws. To confirm the correctness of the results, the heliocentric motion of the solar sail spacecraft to the selected planet was simulated. The solar sail spacecraft model, called Helios, was designed by students of Samara State Aerospace University. The heliocentric motion of the spacecraft is defined via Keplerian elements. The combination technique for locally optimal control laws was used to obtain several trajectories for interplanetary missions. The distinctive feature of the proposed method is that the parameters of the osculating elements are used as the destination phase coordinates.

  9. Retinal vessel segmentation: an efficient graph cut approach with retinex and local phase.

    PubMed

    Zhao, Yitian; Liu, Yonghuai; Wu, Xiangqian; Harding, Simon P; Zheng, Yalin

    2015-01-01

    Our application concerns the automated detection of vessels in retinal images to improve understanding of the disease mechanism, diagnosis and treatment of retinal and a number of systemic diseases. We propose a new framework for segmenting retinal vasculatures with much improved accuracy and efficiency. The proposed framework consists of three technical components: Retinex-based image inhomogeneity correction, local phase-based vessel enhancement and graph cut-based active contour segmentation. These procedures are applied in the following order. Underpinned by the Retinex theory, the inhomogeneity correction step aims to address challenges presented by the image intensity inhomogeneities, and the relatively low contrast of thin vessels compared to the background. The local phase enhancement technique is employed to enhance vessels for its superiority in preserving the vessel edges. The graph cut-based active contour method is used for its efficiency and effectiveness in segmenting the vessels from the enhanced images using the local phase filter. We have demonstrated its performance by applying it to four public retinal image datasets (3 datasets of color fundus photography and 1 of fluorescein angiography). Statistical analysis demonstrates that each component of the framework can provide the level of performance expected. The proposed framework is compared with widely used unsupervised and supervised methods, showing that the overall framework outperforms its competitors. For example, the achieved sensitivity (0.744), specificity (0.978) and accuracy (0.953) for the DRIVE dataset are very close to those of the manual annotations obtained by the second observer.

  10. Beyond the Melnikov method: A computer assisted approach

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Zgliczyński, Piotr

    2017-01-01

    We present a Melnikov type approach for establishing transversal intersections of stable/unstable manifolds of perturbed normally hyperbolic invariant manifolds (NHIMs). The method is based on a new geometric proof of the normally hyperbolic invariant manifold theorem, which establishes the existence of a NHIM, together with its associated invariant manifolds and bounds on their first and second derivatives. We do not need to know the explicit formulas for the homoclinic orbits prior to the perturbation. We also do not need to compute any integrals along such homoclinics. All needed bounds are established using rigorous computer assisted numerics. Lastly, and most importantly, the method establishes intersections for an explicit range of parameters, and not only for perturbations that are 'small enough', as is the case in the classical Melnikov approach.

  11. A non-local approach for image super-resolution using intermodality priors

    PubMed Central

    Rousseau, François

    2010-01-01

    Image enhancement is of great importance in medical imaging where image resolution remains a crucial point in many image analysis algorithms. In this paper, we investigate brain hallucination (Rousseau, 2008), or generating a high-resolution brain image from an input low-resolution image, with the help of another high-resolution brain image. We propose an approach for image super-resolution by using anatomical intermodality priors from a reference image. Contrary to interpolation techniques, in order to be able to recover fine details in images, the reconstruction process is based on a physical model of image acquisition. Another contribution to this inverse problem is a new regularization approach that uses an example-based framework integrating non-local similarity constraints to handle in a better way repetitive structures and texture. The effectiveness of our approach is demonstrated by experiments on realistic Brainweb Magnetic Resonance images and on clinical images from ADNI, generating automatically high-quality brain images from low-resolution input. PMID:20580893

  12. A non-local approach for image super-resolution using intermodality priors.

    PubMed

    Rousseau, François

    2010-08-01

    Image enhancement is of great importance in medical imaging where image resolution remains a crucial point in many image analysis algorithms. In this paper, we investigate brain hallucination (Rousseau, 2008), or generating a high-resolution brain image from an input low-resolution image, with the help of another high-resolution brain image. We propose an approach for image super-resolution by using anatomical intermodality priors from a reference image. Contrary to interpolation techniques, in order to be able to recover fine details in images, the reconstruction process is based on a physical model of image acquisition. Another contribution to this inverse problem is a new regularization approach that uses an example-based framework integrating non-local similarity constraints to handle in a better way repetitive structures and texture. The effectiveness of our approach is demonstrated by experiments on realistic Brainweb Magnetic Resonance images and on clinical images from ADNI, generating automatically high-quality brain images from low-resolution input.

  13. SIR model with local and global infective contacts: A deterministic approach and applications.

    PubMed

    Maltz, Alberto; Fabricius, Gabriel

    2016-12-01

    An epidemic model with births and deaths is considered on a two-dimensional L×L lattice. Each individual can have global infective contacts according to the standard susceptible-infected-recovered (SIR) model rules or local infective contacts with their nearest neighbors. We propose a deterministic approach to this model and, for the parameters corresponding to pertussis and rubella in the prevaccine era, verify that there is a close agreement with the stochastic simulations when epidemic spread or endemic stationarity is considered. We also find that our approach captures the characteristic features of the dynamic behavior of the system after a sudden decrease in global contacts that may arise as a consequence of health care measures. By using the deterministic approach, we are able to characterize the exponential growth of the epidemic behavior and analyze the stability of the system at the stationary values. Since the deterministic approximation captures the essential features of the disease transmission dynamics of the stochastic model, it provides a useful tool for performing systematic studies as a function of the model parameters. We give an example of this potentiality by analyzing the likelihood of the endemic state to become extinct when the weight of the global contacts is drastically reduced.
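
    One simple way to combine local and global infective contacts in a deterministic update, not necessarily the exact formulation used in the paper, is to let each susceptible site feel a force of infection that mixes the global prevalence with the prevalence of its nearest neighbours, as in the schematic NumPy sketch below (all parameter values are illustrative).

      import numpy as np

      def step(S, I, R, beta=0.4, w=0.5, gamma=0.1, mu=0.0005):
          """One deterministic update of per-site S/I/R fractions on an L x L lattice.

          The force of infection mixes the global prevalence (weight 1 - w) with the mean
          prevalence of the four nearest neighbours (weight w). Deaths from the I and R
          compartments are balanced by susceptible births, so the total is conserved.
          """
          global_prev = I.mean()
          local_prev = 0.25 * (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
                               np.roll(I, 1, 1) + np.roll(I, -1, 1))
          force = beta * ((1.0 - w) * global_prev + w * local_prev)
          new_inf = force * S
          S_next = S - new_inf + mu * (I + R)
          I_next = I + new_inf - gamma * I - mu * I
          R_next = R + gamma * I - mu * R
          return S_next, I_next, R_next

      L = 100
      S = np.full((L, L), 0.99); I = np.full((L, L), 0.01); R = np.zeros((L, L))
      for _ in range(500):
          S, I, R = step(S, I, R)
      print(float(I.mean()))   # approach to the endemic level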

  14. The contour method: a new approach in experimental mechanics

    SciTech Connect

    Prime, Michael B

    2009-01-01

    The recently developed contour method can measure complex residual-stress maps in situations where other measurement methods cannot. This talk first describes the principle of the contour method. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contour of the resulting new surface, which will not be flat if residual stresses are relaxed by the cutting, is then measured. Finally, a conceptually simple finite element analysis determines the original residual stresses from the measured contour. Next, this talk gives several examples of applications. The method is validated by comparing with neutron diffraction measurements in an indented steel disk and in a friction stir weld between dissimilar aluminum alloys. Several applications are shown that demonstrate the power of the contour method: large aluminum forgings, railroad rails, and welds. Finally, this talk discusses why the contour method is a significant departure from conventional experimental mechanics. Other relaxation methods, for example hole-drilling, can only measure a 1-D profile of residual stresses, and yet they require a complicated inverse calculation to determine the stresses from the strain data. The contour method gives a 2-D stress map over a full cross-section, yet a direct calculation is all that is needed to reduce the data. The reason for these advantages lies in a subtle but fundamental departure from conventional experimental mechanics. Applying new technology to old methods will not give similar advances, but the new approach also introduces new errors.

  15. Low-frequency broadband sound source localization using an adaptive normal mode back-propagation approach in a shallow-water ocean.

    PubMed

    Lin, Ying-Tsong; Newhall, Arthur E; Lynch, James F

    2012-02-01

    A variety of localization methods with normal mode theory have been established for localizing low frequency (below a few hundred Hz), broadband signals in a shallow water environment. Gauss-Markov inverse theory is employed in this paper to derive an adaptive normal mode back-propagation approach. Joining with the maximum a posteriori mode filter, this approach is capable of separating signals from noisy data so that the back-propagation will not have significant influence from the noise. Numerical simulations are presented to demonstrate the robustness and accuracy of the approach, along with comparisons to other methods. Applications to real data collected at the edge of the continental shelf off New Jersey, USA are presented, and the effects of water column fluctuations caused by nonlinear internal waves and shelfbreak front variability are discussed.

  16. Robust extraction of local structures by the minimum beta-divergence method.

    PubMed

    Mollah, Md Nurul Haque; Sultana, Nayeema; Minami, Mihoko; Eguchi, Shinto

    2010-03-01

    This paper discusses a new highly robust learning algorithm for exploring local principal component analysis (PCA) structures in which the observed data follow one of several heterogeneous PCA models. The proposed method is formulated by minimizing beta-divergence. It searches for a local PCA structure based on an initial location of the shifting parameter and a value for the tuning parameter beta. If the initial choice of the shifting parameter belongs to a data cluster, then the proposed method detects the local PCA structure of that data cluster, ignoring data in other clusters as outliers. We discuss the selection procedures for the tuning parameter beta and the initial value of the shifting parameter mu in this article. We demonstrate the performance of the proposed method by simulation. Finally, we compare the proposed method with a method based on a finite mixture model.

  17. The Rise and Attenuation of the Basic Education Programme (BEP) in Botswana: A Global-Local Dialectic Approach

    ERIC Educational Resources Information Center

    Tabulawa, Richard

    2011-01-01

    Using a global-local dialectic approach, this paper traces the rise of the basic education programme in the 1980s and 1990s in Botswana and its subsequent attenuation in the 2000s. Amongst the local forces that led to the rise of BEP were Botswana's political project of nation-building; the country's dire human resources situation in the decades…

  18. The EDIC Method: An Engaging and Comprehensive Approach for Creating Health Department Workforce Development Plans.

    PubMed

    Grimm, Brandon L; Brandert, Kathleen; Palm, David; Svoboda, Colleen

    2016-09-29

    In 2013, the Nebraska Department of Health & Human Services, Division of Public Health (Nebraska's State Health Department) and the University of Nebraska Medical Center, College of Public Health developed a comprehensive approach to assess workforce training needs. This article outlines the method used to assess the education and training needs of Division staff and to develop comprehensive workforce development plans to address those needs. The EDIC method (Engage, Develop, Identify, and Create) includes the following four phases: (1) Engage Stakeholders, (2) Develop Assessment, (3) Identify Training Needs, and (4) Create Development Plans. The EDIC method provided a process grounded in science and practice, allowed input, and produced buy-in from staff at all levels throughout the Division of Public Health. This type of process provides greater assurance that the most important gaps in skills and competencies will be identified. Although it is a comprehensive approach, it can be replicated at the state or local level across the country.

  19. A finite-volume Eulerian-Lagrangian localized adjoint method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods for solute transport problems that are dominated by advection. FVELLAM systematically conserves mass globally with all types of boundary conditions. Integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking of characteristic lines intersecting inflow boundaries. FVELLAM extends previous results by obtaining mass conservation locally on Lagrangian space-time elements. -from Authors

  20. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing the reliance on individual knowledge as well as the subjectivity of the decision-making process. The described benefits provided by the system favor a better performance of construction projects. PMID:24453925

  1. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing the reliance on individual knowledge as well as the subjectivity of the decision-making process. The described benefits provided by the system favor a better performance of construction projects.

  2. Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2016-10-01

    As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
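
    The two families of motion estimators being compared can be illustrated with a short sketch: OpenCV's Farnebäck dense optical flow on one hand and a naive exhaustive-search block matcher on the other. The block size, search range and Farnebäck parameters below are arbitrary choices, not those used in the evaluation.

      import numpy as np
      import cv2

      def farneback_flow(prev, curr):
          """Dense optical flow (Farnebäck): one (dx, dy) vector per pixel."""
          return cv2.calcOpticalFlowFarneback(prev, curr, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)

      def block_matching(prev, curr, block=16, search=7):
          """Exhaustive-search block matching: one (dx, dy) vector per block."""
          h, w = prev.shape
          vectors = np.zeros((h // block, w // block, 2), dtype=int)
          for by in range(h // block):
              for bx in range(w // block):
                  y0, x0 = by * block, bx * block
                  ref = curr[y0:y0 + block, x0:x0 + block].astype(np.float32)
                  best, best_err = (0, 0), np.inf
                  for dy in range(-search, search + 1):
                      for dx in range(-search, search + 1):
                          y1, x1 = y0 + dy, x0 + dx
                          if 0 <= y1 and y1 + block <= h and 0 <= x1 and x1 + block <= w:
                              cand = prev[y1:y1 + block, x1:x1 + block].astype(np.float32)
                              err = np.sum(np.abs(ref - cand))   # sum of absolute differences
                              if err < best_err:
                                  best, best_err = (dx, dy), err
                  vectors[by, bx] = best
          return vectors

      # Hypothetical usage with two consecutive grayscale frames:
      # prev, curr = cv2.imread("f0.png", 0), cv2.imread("f1.png", 0)
      # flow = farneback_flow(prev, curr); mv = block_matching(prev, curr)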

  3. The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems

    NASA Technical Reports Server (NTRS)

    Cockburn, Bernardo; Shu, Chi-Wang

    1997-01-01

    In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods are an extension of the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries, for convection dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L²-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.

  4. A Hidden Markov Model Approach for Simultaneously Estimating Local Ancestry and Admixture Time Using Next Generation Sequence Data in Samples of Arbitrary Ploidy

    PubMed Central

    Nielsen, Rasmus

    2017-01-01

    Admixture—the mixing of genomes from divergent populations—is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which is not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data, rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy—i.e. 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than has previously been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is least efficient in low-recombination regions. Finally we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry
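
    The core of such an approach can be illustrated with a deliberately simplified sketch: a two-state (ancestry A / ancestry B) haploid hidden Markov model whose emissions are binomial read counts and whose switch probability between adjacent sites grows with the recombination distance scaled by the time since admixture. Maximizing the returned log-likelihood over the admixture time mimics, in miniature, the joint estimation described above; the arbitrary-ploidy model of the paper is considerably richer, and all names and parameters below are hypothetical.

      import numpy as np
      from scipy.stats import binom

      def forward_loglik(reads_alt, reads_total, freq_A, freq_B, rec_dist, t_admix, prop_A=0.5):
          """Scaled forward algorithm for a 2-state haploid ancestry HMM on read pileups.

          reads_alt, reads_total : per-site alternate-allele and total read counts
          freq_A, freq_B         : per-site alternate-allele frequencies in the two source populations
          rec_dist               : recombination distance (Morgans) between consecutive sites
          t_admix                : generations since admixture (controls ancestry tract lengths)
          """
          # Emission probabilities: reads drawn binomially with the source population's allele frequency.
          emit = np.stack([binom.pmf(reads_alt, reads_total, freq_A),
                           binom.pmf(reads_alt, reads_total, freq_B)], axis=1)   # (n_sites, 2)
          pi = np.array([prop_A, 1.0 - prop_A])
          alpha = pi * emit[0]
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()
          for s in range(1, len(reads_alt)):
              # Probability that an ancestry junction falls between sites s-1 and s.
              p_switch = 1.0 - np.exp(-rec_dist[s] * t_admix)
              T = np.array([[1 - p_switch * (1 - prop_A), p_switch * (1 - prop_A)],
                            [p_switch * prop_A,           1 - p_switch * prop_A]])
              alpha = (alpha @ T) * emit[s]
              loglik += np.log(alpha.sum())
              alpha /= alpha.sum()
          return loglik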

  5. Methods for Sight Word Recognition in Kindergarten: Traditional Flashcard Method vs. Multisensory Approach

    ERIC Educational Resources Information Center

    Phillips, William E.; Feng, Jay

    2012-01-01

    A quasi-experimental action research with a pretest-posttest same-subject design was implemented to determine whether the flashcard method and the multisensory approach have different effects on kindergarteners' achievement in sight word recognition, and which method is more effective if there is a difference. Instrumentation for pretest and…

  6. 3-D Localization of Virtual Sound Sources: Effects of Visual Environment, Pointing Method, and Training

    PubMed Central

    Majdak, Piotr; Goupell, Matthew J.; Laback, Bernhard

    2010-01-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE) (darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In experiment 2, subjects were provided sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies. PMID:20139459

  7. Statistics of Poincaré recurrences in local and global approaches

    NASA Astrophysics Data System (ADS)

    Anishchenko, Vadim S.; Astakhov, Sergey V.; Boev, Yaroslav I.; Biryukova, Nadezhda I.; Strelkova, Galina I.

    2013-12-01

    The basic statistical characteristics of the Poincaré recurrence sequence are obtained numerically for the logistic map in the chaotic regime. The mean values, variance and recurrence distribution density are calculated and their dependence on the return region size is analyzed. It is verified that the Afraimovich-Pesin dimension may be evaluated by the Kolmogorov-Sinai entropy. The peculiarities of the influence of noise on the recurrence statistics are studied in local and global approaches. It is shown that the obtained numerical data are in complete agreement with the theoretical results. It is demonstrated that the Poincaré recurrence theory can be applied to diagnose effects of stochastic resonance and chaos synchronization and to calculate the fractal dimension of a chaotic attractor.
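
    The basic numerical experiment, collecting Poincaré recurrence times of the chaotic logistic map to a small return interval and computing their mean and variance, can be reproduced with a few lines of NumPy; the return-region centre and size below are arbitrary choices.

      import numpy as np

      def recurrence_times(x0=0.3, r=4.0, eps=0.01, center=0.5, n_iter=1_000_000):
          """Poincaré recurrence times to [center - eps, center + eps] for x -> r*x*(1-x)."""
          times, x, last_visit = [], x0, None
          for n in range(n_iter):
              x = r * x * (1.0 - x)
              if abs(x - center) < eps:
                  if last_visit is not None:
                      times.append(n - last_visit)
                  last_visit = n
          return np.array(times)

      taus = recurrence_times()
      print("mean recurrence time:", taus.mean())
      print("variance:", taus.var())
      # Kac's lemma: for an ergodic map the mean recurrence time scales as 1 / measure(return region),
      # so shrinking eps increases the mean return time accordingly.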

  8. Integrative Approaches for the Identification and Localization of Specialized Metabolites in Tripterygium Roots

    PubMed Central

    Fischedick, Justin T.; Lange, Malte F.; Poirier, Brenton C.

    2017-01-01

    Members of the genus Tripterygium are known to contain an astonishing diversity of specialized metabolites. The lack of authentic standards has been an impediment to the rapid identification of such metabolites in extracts. We employed an approach that involves the searching of multiple, complementary chromatographic and spectroscopic data sets against the Spektraris database to speed up the metabolite identification process. Mass spectrometry-based imaging indicated a differential localization of triterpenoids to the periderm and sesquiterpene alkaloids to the cortex layer of Tripterygium roots. We further provide evidence that triterpenoids are accumulated to high levels in cells that contain suberized cell walls, which might indicate a mechanism for storage. To our knowledge, our data provide first insights into the cell type specificity of metabolite accumulation in Tripterygium and set the stage for furthering our understanding of the biological implications of specialized metabolites in this genus. PMID:27864443

  9. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified through a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested localization algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure data measured on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.

  10. Local adaptive approach toward segmentation of microscopic images of activated sludge flocs

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon

    2015-11-01

    The activated sludge process is a widely used method to treat domestic and industrial effluents. The conditions of an activated sludge wastewater treatment plant (AS-WWTP) are related to the morphological properties of flocs (microbial aggregates) and filaments, and these are required to be monitored for normal operation of the plant. Image processing and analysis is a potential time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu thresholding-based local adaptive algorithms with irregular illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-mean. The comparisons are done using a number of region- and nonregion-based metrics at different microscopic magnifications and quantifications of flocs. The performance metrics show that the proposed algorithms performed better and, in some cases, were comparable to the state-of-the-art algorithms. The performance metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. The region-based metrics such as false negative ratio, sensitivity, and negative predictive value gave inconsistent results compared to the other segmentation assessment metrics.
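
    A minimal sketch of an Otsu-based local adaptive segmentation with irregular-illumination compensation is given below using scikit-image. The Gaussian background sigma, the tile size and the clean-up threshold are illustrative values and do not correspond to the two modules proposed in the paper.

      import numpy as np
      from skimage import filters, morphology

      def local_otsu_segment(img, background_sigma=51, block=64):
          """Local adaptive Otsu segmentation sketch for a bright-field floc image in [0, 1]."""
          # 1. Illumination compensation: remove the slowly varying background.
          background = filters.gaussian(img, sigma=background_sigma)
          corrected = np.clip(img - background + background.mean(), 0.0, 1.0)
          # 2. Local Otsu threshold computed tile by tile (flocs are darker than the background).
          mask = np.zeros_like(corrected, dtype=bool)
          for y in range(0, corrected.shape[0], block):
              for x in range(0, corrected.shape[1], block):
                  tile = corrected[y:y + block, x:x + block]
                  if tile.std() > 1e-3:                      # skip empty / flat tiles
                      mask[y:y + block, x:x + block] = tile < filters.threshold_otsu(tile)
          # 3. Clean-up of spurious foreground pixels.
          return morphology.remove_small_objects(mask, min_size=30)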

  11. An Approach to Estimate the Localized Effects of an Aircraft Crash on a Facility

    SciTech Connect

    Kimura, C; Sanzo, D; Sharirli, M

    2004-04-19

    Aircraft crashes are an element of external events required to be analyzed and documented in facility Safety Analysis Reports (SARs) and Nuclear Explosive Safety Studies (NESSs). This paper discusses the localized effects of an aircraft crash impact into the Device Assembly Facility (DAF) located at the Nevada Test Site (NTS), given that the aircraft hits the facility. This was done to gain insight into the robustness of the DAF and to account for the special features of the DAF that enhance its ability to absorb the effects of an aircraft crash. For the purpose of this paper, localized effects are considered to be only perforation or scabbing of the facility. This paper presents an extension to the aircraft crash risk methodology of Department of Energy (DOE) Standard 3014. This extension applies to facilities that may find it necessary or desirable to estimate the localized effects of an aircraft crash hit on a facility of nonuniform construction or one that is shielded in certain directions by surrounding terrain or buildings. This extension is not proposed as a replacement to the aircraft crash risk methodology of DOE Standard 3014 but rather as an alternate method to cover situations that were not considered.

  12. Multiple Dipole Sources Localization from the Scalp EEG Using a High-resolution Subspace Approach.

    PubMed

    Ding, Lei; He, Bin

    2005-01-01

    We have developed a new algorithm, FINE, to enhance the spatial resolution and localization accuracy for closely-spaced sources, in the framework of subspace source localization. Computer simulations were conducted in the present study to evaluate the performance of FINE, as compared with classic subspace source localization algorithms, i.e. MUSIC and RAP-MUSIC, in a realistic geometry head model by means of the boundary element method (BEM). The results show that FINE could distinguish superficial simulated sources, with distance as low as 8.5 mm, and deep simulated sources, with distance as low as 16.3 mm. Our results also show that the accuracy of source orientation estimates from FINE is better than that of MUSIC and RAP-MUSIC for closely-spaced sources. Motor potentials, obtained during finger movements in a human subject, were analyzed using FINE. The detailed neural activity distribution within the contralateral premotor areas and supplemental motor areas (SMA) is revealed by FINE as compared with MUSIC. The present study suggests that FINE has excellent spatial resolution in imaging neural sources.
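
    The sketch below illustrates generic subspace scanning of the MUSIC family on which FINE builds: the signal subspace is estimated from the data covariance and each candidate location is scored by the principal-angle correlation between its lead field and that subspace. The leadfield() helper is hypothetical, and FINE's projection onto location-specific subsets of the subspace is not reproduced here.

      # Generic subspace (MUSIC-type) scanning sketch for EEG dipole localization.
      import numpy as np

      def subspace_scan(eeg, candidates, leadfield, n_sources):
          """eeg: (n_electrodes, n_samples); leadfield(r) -> (n_electrodes, 3) dipole gain matrix.
          Returns a subspace-correlation score for each candidate location."""
          cov = eeg @ eeg.T / eeg.shape[1]
          _, vecs = np.linalg.eigh(cov)
          Us = vecs[:, -n_sources:]                  # signal-subspace basis (largest eigenvalues)
          score = np.empty(len(candidates))
          for i, r in enumerate(candidates):
              G = leadfield(r)                       # hypothetical forward-model helper
              Q, _ = np.linalg.qr(G)                 # orthonormal basis of the gain columns
              s = np.linalg.svd(Us.T @ Q, compute_uv=False)
              score[i] = s.max()                     # largest subspace correlation
          return score                               # peaks indicate likely source locations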

  13. Rapid OTAN method for localizing unsaturated lipids in lung tissue sections.

    PubMed

    Negi, D S; Stephens, R J

    1981-05-01

    The OTAN treatment, which is the only histochemical method available at present for the simultaneous localization of hydrophobic and hydrophilic unsaturated lipids in tissue sections, requires unduly long exposure to OsO4 and use of free-floating sections, which makes handling the sections difficult and often results in their loss or damage. Simple modifications using OsO4 treatment at 37°C and slide-mounted sections eliminate the practical drawbacks of the existing method and provide as good or better localization in less than one-eighth of the time. The modified method is applicable to fixed as well as fresh frozen tissues.

  14. Esthesioneuroblastoma: Good Local Control of Disease by Endoscopic and Endoscope Assisted Approach. Is it Possible?

    PubMed

    Mohindra, Satyawati; Dhingra, Shruti; Mohindra, Sandeep; Kumar, Narendra; Gupta, Bhumika

    2014-09-01

    To present a short report on nine patients with esthesioneuroblastoma, managed endoscopically or endoscope-assisted; to describe the technique and discuss the results at an average of 36.7 months of follow-up. A retrospective study in a tertiary care centre. The present communication describes a series of 9 cases harbouring esthesioneuroblastoma, 6 managed endoscopically and 3 endoscope-assisted between January 2005 and December 2009. All nine patients remained free of disease at the primary site by endoscopic and radiological evaluation at an average of 36.7 months of follow-up. One of the patients developed cutaneous and systemic metastases, for which she received chemotherapy, and another died during the postoperative period due to unrelated causes. None of the patients showed recurrent or residual disease locally. The endoscopic and endoscope-assisted approaches provide a cosmetically better and surgically comparable outcome for local control of disease in early stages of esthesioneuroblastoma in expert hands, without significant complications.

  15. A comparison of FEM-based inversion algorithms, Local frequency estimation and direct inversion approach used in MR elastography.

    PubMed

    Honarvar, Mohammad; Sahebjavaher, Ramin; Rohling, Robert; Salcudean, Septimiu

    2017-03-22

    In quantitative elastography, maps of the mechanical properties of soft tissue, or elastograms, are calculated from the measured displacement data by solving an inverse problem. The model assumptions have a significant effect on elastograms. Motivated by the high sensitivity of imaging results to the model assumptions for in-vivo Magnetic Resonance Elastography (MRE) of the prostate, we compared elastograms obtained with four different methods. Two FEM-based methods developed by our group were compared with two other commonly used methods, the Local Frequency Estimator (LFE) and curl-based Direct Inversion (c-DI). All the methods assume a linear isotropic elastic model, but they vary in their assumptions, such as local homogeneity or incompressibility, and in the specific approach used. We report results using simulations, phantom, ex-vivo and in-vivo data. The simulation and phantom studies show that, for regions with an inclusion, the contrast-to-noise ratio (CNR) for the FEM methods is about 3-5 times higher than that for LFE and c-DI, and the RMS error is about half. The LFE method produces very smooth results (i.e. low CNR) and is fast. c-DI is faster than the FEM methods but is only accurate in areas where elasticity variations are small. The artifacts resulting from the homogeneity assumption in c-DI are detrimental in regions with large variations. The ex-vivo and in-vivo results show trends similar to the simulation and phantom studies. The c-FEM method is more sensitive to noise than the mixed-FEM method due to its higher-order derivatives. This is especially evident at lower frequencies, where the wave curvature is smaller and the method is more prone to such errors, causing a discrepancy in the absolute values between the mixed-FEM and c-FEM results in vivo. In general, the proposed finite element methods use fewer simplifying assumptions and outperform the other methods, but they are computationally more expensive.
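
    For orientation, the following sketch shows the simplest algebraic direct inversion under local-homogeneity and incompressibility assumptions, where the shear modulus is estimated voxel-wise from the ratio of inertial to elastic terms of the Helmholtz equation. It is a generic illustration, not the authors' curl-based c-DI or FEM formulations.

      # Illustrative direct (Helmholtz) inversion: mu ~ -rho * w^2 * u / laplacian(u)
      # for a time-harmonic displacement component u at angular frequency w.
      import numpy as np

      def direct_inversion(u, freq_hz, voxel_m, rho=1000.0):
          """u: complex 3D displacement field (one component); voxel_m: spacing in m.
          Returns a shear-modulus estimate (Pa) at each voxel."""
          w = 2.0 * np.pi * freq_hz
          lap = np.zeros_like(u)
          for ax in range(u.ndim):                       # second-order central differences (periodic edges for brevity)
              lap += (np.roll(u, -1, ax) - 2.0 * u + np.roll(u, 1, ax)) / voxel_m ** 2
          mu = -rho * w ** 2 * u / (lap + 1e-12)         # avoid division by zero
          return np.abs(mu)                              # magnitude as the elastogram

      # In practice several displacement components/frequencies are combined and the
      # field is smoothed or curl-filtered before inversion to suppress noise.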

  16. A local fuzzy method based on “p-strong” community for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Yi, Shen; Gang, Ren; Yang, Liu; Jia-Li, Xu

    2016-06-01

    In this paper, we propose a local fuzzy method based on the idea of a “p-strong” community to detect disjoint and overlapping communities in networks. In the method, a refined agglomeration rule is designed for agglomerating nodes into local communities, and overlapping nodes are detected based on the idea of making each community strong. We propose a contribution coefficient to measure the contribution of an overlapping node to each of the communities it belongs to, and the fuzzy coefficients of the overlapping node can be obtained by normalizing its contribution coefficients over all of its communities. The running time of our method is analyzed and is shown to vary linearly with network size. We investigate our method on computer-generated networks and real networks. The testing results indicate that the accuracy of our method in detecting disjoint communities is higher than those of the existing local methods and that our method is efficient for detecting overlapping nodes with fuzzy coefficients. Furthermore, the local optimizing scheme used in our method allows us to partly solve the resolution problem of the global modularity. Project supported by the National Natural Science Foundation of China (Grant Nos. 51278101 and 51578149), the Science and Technology Program of Ministry of Transport of China (Grant No. 2015318J33080), the Jiangsu Provincial Post-doctoral Science Foundation, China (Grant No. 1501046B), and the Fundamental Research Funds for the Central Universities, China (Grant No. Y0201500219).
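
    A minimal sketch of the fuzzy-coefficient step is given below: an overlapping node's contribution to each of its communities is scored and the scores are normalized to sum to one. The particular contribution measure used here (fraction of the node's edges falling inside the community) is an illustrative stand-in for the coefficient defined in the paper.

      # Fuzzy coefficients for an overlapping node, with a simple contribution measure.
      import networkx as nx

      def fuzzy_coefficients(graph, node, communities):
          """communities: list of node sets containing `node`; returns {community index: coefficient}."""
          neighbors = set(graph.neighbors(node))
          contrib = {idx: len(neighbors & comm) / max(len(neighbors), 1)
                     for idx, comm in enumerate(communities)}
          total = sum(contrib.values()) or 1.0
          return {idx: c / total for idx, c in contrib.items()}   # normalized fuzzy coefficients

      # G = nx.karate_club_graph()
      # coeffs = fuzzy_coefficients(G, 2, [set(range(0, 17)), set(range(17, 34))])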

  17. Remotely actuated localized pressure and heat apparatus and method of use

    NASA Technical Reports Server (NTRS)

    Merret, John B. (Inventor); Taylor, DeVor R. (Inventor); Wheeler, Mark M. (Inventor); Gale, Dan R. (Inventor)

    2004-01-01

    Apparatus and method for the use of a remotely actuated localized pressure and heat apparatus for the consolidation and curing of fiber elements in structures. The apparatus includes members for clamping the desired portion of the fiber elements to be joined, pressure members and/or heat members. The method is directed to the application and use of the apparatus.

  18. A Local Discontinuous Galerkin Method for the Complex Modified KdV Equation

    SciTech Connect

    Li Wenting; Jiang Kun

    2010-09-30

    In this paper, we develop a local discontinuous Galerkin (LDG) method for solving the complex modified KdV (CMKdV) equation. The LDG method has the flexibility for arbitrary h and p adaptivity. We prove L² stability for general solutions.

  19. Path-integral Monte Carlo method for the local Z2 Berry phase.

    PubMed

    Motoyama, Yuichi; Todo, Synge

    2013-02-01

    We present a loop cluster algorithm Monte Carlo method for calculating the local Z2 Berry phase of quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point.

  20. Identification of inelastic parameters based on deep drawing forming operations using a global-local hybrid Particle Swarm approach

    NASA Astrophysics Data System (ADS)

    Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.

    2016-04-01

    Application of optimization techniques to the identification of inelastic material parameters has substantially increased in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, of the Nelder-Mead downhill simplex algorithm, of Particle Swarm Optimization (PSO), and of a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has proven to be the best strategy, combining the ability of PSO to approach the basin of attraction of the global minimum with the efficiency demonstrated by the Nelder-Mead algorithm in obtaining the minimum itself.
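
    The global-local hybridization can be sketched as follows: a basic PSO pass explores the fitness surface, after which the best particle is handed to a Nelder-Mead polish. The quadratic test objective and all algorithm settings are placeholders; in the paper the objective is the misfit between simulated and measured deep-drawing responses.

      # Hedged sketch of a PSO + Nelder-Mead global-local hybrid.
      import numpy as np
      from scipy.optimize import minimize

      def pso_nelder_mead(objective, bounds, n_particles=30, n_iter=100, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          x = rng.uniform(lo, hi, (n_particles, len(lo)))          # particle positions
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)]
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)]
          # local refinement of the best PSO solution
          return minimize(objective, gbest, method="Nelder-Mead").x

      # params = pso_nelder_mead(lambda p: np.sum((p - 1.0) ** 2), [(-5, 5)] * 4)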

  1. The Emory method: a modified approach to Norplant implants removal.

    PubMed

    Sarma, S P; Hatcher, R

    1994-06-01

    Norplant implants were removed from fifty (50) patients using a modified approach to Norplant implant removal (Emory Method). Eighty-eight percent (88%) of the removals were accomplished in less than 10 minutes using this technique. The average time for removal of Norplant implants from 50 women included in the current study was 8 minutes. The Emory Method for Norplant implant removal includes three steps which are different from the technique developed by the Population Council. More anesthesia, a slightly longer incision and vigorous disruption of the tissue encapsulation surrounding the implants are recommended. The Emory Method is fast, safe and easy to perform. It has been successfully taught to over twenty-five clinicians.

  2. The ice-saline-Xylocaine technique. A simple method for minimizing pain in obtaining local anesthesia.

    PubMed

    Swinehart, J M

    1992-01-01

    Prior to skin surgery, localized cryoanesthesia is initially obtained utilizing Cryogel packs before local anesthesia injection, minimizing or abolishing pain from the piercing of the skin by the injection needle. The surgical field is then infiltrated with benzyl alcohol-containing normal saline, a painless solution producing moderate local anesthesia. Subsequently, a stronger anesthetic containing a vasoconstrictor or other desired additives can be infiltrated without significant patient discomfort. This simple three-step method has resulted in excellent patient acceptance, and is potentially useful for a wide range of surgical procedures and medical specialties.

  3. Correlation imaging method based on local wavenumber for interpreting magnetic data

    NASA Astrophysics Data System (ADS)

    Ma, Guoqing; Liu, Cai; Xu, Jiashu; Meng, Qingfa

    2017-03-01

    Depth estimation is a common task in the interpretation of magnetic data, and the local wavenumber is an effective tool for this purpose, but computing source depth with this method requires the structural index of the causative source, which is hard to obtain for an unknown area. In this paper, we suggest a correlation imaging method for interpreting magnetic data, which uses the correlation coefficient between the local wavenumber of the real magnetic data and the transformed local wavenumber of synthetic magnetic data generated by an assumed source to estimate the location of the source; this method does not require any prior information about the source and does not require solving any matrix system. The computation proceeds as follows: first, we assume that the causative sources are distributed regularly on a rectangular grid; then, we separately compute the correlation coefficient between the local wavenumber of the real data and the local wavenumber of the anomaly generated by each assumed source; the correlation coefficient reaches its maximum when the location parameters of the assumed source agree with the true locations of the real sources. Synthetic tests show that this method can obtain the location of a magnetic source effectively and correctly, and is insensitive to magnetization direction and noise. The method is also applied to measured magnetic data, yielding the location parameters of the source.
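
    The scanning step can be sketched as below: for each trial source position a synthetic anomaly is generated by a forward model, its local wavenumber is computed, and the Pearson correlation with the local wavenumber of the observed data is recorded; the position with the highest coefficient is retained. forward_model() and local_wavenumber() are hypothetical helpers standing in for the paper's specific definitions.

      # Illustrative correlation-imaging scan over a grid of assumed source positions.
      import numpy as np

      def correlation_imaging(observed, grid_positions, forward_model, local_wavenumber):
          k_obs = local_wavenumber(observed).ravel()
          scores = np.empty(len(grid_positions))
          for i, pos in enumerate(grid_positions):
              k_syn = local_wavenumber(forward_model(pos)).ravel()
              scores[i] = np.corrcoef(k_obs, k_syn)[0, 1]     # Pearson correlation coefficient
          best = grid_positions[np.argmax(scores)]            # most likely source location
          return best, scores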

  4. Laser-optical method of visualization the local net of tissue blood vessels and its biomedical applications

    NASA Astrophysics Data System (ADS)

    Asimov, M. M.; Asimov, R. M.; Rubinov, A. N.

    2007-06-01

    A new approach to laser-optical diagnostics of cell metabolism based on visualization of the local network of tissue blood vessels is proposed. An optical model of laser-tissue interaction and an algorithm for the mathematical calculation of optical signals are developed. A novel technology for eliminating local tissue hypoxia, based on laser-induced photodissociation of oxyhemoglobin in cutaneous blood vessels, is developed. A method for determining the oxygen diffusion coefficient into tissue from the kinetics of tissue oxygenation (TcPO2) under laser irradiation is proposed. The results of mathematical modeling of the kinetics of oxygen distribution into tissue from arterial blood are presented. The possibility of calculating and determining the level of TcPO2 in zones with disturbed blood microcirculation is demonstrated. An increase in the oxygen release rate of more than four times under irradiation by laser light is obtained. It is shown that the efficiency of laser-induced oxygenation by means of increasing the oxygen concentration in blood plasma is comparable with that of hyperbaric oxygenation (HBO), while gaining the advantage of local action. Different biomedical applications of the developed method are discussed.

  5. A modeling approach of the influence of local hydrodynamic conditions on larval dispersal at hydrothermal vents.

    PubMed

    Bailly-Bechet, Marc; Kerszberg, Michel; Gaill, Françoise; Pradillon, Florence

    2008-12-07

    Deep-sea hydrothermal vent animal communities along oceanic ridges are both patchy and transient. Larval dispersal is a key factor in understanding how these communities function and are maintained over generations. To date, numerical approaches simulating larval dispersal considered the effect of oceanic currents on larval transportation over hundreds of kilometers but very seldom looked at the effect of local conditions within meters around chimneys. However, small scale significant variations in the hydrodynamics may influence larval fate in its early stages after release, and hence have a knock-on effect on both dispersal and colonization processes. Here we present a new numerical approach to the study of larval dispersal, considering small scales within the range of the biological communities, called "bio-hydrodynamical" scale, and ranging from a few centimeters to a few meters around hydrothermal sources. We use a physical model for the vent based on jet theory and compute the turbulent velocity field around the smoker. Larvae are considered as passive particles whose trajectories are affected by hydrodynamics, topography of the vent chimney and larval biological properties. Our model predicts that bottom currents often dominate all other factors either by entraining all larvae away from the vent or enforcing strong colonization rates. When bottom currents are very slow (<1 mm s⁻¹), general larvae motion is upwards due to entrainment by the main smoker jet. In this context, smokers with vertical slopes favor retention of larvae because larval initial trajectory is nearly parallel to the smoker wall, which increases the chances to settle. This retention phenomenon is intensified with increasing velocity of the main smoker jet because entrainment in the high velocity plume is preceded by a phase when larvae are attracted towards the smoker wall, which occurs earlier with higher velocity of the main jet. Finally, the buoyancy rate of the larvae, measured to be

  6. Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators.

    PubMed

    Nguyen, Minh Phuong; Chun, Se Young

    2017-04-01

    A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. NLM filters have been shown to deliver powerful denoising performance and excellent detail preservation by averaging many noisy pixels with appropriately chosen weights. The NLM weights between two different pixels are determined based on the similarities between the two patches that surround these pixels and a smoothing parameter. Another important factor that influences the denoising performance is the self-weight value for the same pixel. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods when determining the contribution of the center pixels in the NLM filter. However, the LJS method may result in excessively large self-weight estimates since no upper bound is assumed, and the method uses a relatively large local area for estimating the self-weights, which may lead to a strong bias. In this paper, we investigated these issues in the LJS method, and then propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP) based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated using a wide range of natural images and a clinical MRI image together with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts when compared with the results of the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that can yield PSNR values close to the optimal values.
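
    The sketch below is a plain NLM filter that also caps the center (self) weight, the issue the paper addresses; here the cap is simply the largest weight assigned to any other pixel, which is a common heuristic and only an illustrative stand-in for the proposed LMM-DB and LMM-RP minimax estimators.

      # Minimal non-local means with a bounded self-weight (illustrative heuristic).
      import numpy as np

      def nlm_denoise(img, patch=3, search=7, h=0.1):
          img = np.asarray(img, dtype=float)
          pr, sr = patch // 2, search // 2
          pad = sr + pr
          padded = np.pad(img, pad, mode="reflect")
          out = np.zeros_like(img)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  ci, cj = i + pad, j + pad
                  ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                  weights, values = [], []
                  for di in range(-sr, sr + 1):
                      for dj in range(-sr, sr + 1):
                          if di == 0 and dj == 0:
                              continue
                          qi, qj = ci + di, cj + dj
                          patch_q = padded[qi - pr:qi + pr + 1, qj - pr:qj + pr + 1]
                          d2 = np.mean((ref - patch_q) ** 2)        # patch similarity
                          weights.append(np.exp(-d2 / h ** 2))
                          values.append(padded[qi, qj])
                  weights.append(max(weights))                       # bounded self-weight
                  values.append(padded[ci, cj])
                  w = np.asarray(weights)
                  out[i, j] = np.dot(w / w.sum(), values)
          return out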

  7. Improving Signal-to-Noise Ratio in Susceptibility Weighted Imaging: A Novel Multicomponent Non-Local Approach

    PubMed Central

    Borrelli, Pasquale; Palma, Giuseppe; Tedeschi, Enrico; Cocozza, Sirio; Comerci, Marco; Alfano, Bruno; Haacke, E. Mark; Salvatore, Marco

    2015-01-01

    In susceptibility-weighted imaging (SWI), the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data. PMID:26030293

  8. Classical convergence versus Zipf rank approach: Evidence from China's local-level data

    NASA Astrophysics Data System (ADS)

    Tang, Pan; Zhang, Ying; Baaquie, Belal E.; Podobnik, Boris

    2016-02-01

    This paper applies the Zipf rank approach to measure how long it will take for individual economies to reach the final state of equilibrium, using local-level data for China's urban areas. The indicators, the gross domestic product (GDP) per capita and the market capitalization (MCAP) per capita of 150 major cities in China, are used for analyzing their convergence. In addition, the power-law relationship is examined for GDP and MCAP. Our findings show that, compared to the classical approaches of β-convergence and σ-convergence, the Zipf ranking predicts that, in approximately 16 years, all the major cities in China will reach comparable values of GDP per capita. However, the MCAP per capita tends to follow the periodic fluctuation of the economic cycle, while the mean logarithmic deviation (MLD) confirms the results of our study. Moreover, GDP per capita and MCAP per capita follow a power law with an average value of α = 0.41, which is higher than the α = 0.38 obtained based on a large number of countries around the world.
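
    The rank-based exponent estimate can be sketched as a log-log regression of value against rank, as below. The data array is a placeholder; the study applies this to GDP and market-capitalization per capita of 150 Chinese cities.

      # Illustrative Zipf-rank estimate of a power-law exponent.
      import numpy as np

      def zipf_exponent(values):
          v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
          ranks = np.arange(1, len(v) + 1)
          slope, _ = np.polyfit(np.log(ranks), np.log(v), 1)   # log-log regression
          return -slope                                        # Zipf/power-law exponent

      # alpha = zipf_exponent(gdp_per_capita)   # hypothetical array of city values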

  9. Weak localization and the approach to metal-insulator transition in single crystalline germanium nanowires.

    PubMed

    Sett, Shaili; Das, K; Raychaudhuri, A K

    2017-03-22

    We study the low-temperature electronic transport properties of single germanium nanowires (NWs) with diameters down to 45 nm to investigate the weak localization (WL) behavior and approach to metal-insulator transition (MIT) within them. The NWs (single crystalline) we investigate lie on the metallic side of the MIT with an extrapolated zero temperature conductivity σ₀ in the range 23 to 1790 (Ω cm)⁻¹ and show a temperature-dependent conductivity which below 30 K can be described by a 3D WL behavior with Thouless length L_Th ∼ T^(−p/2) and p ∼ 4. From the observed value of σ₀ and the value of the critical carrier concentration n_c, it is observed that the approach to MIT can be described by the scaling equation σ₀ ∼ (n − n_c)^ν with ν ≈ 0.6, which is a value expected for an uncompensated system. The investigation establishes a NW size limit for the applicability of 3D scaling theories.

  10. Weak localization and the approach to metal–insulator transition in single crystalline germanium nanowires

    NASA Astrophysics Data System (ADS)

    Sett, Shaili; Das, K.; Raychaudhuri, A. K.

    2017-03-01

    We study the low-temperature electronic transport properties of single germanium nanowires (NWs) with diameters down to 45 nm to investigate the weak localization (WL) behavior and approach to metal–insulator transition (MIT) within them. The NWs (single crystalline) we investigate lie on the metallic side of the MIT with an extrapolated zero temperature conductivity σ₀ in the range 23 to 1790 (Ω cm)⁻¹ and show a temperature-dependent conductivity which below 30 K can be described by a 3D WL behavior with Thouless length L_Th ∼ T^(−p/2) and p ∼ 4. From the observed value of σ₀ and the value of the critical carrier concentration n_c, it is observed that the approach to MIT can be described by the scaling equation σ₀ ∼ (n − n_c)^ν with ν ≈ 0.6, which is a value expected for an uncompensated system. The investigation establishes a NW size limit for the applicability of 3D scaling theories.

  11. Interaction-induced local moments in parallel quantum dots within the functional renormalization group approach

    NASA Astrophysics Data System (ADS)

    Protsenko, V. S.; Katanin, A. A.

    2016-11-01

    We propose a version of the functional renormalization-group (fRG) approach, which is, due to including Litim-type cutoff and switching off (or reducing) the magnetic field during fRG flow, capable of describing a singular Fermi-liquid (SFL) phase, formed due to the presence of local moments in quantum dot structures. The proposed scheme allows one to describe the first-order quantum phase transition from the "singular" to the "regular" paramagnetic phase with applied gate voltage to parallel quantum dots, symmetrically coupled to leads, and shows sizable spin splitting of electronic states in the SFL phase in the limit of vanishing magnetic field H →0 ; the calculated conductance shows good agreement with the results of the numerical renormalization group. Using the proposed fRG approach with the counterterm, we also show that for asymmetric coupling of the leads to the dots the SFL behavior similar to that for the symmetric case persists, but with occupation numbers, effective energy levels, and conductance changing continuously through the quantum phase transition into the SFL phase.

  12. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    SciTech Connect

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-07-15

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of imaging x-ray. By effectively taking consideration of projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a ''plug-and-play'' fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
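
    The core Bayesian step can be sketched as follows: the kV projection confines the tumor to a ray through the detected image position, and a Gaussian prior built from the setup projections supplies the unresolved depth, so the maximum a posteriori position is the point on that ray that maximizes the prior. The geometry handling and prior parameters here are illustrative assumptions, not the authors' implementation.

      # MAP resolution of the unresolved depth along the kV imaging ray.
      import numpy as np

      def map_3d_position(ray_point, ray_dir, prior_mean, prior_cov):
          """Maximize a Gaussian prior along the measurement ray x = p + t*d."""
          d = ray_dir / np.linalg.norm(ray_dir)
          P = np.linalg.inv(prior_cov)
          # minimize (p + t d - mu)^T P (p + t d - mu) over the scalar t
          t = (d @ P @ (prior_mean - ray_point)) / (d @ P @ d)
          return ray_point + t * d                      # MAP 3D tumor position

      # prior_mean / prior_cov would be estimated from 2D tumor positions collected
      # during patient setup; ray_point / ray_dir come from the kV imaging geometry.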

  13. Local Discontinuous Galerkin Methods for Partial Differential Equations with Higher Order Derivatives

    NASA Technical Reports Server (NTRS)

    Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    In this paper we review the existing and develop new local discontinuous Galerkin methods for solving time-dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection-diffusion equations involving second derivatives and for KdV-type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for time-dependent biharmonic-type equations involving fourth derivatives, and for partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L² stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, applied to the local discontinuous Galerkin methods for equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher-derivative cases, effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, with a locally uniform mesh.

  14. Detecting sincerity of effort: a summary of methods and approaches.

    PubMed

    Lechner, D E; Bradbury, S F; Bradley, L A

    1998-08-01

    Despite the widespread use of methods that are supposed to detect the sincerity of patients' efforts in clinical assessment, little has been written summarizing the literature that addresses the reliability and validity of measurements obtained with these methods. The purpose of this article is to review the literature on the reliability and validity of scores for Waddell's nonorganic signs, descriptions of pain behavior and symptom magnification, coefficients of variation, correlations between musculoskeletal evaluation and function, grip measurements, and the relationship between heart rate and pain intensity. The authors of the articles reviewed conclude that none of these methods have been examined adequately. Some of these methods, such as Waddell's nonorganic signs, were not developed for the purpose of detecting sincerity of effort. Clinicians are encouraged to critically read the literature addressing these methods. With further research, some of the discussed methods may prove useful. Until such research is reported in the peer-reviewed literature, however, clinicians should avoid basing evaluation of sincerity of effort on these tests. Therapists are encouraged, instead, to use a biobehavioral approach to better understand and address the complex factors underlying delayed recovery.

  15. Extracting structural information from the OH and CH stretch spectral regions with a local mode approach

    NASA Astrophysics Data System (ADS)

    Tabor, Daniel P.

    This thesis focuses on the development and application of a reduced-dimensional local mode approach to the calculation of the infrared spectra of molecules and clusters. The basic properties of infrared spectra can often be understood in the context of the harmonic oscillator/linear dipole approximation. However, the spectra of the molecules and clusters of interest in this study contain additional complications due to stretch-bend Fermi resonances. The presence of these resonances makes the assignment of vibrational spectra to particular isomers or conformers much more difficult. By using a reduced-dimensional local mode approach, we are able to incorporate the important anharmonic terms in an efficient manner and accurately model the spectra with only modest additional costs. The first part of this thesis is a detailed study on the CH stretch region vibrational spectroscopy of a series of molecules with alkyl and alkoxy groups. The conclusions of this study formed the foundation for the construction of the model for the rest of the molecules in this thesis. The approach is shown to model all of the major features of short alkylbenzenes. The second part investigates the interaction of a benzene molecule with a cluster of water molecules in the gas phase. An understanding of these structures provides a framework for understanding the solvation structure of benzene in water. Using the model Hamiltonian, we are able to make definitive assignments of the structures of benzene complexed with both six and seven water molecules based on their infrared spectra in the OH stretch region. For both clusters, the assigned structures show a fundamental change in the structure of the water network, illustrating the strong impact that a benzene molecule can have on the structure of water. Finally, we employ the model to investigate the structure and spectroscopy of longer alkylbenzene chains, alkylbenzyl radicals, and water clusters solvated with other molecules. This series of

  16. Robust statistical approaches for local planar surface fitting in 3D laser scanning data

    NASA Astrophysics Data System (ADS)

    Nurunnabi, Abdul; Belton, David; West, Geoff

    2014-10-01

    This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points with 20% outliers representing a plane. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (angle between the fitted planes with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature
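
    For reference, the non-robust baseline the paper compares against can be written in a few lines: a total-least-squares plane fit in which the plane normal is the eigenvector of the local covariance with the smallest eigenvalue. The robust variants (DetRD-PCA, DetRPCA) replace this covariance with a high-breakdown estimate, which is not reproduced in this sketch.

      # Minimal PCA (total-least-squares) plane fit for a local point neighborhood.
      import numpy as np

      def fit_plane_pca(points):
          """points: (n, 3) array; returns (unit normal, centroid)."""
          centroid = points.mean(axis=0)
          cov = np.cov((points - centroid).T)
          eigvals, eigvecs = np.linalg.eigh(cov)
          normal = eigvecs[:, 0]                 # eigenvector of the smallest eigenvalue
          return normal / np.linalg.norm(normal), centroid

      # With 20% outliers this estimate can be strongly biased, which motivates the
      # robust covariance / robust PCA alternatives evaluated in the paper.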

  17. Continuum modeling using granular micromechanics approach: Method development and applications

    NASA Astrophysics Data System (ADS)

    Poorsolhjouy, Payam

    This work presents a constitutive modeling approach for the behavior of granular materials. In the granular micromechanics approach presented here, the material point is assumed to be composed of grains interacting with their neighbors through different inter-granular mechanisms that represent material's macroscopic behavior. The present work focuses on (i) developing the method for modeling more complicated material systems as well as more complicated loading scenarios and (ii) applications of the method for modeling various granular materials and granular assemblies. A damage-plasticity model for modeling cementitious and rock-like materials is developed, calibrated, and verified in a thermo-mechanically consistent manner. Grain-pair interactions in normal tension, normal compression, and tangential directions have been defined in a manner that is consistent with the material's macroscopic behavior. The resulting model is able to predict, among other interesting issues, the effects of loading induced anisotropy. Material's response to loading will depend on the loading history of grain-pair interactions in different directions. Thus the model predicts load-path dependent failure. Due to the inadequacies of first gradient continuum theories in predicting phenomena such as shear band width, wave dispersion, and frequency band-gap, the presented method is enhanced by incorporation of non-classical terms in the kinematic analysis. A complete micromorphic theory is presented by incorporating additional terms such as fluctuations, second gradient terms, and spin fields. Relative deformation of grain-pairs is calculated based on the enhanced kinematic analysis. The resulting theory incorporates the deformation and forces in grain-pair interactions due to different kinematic measures into the macroscopic behavior. As a result, non-classical phenomena such as wave dispersion and frequency band-gaps can be predicted. Using the grain-scale analysis, a practical approach for

  18. First-Principles-Based Method for Electron Localization: Application to Monolayer Hexagonal Boron Nitride.

    PubMed

    Ekuma, C E; Dobrosavljević, V; Gunlycke, D

    2017-03-10

    We present a first-principles-based many-body typical medium dynamical cluster approximation and density functional theory method for characterizing electron localization in disordered structures. This method applied to monolayer hexagonal boron nitride shows that the presence of boron vacancies could turn this wide-gap insulator into a correlated metal. Depending on the strength of the electron interactions, these calculations suggest that conduction could be obtained at a boron vacancy concentration as low as 1.0%. We also explore the distribution of the local density of states, a fingerprint of spatial variations, which allows localized and delocalized states to be distinguished. The presented method enables the study of disorder-driven insulator-metal transitions not only in h-BN but also in other physical materials.

  19. First-Principles-Based Method for Electron Localization: Application to Monolayer Hexagonal Boron Nitride

    NASA Astrophysics Data System (ADS)

    Ekuma, C. E.; Dobrosavljević, V.; Gunlycke, D.

    2017-03-01

    We present a first-principles-based many-body typical medium dynamical cluster approximation and density functional theory method for characterizing electron localization in disordered structures. This method applied to monolayer hexagonal boron nitride shows that the presence of boron vacancies could turn this wide-gap insulator into a correlated metal. Depending on the strength of the electron interactions, these calculations suggest that conduction could be obtained at a boron vacancy concentration as low as 1.0%. We also explore the distribution of the local density of states, a fingerprint of spatial variations, which allows localized and delocalized states to be distinguished. The presented method enables the study of disorder-driven insulator-metal transitions not only in h-BN but also in other physical materials.

  20. A robust line matching method based on local appearance descriptor and neighboring geometric attributes

    NASA Astrophysics Data System (ADS)

    Xing, Jing; Wei, Zhenzhong; Zhang, Guangjun

    2016-10-01

    This paper reports an efficient method for line matching, which utilizes local intensity gradient information and neighboring geometric attributes. Lines are detected in a multi-scale way to make the method robust to scale changes. A descriptor based on local appearance is built to generate candidate matching pairs. The key idea is to accumulate intensity gradient information into histograms based on their intensity orders to overcome the fragmentation problem of lines. Besides, local coordinate system is built for each line to achieve rotation invariance. For each line segment in candidate matching pairs, a histogram is built by aggregating geometric attributes of neighboring line segments. The final matching measure derives from the distance between normalized geometric attributes histograms. Experiments show that the proposed method is robust to large illumination changes and is rotation invariant.

  1. A full digital approach to the TDCR method.

    PubMed

    Mini, Giuliano; Pepe, Francesco; Tintori, Carlo; Capogni, Marco

    2014-05-01

    Current state-of-the-art solutions based on the Triple-to-Double Coincidence Ratio (TDCR) method are generally large, heavy, non-transportable systems. This is due, on one side, to large detectors and scintillation chambers and, on the other, to bulky analog electronics for data acquisition. CAEN developed a new, fully digital approach to the TDCR technique based on a portable, stand-alone, high-speed multichannel digitizer, on-board Digital Pulse Processing, and dedicated DAQ software that emulates the well-known MAC3 analog board.

  2. A new method for matched field localization based on two-hydrophone

    NASA Astrophysics Data System (ADS)

    Li, Kun; Fang, Shi-liang

    2015-03-01

    Conventional matched field processing (MFP) uses large vertical arrays to locate an underwater acoustic target. However, the use of large vertical arrays increases equipment and computational cost, and causes problems such as element failures and array tilting that degrade the localization performance. In this paper, a matched field localization method using two hydrophones is proposed for underwater acoustic pulse signals with an unknown emitted signal waveform. Using the received signals of the hydrophones and the ocean channel impulse response, which can be calculated from an acoustic propagation model, the spectral matrix of the emitted signal for different source locations can be estimated by the method of frequency-domain least squares. The resulting spectral matrix of the emitted signal for every grid region is then multiplied by the ocean channel frequency response matrix to generate the spectral matrix of the replica signal. Finally, matched field localization using two hydrophones for underwater acoustic pulse signals with an unknown emitted waveform can be performed by comparing the difference between the spectral matrices of the received signal and the replica signal. The simulated results from a shallow water environment for broadband signals demonstrate the significant localization performance of the proposed method. In addition, the localization accuracy in five different cases is analyzed by the simulation trial, and the results show that the proposed method has a sharp peak and low sidelobes, overcoming the problem of high sidelobes in conventional MFP due to the limited number of elements.
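
    The procedure can be sketched as follows: for each trial source position the unknown emitted spectrum is estimated by frequency-domain least squares from the modeled channel responses of the two hydrophones, the replica spectra are rebuilt, and the residual against the measurements is scored. The channel_response() helper is a hypothetical stand-in for an acoustic propagation model.

      # Two-hydrophone matched-field localization sketch with least-squares source spectrum.
      import numpy as np

      def two_hydrophone_mfp(P_meas, candidates, channel_response, freqs):
          """P_meas: (n_freq, 2) measured spectra; channel_response(pos, f) -> (2,) complex."""
          mismatch = np.empty(len(candidates))
          for ci, pos in enumerate(candidates):
              err = 0.0
              for fi, f in enumerate(freqs):
                  h = channel_response(pos, f)              # modeled channel frequency response
                  p = P_meas[fi]
                  s_hat = np.vdot(h, p) / np.vdot(h, h)     # least-squares source spectrum estimate
                  err += np.sum(np.abs(p - s_hat * h) ** 2) # replica-vs-measurement residual
              mismatch[ci] = err
          return candidates[np.argmin(mismatch)], mismatch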

  3. Ethics, Collaboration, and Presentation Methods for Local and Traditional Knowledge for Understanding Arctic Change

    NASA Astrophysics Data System (ADS)

    Parsons, M. A.; Gearheard, S.; McNeave, C.

    2009-12-01

    Local and traditional knowledge (LTK) provides rich information about the Arctic environment at spatial and temporal scales that scientific knowledge often does not have access to (e.g. localized observations of fine-scale ecological change potentially from many different communities, or local sea ice and conditions prior to 1950s ice charts and 1970s satellite records). Community-based observations and monitoring are an opportunity for Arctic residents to provide ‘frontline’ observations and measurements that are an early warning system for Arctic change. The Exchange for Local Observations and Knowledge of the Arctic (ELOKA) was established in response to the growing number of community-based and community-oriented research and observation projects in the Arctic. ELOKA provides data management and user support to facilitate the collection, preservation, exchange, and use of local observations and knowledge. Managing these data presents unique ethical challenges in terms of appropriate use of rare human knowledge and ensuring that knowledge is not lost from the local communities and not exploited in ways antithetical to community culture and desires. Local Arctic residents must be engaged as true collaborative partners while respecting their perspectives, which may vary substantially from a western science perspective. At the same time, we seek to derive scientific meaning from the local knowledge that can be used in conjunction with quantitative science data. This creates new challenges in terms of data presentation, knowledge representations, and basic issues of metadata. This presentation reviews these challenges, some initial approaches to addressing them, and overall lessons learned and future directions.

  4. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Summary: Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other
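
    For orientation, a bare-bones DE/rand/1/bin loop of the kind compared against Levenberg-Marquardt is sketched below; the objective would be the forecasting error of the multilayer perceptron as a function of its weight vector. The population size, F, and CR are common defaults, not the settings used in the study.

      # Minimal Differential Evolution (DE/rand/1/bin) sketch.
      import numpy as np

      def differential_evolution(objective, dim, pop=40, gens=200, F=0.5, CR=0.9, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(-1.0, 1.0, (pop, dim))
          fit = np.array([objective(x) for x in X])
          for _ in range(gens):
              for i in range(pop):
                  a, b, c = X[rng.choice([k for k in range(pop) if k != i], 3, replace=False)]
                  mutant = a + F * (b - c)                     # rand/1 mutation
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True              # ensure at least one gene crosses over
                  trial = np.where(cross, mutant, X[i])
                  f_trial = objective(trial)
                  if f_trial <= fit[i]:                        # greedy selection
                      X[i], fit[i] = trial, f_trial
          return X[np.argmin(fit)], fit.min()

      # best_w, best_err = differential_evolution(lambda w: mlp_rmse(w, inputs, targets), dim=n_weights)
      # (mlp_rmse is a hypothetical helper returning the network's forecast error for weight vector w)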

  5. A formative multi-method approach to evaluating training.

    PubMed

    Hayes, Holly; Scott, Victoria; Abraczinskas, Michelle; Scaccia, Jonathan; Stout, Soma; Wandersman, Abraham

    2016-10-01

    This article describes how we used a formative multi-method evaluation approach to gather real-time information about the processes of a complex, multi-day training with 24 community coalitions in the United States. The evaluation team used seven distinct, evaluation strategies to obtain evaluation data from the first Community Health Improvement Leadership Academy (CHILA) within a three-prong framework (inquiry, observation, and reflection). These methods included: comprehensive survey, rapid feedback form, learning wall, observational form, team debrief, social network analysis and critical moments reflection. The seven distinct methods allowed for both real time quality improvement during the CHILA and long term planning for the next CHILA. The methods also gave a comprehensive picture of the CHILA, which when synthesized allowed the evaluation team to assess the effectiveness of a training designed to tap into natural community strengths and accelerate health improvement. We hope that these formative evaluation methods can continue to be refined and used by others to evaluate training.

  6. Staffing levels in rural nursing homes: a mixed methods approach.

    PubMed

    Towsley, Gail L; Beck, Susan L; Dudley, William N; Pepper, Ginette A

    2011-07-01

    This mixed methods study used multiple regression analyses to examine the impact of organizational and market characteristics on staffing hours and staffing mix, and qualitative interview to explore the challenges and facilitators of recruiting and retaining qualified staff. Rural nursing homes (NHs) certified by Medicare or Medicaid (N = 161) were sampled from the Online Survey Certification and Reporting system. A subsample (n = 23) was selected purposively for the qualitative analysis. Smaller NHs or government-affiliated homes had more total nursing hours per resident day and more hours of care by certified nursing assistants and RNs than larger and nongovernment-affiliated homes; however, almost 87% of NHs in this study were below the national recommendation for RN hours. Informants voiced challenges related to enough staff, qualified staff, and training staff. Development of nursing resources is critical, especially in rural locales where aging resources may not be well developed.

  7. Partially Strong Transparency Conditions and a Singular Localization Method In Geometric Optics

    NASA Astrophysics Data System (ADS)

    Lu, Yong; Zhang, Zhifei

    2016-10-01

    This paper focuses on the stability analysis of WKB approximate solutions in geometric optics with the absence of strong transparency conditions under the terminology of Joly, Métivier and Rauch. We introduce a compatible condition and a singular localization method which allows us to prove the stability of WKB solutions over long time intervals. This compatible condition is weaker than the strong transparency condition. The singular localization method allows us to do delicate analysis near resonances. As an application, we show the long time approximation of Klein-Gordon equations by Schrödinger equations in the non-relativistic limit regime.

  8. Hybridization of evolutionary algorithms and local search by means of a clustering method.

    PubMed

    Martínez-Estudillo, Alfonso C; Hervás-Martínez, César; Martínez-Estudillo, Francisco J; García-Pedrajas, Nicolás

    2006-06-01

    This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear-regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient in fine-tuning the solution. This drawback is usually avoided by means of local optimization algorithms that are applied to the individuals of the population. The algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that the clustering process enables the creation of groups (clusters) with mutually close points that hopefully correspond to relevant regions of attraction. Local-search procedures can then be started once in every such region. This paper proposes the combination of an EA, a clustering process, and a local-search procedure for the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization. Moreover, the local optimization algorithm is only applied at specific stages of the evolutionary process. Our results show a favorable performance when the regression method proposed is compared to other standard methods.
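
    The clustering-based hybridization can be sketched as follows: the EA population is clustered so that each cluster roughly marks a basin of attraction, and a local search is launched only from the best individual of each cluster. KMeans and Nelder-Mead are illustrative choices here, not necessarily those of the paper.

      # Cluster the EA population and refine one representative per cluster.
      import numpy as np
      from scipy.optimize import minimize
      from sklearn.cluster import KMeans

      def cluster_and_refine(population, fitness, objective, n_clusters=5):
          """population: (n, dim) EA individuals; fitness: (n,) objective values."""
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(population)
          refined = []
          for c in range(n_clusters):
              idx = np.where(labels == c)[0]
              if idx.size == 0:
                  continue
              seed = population[idx[np.argmin(fitness[idx])]]          # best individual in this cluster
              res = minimize(objective, seed, method="Nelder-Mead")    # local search in this basin
              refined.append((res.fun, res.x))
          best = min(refined, key=lambda t: t[0])
          return best                                                  # (best value, refined solution)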

  9. Segmentation of locally varying numbers of outer retinal layers by a model selection approach.

    PubMed

    Novosel, Jelena; Yzer, Suzanne; Vermeer, Koenraad; Van Vliet, Lucas

    2017-02-08

    Extraction of image-based biomarkers, such as the presence, visibility or thickness of a certain layer, from 3D optical coherence tomography data provides relevant clinical information. We present a method to simultaneously determine the number of visible layers in the outer retina and segment them. The method is based on a model selection approach with special attention given to the balance between the quality of a fit and model complexity. This ensures that a more complex model is selected only if it is sufficiently supported by the data. The performance of the method was evaluated on healthy and retinitis pigmentosa (RP)-affected eyes. Additionally, the reproducibility of the automatic method and of manual annotations was evaluated on healthy eyes. A good agreement was found between the segmentation performed manually by a medical doctor and the results obtained from the automatic segmentation. The mean unsigned deviation for all outer retinal layers in healthy and RP-affected eyes varied between 2.6 and 4.9 μm. The reproducibility of the automatic method was similar to the reproducibility of the manual segmentation. Overall, the method provides a flexible and accurate solution for determining the visibility and location of outer retinal layers and could be used as an aid for disease diagnosis and monitoring.
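
    The model-selection idea can be illustrated with a simple complexity-penalized criterion: candidate models with increasing numbers of layers are fitted to an intensity profile and the one with the best penalized score is kept, so a more complex model wins only when the data clearly support it. The piecewise-constant fit and the BIC penalty below are simplifications assumed for illustration, not the paper's likelihood.

      # Complexity-penalized choice of the number of visible layers (illustrative BIC).
      import numpy as np

      def piecewise_fit_sse(profile, n_segments):
          """Equal-length split; returns sum of squared errors of segment means."""
          parts = np.array_split(profile, n_segments)
          return sum(np.sum((p - p.mean()) ** 2) for p in parts)

      def select_n_layers(profile, max_segments=6):
          n = len(profile)
          best_k, best_bic = 1, np.inf
          for k in range(1, max_segments + 1):
              sse = piecewise_fit_sse(profile, k)
              bic = n * np.log(sse / n + 1e-12) + k * np.log(n)   # fit quality + complexity penalty
              if bic < best_bic:
                  best_k, best_bic = k, bic
          return best_k     # number of segments (visible layers) supported by the data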

  10. Limitations in the spectral method for graph partitioning: Detectability threshold and localization of eigenvectors.

    PubMed

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2015-06-01

    Investigating the performance of different methods is a fundamental problem in graph partitioning. In this paper, we estimate the so-called detectability threshold for the spectral method with both un-normalized and normalized Laplacians in sparse graphs. The detectability threshold is the critical point at which the result of the spectral method is completely uncorrelated to the planted partition. We also analyze whether the localization of eigenvectors affects the partitioning performance in the detectable region. We use the replica method, which is often used in the field of spin-glass theory, and focus on the case of bisection. We show that the gap between the estimated threshold for the spectral method and the threshold obtained from Bayesian inference is considerable in sparse graphs, even without eigenvector localization. This gap closes in a dense limit.
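
    A minimal spectral bisection of the kind analyzed here partitions a graph by the sign of the second eigenvector (the Fiedler vector) of the normalized Laplacian, as sketched below; near the detectability threshold in sparse graphs this split becomes uncorrelated with the planted partition.

      # Spectral bisection with the normalized Laplacian.
      import numpy as np
      import networkx as nx

      def spectral_bisection(G):
          A = nx.to_numpy_array(G)
          deg = A.sum(axis=1)
          d_inv_sqrt = np.zeros_like(deg)
          nz = deg > 0
          d_inv_sqrt[nz] = deg[nz] ** -0.5
          L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]  # normalized Laplacian
          eigvals, eigvecs = np.linalg.eigh(L)
          fiedler = eigvecs[:, 1]                       # eigenvector of the second-smallest eigenvalue
          return fiedler >= 0                           # boolean group labels

      # G = nx.planted_partition_graph(2, 200, 0.05, 0.01, seed=1)
      # labels = spectral_bisection(G)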

  11. Limitations in the spectral method for graph partitioning: Detectability threshold and localization of eigenvectors

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2015-06-01

    Investigating the performance of different methods is a fundamental problem in graph partitioning. In this paper, we estimate the so-called detectability threshold for the spectral method with both un-normalized and normalized Laplacians in sparse graphs. The detectability threshold is the critical point at which the result of the spectral method is completely uncorrelated to the planted partition. We also analyze whether the localization of eigenvectors affects the partitioning performance in the detectable region. We use the replica method, which is often used in the field of spin-glass theory, and focus on the case of bisection. We show that the gap between the estimated threshold for the spectral method and the threshold obtained from Bayesian inference is considerable in sparse graphs, even without eigenvector localization. This gap closes in a dense limit.

  12. Travel time calculation in regular 3D grid in local and regional scale using fast marching method

    NASA Astrophysics Data System (ADS)

    Polkowski, M.

    2015-12-01

    Local and regional 3D seismic velocity models of crust and sediments are very important for numerous techniques such as mantle and core tomography, localization of local and regional events, and others. Most of those techniques require the calculation of wave travel time through the 3D model. This can be achieved using multiple approaches, from simple ray tracing to advanced full-waveform calculation. In this study a simple and efficient implementation of the fast marching method is presented. This method provides more information than ray tracing and is much less complicated than full-waveform methods, making it a good compromise. The presented code is written in C++, is well commented, and is easy to modify for different types of studies. Additionally, performance is discussed in detail, including possibilities for multithreading and massive parallelism such as GPU computing. The source code will be published in 2016 as part of the PhD thesis. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
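
    A compact 2D illustration of the fast marching update (first-order upwind, wavefront expanded from a heap of trial nodes) is given below; the study's 3D C++ implementation follows the same logic. The grid, spacing, and velocity field are placeholders.

      # 2D fast marching: first-arrival travel times from a point source on a regular grid.
      import heapq
      import numpy as np

      def fast_marching_2d(velocity, src, h=1.0):
          ny, nx = velocity.shape
          T = np.full((ny, nx), np.inf)
          known = np.zeros((ny, nx), dtype=bool)
          T[src] = 0.0
          heap = [(0.0, src)]
          while heap:
              t, (i, j) = heapq.heappop(heap)
              if known[i, j]:
                  continue
              known[i, j] = True
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < ny and 0 <= nj < nx and not known[ni, nj]:
                      tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                               T[ni, nj + 1] if nj < nx - 1 else np.inf)
                      ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                               T[ni + 1, nj] if ni < ny - 1 else np.inf)
                      f = h / velocity[ni, nj]                      # local slowness times grid spacing
                      a, b = sorted((tx, ty))
                      if b - a < f:                                 # two-sided (quadratic) upwind update
                          t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                      else:                                         # one-sided update from the smaller neighbor
                          t_new = a + f
                      if t_new < T[ni, nj]:
                          T[ni, nj] = t_new
                          heapq.heappush(heap, (t_new, (ni, nj)))
          return T

      # times = fast_marching_2d(np.full((101, 101), 2000.0), src=(50, 50), h=10.0)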

  13. Determining localized garment insulation values from manikin studies: computational method and results.

    PubMed

    Nelson, D A; Curlee, J S; Curran, A R; Ziriax, J M; Mason, P A

    2005-12-01

    The localized thermal insulation value expresses a garment's thermal resistance over the region which is covered by the garment, rather than over the entire surface of a subject or manikin. The determination of localized garment insulation values is critical to the development of high-resolution models of sensible heat exchange. A method is presented for determining and validating localized garment insulation values, based on whole-body insulation values (clo units) and using computer-aided design and thermal analysis software. Localized insulation values are presented for a catalog consisting of 106 garments and verified using computer-generated models. The values presented are suitable for use on volume element-based or surface element-based models of heat transfer involving clothed subjects.

  14. New quantitative approaches for classifying and predicting local-scale habitats in estuaries

    NASA Astrophysics Data System (ADS)

    Valesini, Fiona J.; Hourston, Mathew; Wildsmith, Michelle D.; Coen, Natasha J.; Potter, Ian C.

    2010-03-01

    This study has developed quantitative approaches for firstly classifying local-scale nearshore habitats in an estuary and then predicting the habitat of any nearshore site in that system. Both approaches employ measurements for a suite of enduring environmental criteria that are biologically relevant and can be easily derived from readily available maps. While the approaches were developed for south-western Australian estuaries, with a focus here on the Swan and Peel-Harvey, they can easily be tailored to any system. Classification of the habitats in each of the above estuaries was achieved by subjecting a Manhattan distance matrix, constructed from measurements of a suite of enduring criteria recorded at numerous environmentally diverse sites, to hierarchical agglomerative clustering (CLUSTER) and a Similarity Profiles test (SIMPROF). Groups of sites within the resultant dendrogram that were shown by SIMPROF to contain no significant internal differences, but to differ significantly from all other groups in their enduring characteristics, were considered to represent habitat types. The enduring features of the 18 and 17 habitats identified among the 101 and 102 sites in the Swan and Peel-Harvey estuaries, respectively, are presented. The average measurements of the enduring characteristics at each habitat were then used in a novel application of the Linkage Tree (LINKTREE) and SIMPROF routines to produce a "decision tree" for predicting, on the basis of measurements for particular enduring variables, the habitat to which any further site in an estuary is best assigned. In both estuaries, the pattern of relative differences among habitats, as defined by their enduring characteristics, was significantly correlated with that defined by their non-enduring water physico-chemical characteristics recorded seasonally in the field. However, those correlations were substantially higher for the Swan, particularly when salinity was the only water physico-chemical variable
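
    The CLUSTER and SIMPROF routines mentioned above come from the PRIMER package; a rough open-source analogue of the classification step can be sketched with SciPy, as below. The site matrix is invented, and a fixed dendrogram cut height stands in for the SIMPROF significance test, which has no direct SciPy equivalent.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# rows = sites, columns = enduring environmental criteria (toy values)
sites = np.array([
    [0.2, 1.0, 5.0],
    [0.3, 1.1, 4.8],
    [2.5, 0.2, 1.0],
    [2.6, 0.3, 1.2],
])

dist = pdist(sites, metric="cityblock")                  # Manhattan distance matrix
tree = linkage(dist, method="average")                   # hierarchical agglomerative clustering
habitats = fcluster(tree, t=2.0, criterion="distance")   # cut the dendrogram at a chosen height
print(habitats)  # e.g. [1 1 2 2] -> two candidate habitat types
```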

  15. A local space–time kriging approach applied to a national outpatient malaria data set

    PubMed Central

    Gething, P.W.; Atkinson, P.M.; Noor, A.M.; Gikandi, P.W.; Hay, S.I.; Nixon, M.S.

    2007-01-01

    Increases in the availability of reliable health data are widely recognised as essential for efforts to strengthen health-care systems in resource-poor settings worldwide. Effective health-system planning requires comprehensive and up-to-date information on a range of health metrics and this requirement is generally addressed by a Health Management Information System (HMIS) that coordinates the routine collection of data at individual health facilities and their compilation into national databases. In many resource-poor settings, these systems are inadequate and national databases often contain only a small proportion of the expected records. In this paper, we take an important health metric in Kenya (the proportion of outpatient treatments for malaria (MP)) from the national HMIS database and predict the values of MP at facilities where monthly records are missing. The available MP data were densely distributed across a spatiotemporal domain and displayed second-order heterogeneity. We used three different kriging methodologies to make cross-validation predictions of MP in order to test the effect on prediction accuracy of (a) the extension of a spatial-only to a space–time prediction approach, and (b) the replacement of a globally stationary with a locally varying random function model. Space–time kriging was found to produce predictions with 98.4% less mean bias and 14.8% smaller mean imprecision than conventional spatial-only kriging. A modification of space–time kriging that allowed space–time variograms to be recalculated for every prediction location within a spatially local neighbourhood resulted in a larger decrease in mean imprecision over ordinary kriging (18.3%) although the mean bias was reduced less (87.5%). PMID:19424510
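
    For readers unfamiliar with kriging, the sketch below shows spatial-only ordinary kriging in Python with an assumed exponential variogram; the space-time and locally varying extensions evaluated in the paper build on the same linear system. Coordinates, values, and variogram parameters are purely illustrative.

```python
import numpy as np

def variogram(h, nugget=0.01, sill=1.0, corr_range=50.0):
    """Assumed exponential variogram model (parameters are illustrative)."""
    return np.where(h > 0, nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_range)), 0.0)

def ordinary_kriging(coords, values, target):
    """Predict the value at `target` from observations at `coords`."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # kriging system with a Lagrange multiplier enforcing unit-sum weights
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)

facilities = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
mp = np.array([0.30, 0.45, 0.25, 0.50])   # toy malaria proportions at four facilities
print(ordinary_kriging(facilities, mp, np.array([5.0, 5.0])))
```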

  16. Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2016-04-01

    This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise assumption. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed to account for spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, the mean of each cluster is estimated as a weighted mean, using the coherence magnitudes as weights, to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method is on par with or better than the non-local interferometric SAR (NL-InSAR) method.
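
    The locally adaptive LMMSE estimator at the heart of PLOW can be illustrated in its simplest pixel-wise form (no patch clustering, additive Gaussian noise assumed). The sketch below is a generic local Wiener filter, not the modified InSAR method itself; window size and noise level are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_wiener(noisy, noise_var, win=7):
    """Locally adaptive LMMSE (Wiener) estimate for additive noise:
    x_hat = m + max(s^2 - sigma_n^2, 0) / s^2 * (y - m),
    where m and s^2 are the local mean and variance in a win x win window."""
    m = uniform_filter(noisy, win)
    m2 = uniform_filter(noisy * noisy, win)
    local_var = np.maximum(m2 - m * m, 1e-12)
    gain = np.maximum(local_var - noise_var, 0.0) / local_var
    return m + gain * (noisy - m)

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))
noisy = clean + rng.normal(scale=0.1, size=clean.shape)
denoised = local_wiener(noisy, noise_var=0.1 ** 2)
print(float(np.mean((denoised - clean) ** 2)), float(np.mean((noisy - clean) ** 2)))
```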

  17. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address the above deficiencies, different global and local histogram methods have been proposed in recent research. Different ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram are found in recent papers. The color space and the distance metric used are vital in obtaining the color histogram. In this paper the performance of CBIR based on different global and local color histograms in three different color spaces, namely RGB, HSV and L*a*b*, and with three distance measures, Euclidean, Quadratic and Histogram intersection, is surveyed, to choose an appropriate method for future research.
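
    To make the global-versus-local distinction concrete, the sketch below computes a block-wise ("local") RGB histogram descriptor and compares two images with the histogram intersection measure mentioned above. Grid size, bin count, and the random test images are arbitrary choices for illustration.

```python
import numpy as np

def local_histograms(img, grid=2, bins=8):
    """Divide an RGB image into grid x grid blocks and concatenate the
    per-block, per-channel histograms (L1-normalised)."""
    h, w, _ = img.shape
    feats = []
    for bi in range(grid):
        for bj in range(grid):
            block = img[bi * h // grid:(bi + 1) * h // grid,
                        bj * w // grid:(bj + 1) * w // grid]
            for c in range(3):
                hist, _ = np.histogram(block[..., c], bins=bins, range=(0, 256))
                feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; larger means more similar."""
    return float(np.minimum(h1, h2).sum() / min(h1.sum(), h2.sum()))

rng = np.random.default_rng(1)
query = rng.integers(0, 256, (64, 64, 3))
candidate = rng.integers(0, 256, (64, 64, 3))
print(histogram_intersection(local_histograms(query), local_histograms(candidate)))
```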

  18. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address the above deficiencies, different global and local histogram methods have been proposed in recent research. Different ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram are found in recent papers. The color space and the distance metric used are vital in obtaining the color histogram. In this paper the performance of CBIR based on different global and local color histograms in three different color spaces, namely RGB, HSV and L*a*b*, and with three distance measures, Euclidean, Quadratic and Histogram intersection, is surveyed, to choose an appropriate method for future research.

  19. FALCON: A method for flexible adaptation of local coordinates of nuclei

    NASA Astrophysics Data System (ADS)

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H.; Christiansen, Ove

    2016-02-01

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin.

  20. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris based on a machine learning method that evaluates the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. In addition, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of different tracks. Finally, all tracks' information is fused according to the weights of the different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results show that a partial iris image cannot completely replace the entire iris image in an iris recognition system in several respects. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system.

  1. A Public Policy Approach to Local Models of HIV/AIDS Control in Brazil

    PubMed Central

    de Assis, Andreia; Costa-Couto, Maria-Helena; Thoenig, Jean-Claude; Fleury, Sonia; de Camargo, Kenneth; Larouzé, Bernard

    2009-01-01

    Objectives. We investigated involvement and cooperation patterns of local Brazilian AIDS program actors and the consequences of these patterns for program implementation and sustainability. Methods. We performed a public policy analysis (documentary analysis, direct observation, semistructured interviews of health service and nongovernmental organization [NGO] actors) in 5 towns in 2 states, São Paulo and Pará. Results. Patterns suggested 3 models. In model 1, local government, NGOs, and primary health care services were involved in AIDS programs with satisfactory response to new epidemiological trends but a risk that HIV/AIDS would become low priority. In model 2, mainly because of NGO activism, HIV/AIDS remained an exceptional issue, with limited responses to new epidemiological trends and program sustainability undermined by political instability. In model 3, involvement of public agencies and NGOs was limited, with inadequate response to epidemiological trends and poor mobilization threatening program sustainability. Conclusions. Within a common national AIDS policy framework, the degree of involvement and cooperation between public and NGO actors deeply impacts population coverage and program sustainability. Specific processes are required to maintain actor mobilization without isolating AIDS programs. PMID:19372523

  2. An observation-based approach to identify local natural dust events from routine aerosol ground monitoring

    NASA Astrophysics Data System (ADS)

    Tong, D. Q.; Dan, M.; Wang, T.; Lee, P.

    2012-02-01

    Dust is a major component of atmospheric aerosols in many parts of the world. Although there exist many routine aerosol monitoring networks, it is often difficult to obtain dust records from these networks, because these monitors are either deployed far away from dust active regions (most likely collocated with dense population) or contaminated by anthropogenic sources and other natural sources, such as wildfires and vegetation detritus. Here we propose a new approach to identify local dust events relying solely on aerosol mass and composition from general-purpose aerosol measurements. Through analyzing the chemical and physical characteristics of aerosol observations during satellite-detected dust episodes, we select five indicators to be used to identify local dust records: (1) high PM10 concentrations; (2) low PM2.5/PM10 ratio; (3) higher concentrations and percentage of crustal elements; (4) lower percentage of anthropogenic pollutants; and (5) low enrichment factors of anthropogenic elements. After establishing these identification criteria, we conduct hierarchical cluster analysis for all validated aerosol measurement data over 68 IMPROVE sites in the Western United States. A total of 182 local dust events were identified over 30 of the 68 locations from 2000 to 2007. These locations are either close to the four US Deserts, namely the Great Basin Desert, the Mojave Desert, the Sonoran Desert, and the Chihuahuan Desert, or in the high wind power region (Colorado). During the eight-year study period, the total number of dust events displays an interesting four-year activity cycle (one in 2000-2003 and the other in 2004-2007). The years of 2003, 2002 and 2007 are the three most active dust periods, with 46, 31 and 24 recorded dust events, respectively, while the years of 2000, 2004 and 2005 are the calmest periods, all with single digit dust records. Among these deserts, the Chihuahua Desert (59 cases) and the Sonoran Desert (62 cases) are by far the most active
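
    The five indicators translate naturally into a screening rule. The sketch below applies them to a single daily aerosol record; the threshold values are placeholders for illustration only and are not the criteria calibrated in the study.

```python
def is_local_dust_event(pm10, pm25, crustal_frac, anthro_frac,
                        pm10_thresh=50.0, ratio_thresh=0.35,
                        crustal_thresh=0.2, anthro_thresh=0.1):
    """Flag a daily aerosol record as a candidate local dust event.
    All threshold values are illustrative placeholders, not the published criteria."""
    return (pm10 > pm10_thresh                      # (1) elevated PM10
            and pm25 / pm10 < ratio_thresh          # (2) coarse-dominated aerosol
            and crustal_frac > crustal_thresh       # (3) crustal elements enriched
            and anthro_frac < anthro_thresh)        # (4)-(5) weak anthropogenic signal

print(is_local_dust_event(pm10=120.0, pm25=30.0, crustal_frac=0.4, anthro_frac=0.05))
```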

  3. Projection method for improving signal to noise ratio of localized surface plasmon resonance biosensors

    PubMed Central

    Abumazwed, Ahmed; Kubo, Wakana; Shen, Chen; Tanaka, Takuo; Kirk, Andrew G.

    2016-01-01

    This paper presents a simple and accurate method (the projection method) to improve the signal to noise ratio of localized surface plasmon resonance (LSPR). The nanostructures presented in the paper can be readily fabricated by nanoimprint lithography. The finite difference time domain method is used to simulate the structures and generate a reference matrix for the method. The results are validated against experimental data and the proposed method is compared against several other recently published signal processing techniques. We also apply the projection method to biotin-streptavidin binding experimental data and determine the limit of detection (LoD). The method improves the signal to noise ratio (SNR) by one order of magnitude, and hence decreases the limit of detection when compared to the direct measurement of the transmission-dip. The projection method outperforms the established methods in terms of accuracy and achieves the best combination of signal to noise ratio and limit of detection. PMID:28101430

  4. A hybrid non-reflective boundary technique for efficient simulation of guided waves using local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Cesnik, Carlos E. S.

    2016-04-01

    The local interaction simulation approach (LISA) is a highly parallelizable numerical scheme for guided wave simulation in structural health monitoring (SHM). This paper addresses the issue of simulating wave propagation in an unbounded domain through the implementation of non-reflective boundaries (NRB) in LISA. In this study, two different categories of NRB, i.e., the non-reflective boundary condition (NRBC) and the absorbing boundary layer (ABL), have been investigated in the parallelized LISA scheme. For the implementation of the NRBC, a set of general LISA equations considering the effect of boundary stress is obtained first. As a simple example, the Lysmer and Kuhlemeyer (L-K) model is applied here to demonstrate the ease of NRBC implementation in LISA. As a representative of ABL implementation, the LISA scheme incorporating absorbing layers with increasing damping (ALID) is also proposed, based on elasto-dynamic equations that include a damping term. Finally, an effective hybrid model combining the L-K and ALID methods in LISA is developed, and guidelines for implementing the hybrid model are presented. Case studies on a three-dimensional plate model compare the performance of the hybrid method with that of the L-K and ALID methods acting independently. The simulation results demonstrate that the best absorbing efficiency is achieved with the hybrid method.

  5. Palaeoclimate estimates for the Middle Miocene Schrotzburg flora (S Germany): a multi-method approach

    NASA Astrophysics Data System (ADS)

    Uhl, Dieter; Bruch, Angela A.; Traiser, Christopher; Klotz, Stefan

    2006-11-01

    We present a detailed palaeoclimate analysis of the Middle Miocene (uppermost Badenian to lowermost Sarmatian) Schrotzburg locality in S Germany, based on the fossil macro- and micro-flora, using four different methods for the estimation of palaeoclimate parameters: the coexistence approach (CA), leaf margin analysis (LMA), the Climate-Leaf Analysis Multivariate Program (CLAMP), as well as a recently developed multivariate leaf physiognomic approach based on a European calibration dataset (ELPA). Considering the results of all methods used, the following palaeoclimate estimates seem to be most likely: mean annual temperature (MAT) ~15-16°C, coldest month mean temperature (CMMT) ~7°C, warmest month mean temperature between 25 and 26°C, and mean annual precipitation ~1,300 mm, although CMMT values may have been colder, as indicated by the disappearance of the crocodile Diplocynodon and the temperature thresholds derived from modern alligators. For most palaeoclimatic parameters, estimates derived by CLAMP differ significantly from those derived by most other methods. With respect to the consistency of the results obtained by CA, LMA and ELPA, it is suggested that for the Schrotzburg locality CLAMP is probably less reliable than most other methods. A possible explanation may be attributed to the correlation between leaf physiognomy and climate as represented by the CLAMP calibration data set, which is largely based on extant floras from N America and E Asia and which may not be suitable for application to the European Neogene. All physiognomic methods used here were affected by taphonomic biases. In particular, the number of taxa had a great influence on the reliability of the palaeoclimate estimates. Both multivariate leaf physiognomic approaches are less influenced by such biases than the univariate LMA. In combination with previously published results from the European and Asian Neogene, our data suggest that during the Neogene in Eurasia CLAMP may produce temperature

  6. Local moment approach as a quantum impurity solver for the Hubbard model

    NASA Astrophysics Data System (ADS)

    Barman, Himadri

    2016-07-01

    The local moment approach (LMA) has presented itself as a powerful semianalytical quantum impurity solver (QIS) in the context of the dynamical mean-field theory (DMFT) for the periodic Anderson model, and it correctly captures the low-energy Kondo scale for the single impurity model, having excellent agreement with the Bethe ansatz and numerical renormalization group (NRG) results. However, the most common correlated lattice model, the Hubbard model, has not been explored well within the LMA+DMFT framework beyond the insulating phase. Here, within this framework, we complete the filling-interaction phase diagram of the single band Hubbard model at zero temperature. Our formalism is generic to any particle filling and can be extended to finite temperature. We contrast our results with another QIS, namely the iterated perturbation theory (IPT), and show that the second spectral moment sum rule is satisfied increasingly well as the Hubbard interaction strength grows stronger in the LMA, whereas it severely breaks down after the Mott transition in IPT. For the metallic case, the Fermi liquid (FL) scaling agreement with the NRG spectral density supports the fact that the FL scale emerges from the inherent Kondo physics of the impurity model. We also show that, in the metallic phase, the FL scaling of the spectral density leads to universality which extends to an infinite frequency range at infinite correlation strength (strong coupling). At large interaction strength, the off-half-filling spectral density forms a pseudogap near the Fermi level, and a filling-controlled Mott transition occurs as one approaches half-filling. As a response property, we finally study the zero temperature optical conductivity and find universal features such as the absorption peak position governed by the FL scale and a doping-independent crossing point, often dubbed the isosbestic point in experiments.

  7. Image dipoles approach to the local field enhancement in nanostructured Ag-Au hybrid devices.

    PubMed

    David, Christin; Richter, Marten; Knorr, Andreas; Weidinger, Inez M; Hildebrandt, Peter

    2010-01-14

    We have investigated the plasmonic enhancement in the radiation field at various nanostructured multilayer devices that may be applied in surface enhanced Raman spectroscopy. We apply an image dipole method to describe the effect of surface morphology on the field enhancement in a quasistatic limit. In particular, we compare the performance of a nanostructured silver surface and a layered silver-gold hybrid device. It is found that localized surface plasmon states provide a high field enhancement in silver-gold hybrid devices, where symmetry breaking due to surface defects is a supporting factor. These results are compared to those obtained for multishell nanoparticles of spherical symmetry. Calculated enhancement factors are discussed on the background of recent experimental data.

  8. An approach to the damping of local modes of oscillations resulting from large hydraulic transients

    SciTech Connect

    Dobrijevic, D.M.; Jankovic, M.V.

    1999-09-01

    A new method for damping local modes of oscillation under large disturbances is presented in this paper. A digital governor controller is used. The controller operates in real time to improve the generating unit transients through the guide vane position and the runner blade position. The developed digital governor controller, whose control signals are adjusted using on-line measurements, offers better damping of the generator oscillations under large disturbances than the conventional controller. Digital simulations of a hydroelectric power plant equipped with a low-head Kaplan turbine are performed, and comparisons between the digital governor control and the conventional governor control are presented. Simulation results show that the new controller offers better performance than the conventional controller when the system is subjected to large disturbances.

  9. Comparison of explicitly correlated local coupled-cluster methods with various choices of virtual orbitals.

    PubMed

    Krause, Christine; Werner, Hans-Joachim

    2012-06-07

    Explicitly correlated local coupled-cluster (LCCSD-F12) methods with pair natural orbitals (PNOs), orbital specific virtual orbitals (OSVs), and projected atomic orbitals (PAOs) are compared. In all cases pair-specific virtual subspaces (domains) are used, and the convergence of the correlation energy as a function of the domain sizes is studied. Furthermore, the performance of the methods for reaction energies of 52 reactions involving 58 small and medium sized molecules is investigated. It is demonstrated that for all choices of virtual orbitals much smaller domains are needed in the explicitly correlated methods than without the explicitly correlated terms, since the latter correct a large part of the domain error, as found previously. For PNO-LCCSD-F12 with VTZ-F12 basis sets on the average only 20 PNOs per pair are needed to obtain reaction energies with a root mean square deviation of less than 1 kJ mol(-1) from complete basis set estimates. With OSVs or PAOs at least 4 times larger domains are needed for the same accuracy. A new hybrid method that combines the advantages of the OSV and PNO methods is proposed and tested. While in the current work the different local methods are only simulated using a conventional CCSD program, the implications for low-order scaling local implementations of the various methods are discussed.

  10. Spatial Localization of Sources in the Rat Subthalamic Motor Region Using an Inverse Current Source Density Method

    PubMed Central

    van Dijk, Kees J.; Janssen, Marcus L. F.; Zwartjes, Daphne G. M.; Temel, Yasin; Visser-Vandewalle, Veerle; Veltink, Peter H.; Benazzouz, Abdelhamid; Heida, Tjitske

    2016-01-01

    Objective: In this study we introduce the use of the current source density (CSD) method as a way to visualize the spatial organization of evoked responses in the rat subthalamic nucleus (STN) at fixed time stamps resulting from motor cortex stimulation. This method offers opportunities to visualize neuronal input and study the relation between the synaptic input and the neural output of neural populations. Approach: Motor cortex evoked local field potentials and unit activity were measured in the subthalamic region, with a 3D measurement grid consisting of 320 measurement points and high spatial resolution. This allowed us to visualize the evoked synaptic input by estimating the current source density (CSD) from the measured local field potentials, using the inverse CSD method. At the same time, the neuronal output of the cells within the grid is assessed by calculating post stimulus time histograms. Main results: The CSD method resulted in clear and distinguishable sources and sinks of the neuronal input activity in the STN after motor cortex stimulation. We showed that the center of the synaptic input of the STN from the motor cortex is located dorsal to the input from globus pallidus. Significance: For the first time we have performed CSD analysis on motor cortex stimulation evoked LFP responses in the rat STN as a proof of principle. Our results suggest that the CSD method can be used to gain new insights into the spatial extent of synaptic pathways in brain structures. PMID:27857684

  11. A filtering approach based on Gaussian-powerlaw convolutions for local PET verification of proton radiotherapy.

    PubMed

    Parodi, Katia; Bortfeld, Thomas

    2006-04-21

    Because proton beams activate positron emitters in patients, positron emission tomography (PET) has the potential to play a unique role in the in vivo verification of proton radiotherapy. Unfortunately, the PET image is not directly proportional to the delivered radiation dose distribution. Current treatment verification strategies using PET therefore compare the actual PET image with full-blown Monte Carlo simulations of the PET signal. In this paper, we describe a simpler and more direct way to reconstruct the expected PET signal from the local radiation dose distribution near the distal fall-off region, which is calculated by the treatment planning programme. Under reasonable assumptions, the PET image can be described as a convolution of the dose distribution with a filter function. We develop a formalism to derive the filter function analytically. The main concept is the introduction of 'Q' functions defined as the convolution of a Gaussian with a powerlaw function. Special Q functions are the Gaussian itself and the error function. The convolution of two Q functions is another Q function. By fitting elementary dose distributions and their corresponding PET signals with Q functions, we derive the Q function approximation of the filter. The new filtering method has been validated through comparisons with Monte Carlo calculations and, in one case, with measured data. While the basic concept is developed under idealized conditions assuming that the absorbing medium is homogeneous near the distal fall-off region, a generalization to inhomogeneous situations is also described. As a result, the method can determine the distal fall-off region of the PET signal, and consequently the range of the proton beam, with millimetre accuracy. Quantification of the produced activity is possible. In conclusion, the PET activity resulting from a proton beam treatment can be determined by locally filtering the dose distribution as obtained from the treatment planning system. The
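
    The central idea, that the expected PET signal is the planned dose distribution convolved with a filter, can be illustrated with a toy 1D example. The depth-dose curve and the Gaussian kernel below are schematic stand-ins; the paper derives the actual filter analytically from Q functions (convolutions of a Gaussian with a powerlaw).

```python
import numpy as np

# schematic depth axis (mm) and proton depth-dose curve with a distal fall-off
z = np.linspace(0.0, 150.0, 601)
dose = np.where(z < 100.0, 0.4 + 0.006 * z, 0.0)
falloff = (z >= 100.0) & (z < 110.0)
dose[falloff] = np.linspace(1.0, 0.0, falloff.sum())

# assumed filter: a plain Gaussian, one special case of the paper's Q functions
dz = z[1] - z[0]
kz = np.arange(-15.0, 15.0 + dz, dz)
kernel = np.exp(-0.5 * (kz / 5.0) ** 2)
kernel /= kernel.sum()

# expected PET signal = local filtering of the planned dose distribution
pet_expected = np.convolve(dose, kernel, mode="same")
half = 0.5 * pet_expected.max()
print(float(z[np.where(pet_expected >= half)[0][-1]]))  # approx. distal fall-off position
```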

  12. Locally Embedding Autoencoders: A Semi-Supervised Manifold Learning Approach of Document Representation

    PubMed Central

    Wei, Chao; Luo, Senlin; Ma, Xincheng; Ren, Hao; Zhang, Ji; Pan, Limin

    2016-01-01

    Topic models and neural networks can discover meaningful low-dimensional latent representations of text corpora; as such, they have become a key technology of document representation. However, such models presume all documents are non-discriminatory, resulting in latent representation dependent upon all other documents and an inability to provide discriminative document representation. To address this problem, we propose a semi-supervised manifold-inspired autoencoder to extract meaningful latent representations of documents, taking the local perspective that the latent representation of nearby documents should be correlative. We first determine the discriminative neighbors set with Euclidean distance in observation spaces. Then, the autoencoder is trained by joint minimization of the Bernoulli cross-entropy error between input and output and the sum of the square error between neighbors of input and output. The results of two widely used corpora show that our method yields at least a 15% improvement in document clustering and a nearly 7% improvement in classification tasks compared to comparative methods. The evidence demonstrates that our method can readily capture more discriminative latent representation of new documents. Moreover, some meaningful combinations of words can be efficiently discovered by activating features that promote the comprehensibility of latent representation. PMID:26784692

  13. Locally Embedding Autoencoders: A Semi-Supervised Manifold Learning Approach of Document Representation.

    PubMed

    Wei, Chao; Luo, Senlin; Ma, Xincheng; Ren, Hao; Zhang, Ji; Pan, Limin

    2016-01-01

    Topic models and neural networks can discover meaningful low-dimensional latent representations of text corpora; as such, they have become a key technology of document representation. However, such models presume all documents are non-discriminatory, resulting in latent representation dependent upon all other documents and an inability to provide discriminative document representation. To address this problem, we propose a semi-supervised manifold-inspired autoencoder to extract meaningful latent representations of documents, taking the local perspective that the latent representation of nearby documents should be correlative. We first determine the discriminative neighbors set with Euclidean distance in observation spaces. Then, the autoencoder is trained by joint minimization of the Bernoulli cross-entropy error between input and output and the sum of the square error between neighbors of input and output. The results of two widely used corpora show that our method yields at least a 15% improvement in document clustering and a nearly 7% improvement in classification tasks compared to comparative methods. The evidence demonstrates that our method can readily capture more discriminative latent representation of new documents. Moreover, some meaningful combinations of words can be efficiently discovered by activating features that promote the comprehensibility of latent representation.

  14. A Non-Local Fuzzy Segmentation Method: Application to Brain MRI

    NASA Astrophysics Data System (ADS)

    Caldairou, Benoît; Rousseau, François; Passat, Nicolas; Habas, Piotr; Studholme, Colin; Heinrich, Christian

    The Fuzzy C-Means algorithm is a widely used and flexible approach for brain tissue segmentation from 3D MRI. Despite its recent enrichment by addition of a spatial dependency to its formulation, it remains quite sensitive to noise. In order to improve its reliability in noisy contexts, we propose a way to select the most suitable example regions for regularisation. This approach inspired by the Non-Local Mean strategy used in image restoration is based on the computation of weights modelling the grey-level similarity between the neighbourhoods being compared. Experiments were performed on MRI data and results illustrate the usefulness of the approach in the context of brain tissue classification.
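
    The regularisation weights borrowed from the Non-Local Means strategy can be sketched as follows (2D, plain NumPy): for a given voxel, each neighbour in a search window is weighted by the similarity of the grey-level patches around the two positions. This shows only the weight computation, not its insertion into the Fuzzy C-Means objective; patch size, search window, and the smoothing parameter h are illustrative.

```python
import numpy as np

def nonlocal_weights(image, i, j, patch=3, search=7, h=0.1):
    """NL-means-style weights: similarity between the patch around (i, j)
    and patches around every pixel in a search window."""
    p = patch // 2
    s = search // 2
    padded = np.pad(image, p + s, mode="reflect")
    ci, cj = i + p + s, j + p + s
    ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
    weights = {}
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            cand = padded[ci + di - p:ci + di + p + 1, cj + dj - p:cj + dj + p + 1]
            d2 = np.mean((ref - cand) ** 2)
            weights[(i + di, j + dj)] = np.exp(-d2 / (h * h))
    total = sum(weights.values())
    return {pos: w / total for pos, w in weights.items()}

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
w = nonlocal_weights(img, 16, 16)
print(max(w, key=w.get))  # the centre pixel itself receives the largest weight
```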

  15. Advanced Methods for Passive Acoustic Detection, Classification, and Localization of Marine Mammals

    DTIC Science & Technology

    2015-09-30

    marine mammal vocalizations and ultimately, in some cases, provide data for estimating the population density of the species present. In recent years...pose significant challenges. In this project, we are developing improved methods for detection, classification, and localization of many types of marine mammal sounds.

  16. Local Mesh Refinement in the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wu, Yuhui; Wang, Xiao-Yen; Yang, Vigor

    2000-01-01

    A local mesh refinement procedure for the CE/SE method which does not use an iterative procedure in the treatments of grid-to-grid communications is described. It is shown that a refinement ratio higher than ten can be applied successfully across a single coarse grid/fine grid interface.

  17. A variational approach to path planning in three dimensions using level set methods

    NASA Astrophysics Data System (ADS)

    Cecil, Thomas; Marthaler, Daniel E.

    2006-01-01

    In this paper we extend the two-dimensional methods set forth in [T. Cecil, D. Marthaler, A variational approach to search and path planning using level set methods, UCLA CAM Report, 04-61, 2004], proposing a variational approach to a path planning problem in three dimensions using a level set framework. After defining an energy integral over the path, we use gradient flow on the defined energy and evolve the entire path until a locally optimal steady state is reached. We follow the framework for motion of curves in three dimensions set forth in [P. Burchard, L.-T. Cheng, B. Merriman, S. Osher, Motion of curves in three spatial dimensions using a level set approach, J. Comput. Phys. 170(2) (2001) 720-741], modified appropriately to take into account that we allow for paths with positive, varying widths. Applications of this method extend to robotic motion and visibility problems, for example. Numerical methods and algorithms are given, and examples are presented.

  18. Baryon states with hidden charm in the extended local hidden gauge approach

    NASA Astrophysics Data System (ADS)

    Uchino, T.; Liang, Wei-Hong; Oset, E.

    2016-03-01

    The s-wave interaction of D̄Λ_c, D̄Σ_c, D̄*Λ_c, D̄*Σ_c and D̄Σ_c*, D̄*Σ_c* is studied within a unitary coupled channels scheme with the extended local hidden gauge approach. In addition to the Weinberg-Tomozawa term, several additional diagrams via pion exchange are also taken into account as box potentials. Furthermore, in order to implement the full coupled channels calculation, some of the box potentials which mix the vector-baryon and pseudoscalar-baryon sectors are extended to construct the effective transition potentials. As a result, we have observed six possible states in several angular momenta. Four of them correspond to two pairs of admixture states: two of D̄Σ_c - D̄*Σ_c with J = 1/2, and two of D̄Σ_c* - D̄*Σ_c* with J = 3/2. Moreover, we find a D̄*Σ_c resonance which couples to the D̄Λ_c channel and one spin-degenerate bound state of D̄*Σ_c* with J = 1/2, 5/2.

  19. Acoustic flight tests of rotorcraft noise-abatement approaches using local differential GPS guidance

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Hindson, William S.; Mueller, Arnold W.

    1995-01-01

    This paper presents the test design, instrumentation set-up, data acquisition, and the results of an acoustic flight experiment to study how noise due to blade-vortex interaction (BVI) may be alleviated. The flight experiment was conducted using the NASA/Army Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) research helicopter. A Local Differential Global Positioning System (LDGPS) was used for precision navigation and cockpit display guidance. A laser-based rotor state measurement system on board the aircraft was used to measure the main rotor tip-path-plane angle-of-attack. Tests were performed at Crows Landing Airfield in northern California with an array of microphones similar to that used in the standard ICAO/FAA noise certification test. The methodology used in the design of a RASCAL-specific, multi-segment, decelerating approach profile for BVI noise abatement is described, and the flight data pertaining to the flight technical errors and the acoustic data for assessing the noise reduction effectiveness are reported.

  20. Decays ω(782), φ(1020)→5π in the hidden local symmetry approach

    NASA Astrophysics Data System (ADS)

    Achasov, N. N.; Kozhevnikov, A. A.

    2003-10-01

    The decays ω→2π⁺2π⁻π⁰ and ω→π⁺π⁻3π⁰ are reconsidered in the hidden local symmetry approach (HLS) with added anomalous terms. The decay amplitudes are analyzed in detail, paying special attention to the Adler condition of the vanishing of the whole amplitude at a vanishing of momentum of any final pion. Combining the Okubo-Zweig-Iizuka rule applied to the five-pion final state with the Adler condition, we also calculate the φ→2π⁺2π⁻π⁰ and φ→π⁺π⁻3π⁰ decay amplitudes. The partial widths of the above decays are evaluated, and the excitation curves in e⁺e⁻ annihilation are obtained, assuming specific reasonable relations among the parameters characterizing the anomalous terms of the HLS Lagrangian. The evaluated branching ratios B(φ→π⁺π⁻3π⁰) ≈ 2×10⁻⁷ and B(φ→2π⁺2π⁻π⁰) ≈ 7×10⁻⁷ are such that, with the luminosity L = 500 pb⁻¹ attained at the DAΦNE φ factory, one may already possess about 1685 events of the φ→5π decays.

  1. Baryon States with Hidden Charm in the Extended Local Hidden Gauge Approach

    NASA Astrophysics Data System (ADS)

    Uchino, Toshitaka; Liang, Wei-Hong; Oset, Eulogio

    The s-wave interaction of D̄Λ_c, D̄Σ_c, D̄*Λ_c, D̄*Σ_c and D̄Σ_c*, D̄*Σ_c* is studied within a unitary coupled channels scheme with the extended local hidden gauge approach. In addition to the Weinberg-Tomozawa term, several additional diagrams via pion exchange are also taken into account as box potentials. Furthermore, in order to implement the full coupled channels calculation, some of the box potentials which mix the vector-baryon and pseudoscalar-baryon sectors are extended to construct the effective transition potentials. As a result, we have observed six possible states in several angular momenta. Four of them correspond to two pairs of admixture states: two of D̄Σ_c - D̄*Σ_c with J = 1/2, and two of D̄Σ_c* - D̄*Σ_c* with J = 3/2. Moreover, we find a D̄*Σ_c resonance which couples to the D̄Λ_c channel and one spin-degenerate bound state of D̄*Σ_c* with J = 1/2, 5/2.

  2. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered-grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the limit of what is soluble by traditional ALE methods, by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered-grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  3. Individual tree crown delineation using localized contour tree method and airborne LiDAR data in coniferous forests

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Yu, Bailang; Wu, Qiusheng; Huang, Yan; Chen, Zuoqi; Wu, Jianping

    2016-10-01

    Individual tree crown delineation is of great importance for forest inventory and management. The increasing availability of high-resolution airborne light detection and ranging (LiDAR) data makes it possible to delineate the crown structure of individual trees and deduce their geometric properties with high accuracy. In this study, we developed an automated segmentation method that is able to fully utilize high-resolution LiDAR data for detecting, extracting, and characterizing individual tree crowns with a multitude of geometric and topological properties. The proposed approach captures topological structure of forest and quantifies topological relationships of tree crowns by using a graph theory-based localized contour tree method, and finally segments individual tree crowns by analogy of recognizing hills from a topographic map. This approach consists of five key technical components: (1) derivation of canopy height model from airborne LiDAR data; (2) generation of contours based on the canopy height model; (3) extraction of hierarchical structures of tree crowns using the localized contour tree method; (4) delineation of individual tree crowns by segmenting hierarchical crown structure; and (5) calculation of geometric and topological properties of individual trees. We applied our new method to the Medicine Bow National Forest in the southwest of Laramie, Wyoming and the HJ Andrews Experimental Forest in the central portion of the Cascade Range of Oregon, U.S. The results reveal that the overall accuracy of individual tree crown delineation for the two study areas achieved 94.21% and 75.07%, respectively. Our method holds great potential for segmenting individual tree crowns under various forest conditions. Furthermore, the geometric and topological attributes derived from our method provide comprehensive and essential information for forest management.
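
    Steps (2)-(3) of the pipeline rest on the fact that crowns appear as nested closed contours of the canopy height model. The toy sketch below approximates contours by thresholding a synthetic CHM at successive heights and counting connected regions; the nesting of these regions across levels is the information a localized contour tree encodes. It illustrates the idea only and is not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

# toy canopy height model (CHM): two overlapping Gaussian crowns
y, x = np.mgrid[0:60, 0:60]
chm = 20.0 * np.exp(-((x - 20) ** 2 + (y - 30) ** 2) / 60.0) \
    + 15.0 * np.exp(-((x - 40) ** 2 + (y - 30) ** 2) / 50.0)

# "contours" as successive height thresholds; the nesting of the labelled
# regions across levels is what a localized contour tree captures
for level in (2.0, 6.0, 10.0, 14.0):
    labels, n = ndimage.label(chm > level)
    print(f"height {level:>4} m: {n} closed contour region(s)")
```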

  4. Method of Relaxation Moments for Studying Nonlinear Locally Nonequilibrium Processes of Transfer of Polymeric Systems

    NASA Astrophysics Data System (ADS)

    Popov, V. I.

    2015-01-01

    A method for simulating the processes of transfer of thermodynamic systems with polymeric microstructure is considered. The method is based on the classical locally equilibrium medium-state entropy concept expanded by the introduction of a structural tensor parameter whose evolution characterizes the nonlinear anisotropic relaxation properties of a thermodynamic system and the associated transfer phenomena. The dynamic, thermal, and mass transfer characteristics of macrotransfer are determined by corresponding integrals of relaxation moments.

  5. An Interdisciplinary Approach to Teaching Elementary School Children about Water Management and Local Government.

    ERIC Educational Resources Information Center

    Araboglou, Argy

    1993-01-01

    Asserts that students have little knowledge about the operation of local government. Discusses a three-day interdisciplinary lesson about water management and local government for the elementary grades. Includes descriptions of laboratory exercises, homework assignments, and class discussions. (CFR)

  6. Hierarchical leak detection and localization method in natural gas pipeline monitoring sensor networks.

    PubMed

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate.
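
    The localization step can be illustrated with the simplest two-sensor case: on a straight pipe segment, the arrival-time difference of the leak-generated wave at the two ends fixes the leak position. The sketch below uses this textbook relation with illustrative numbers; the paper's weighted-average algorithm generalises it across multiple sensor pairs.

```python
def tdoa_leak_position(t1, t2, pipe_length, wave_speed):
    """Distance of the leak from sensor 1 on a straight pipe segment,
    from the arrival-time difference of the pressure/acoustic wave."""
    # t1 - t2 = (d1 - d2) / v  and  d1 + d2 = L  =>  d1 = (L + v * (t1 - t2)) / 2
    return 0.5 * (pipe_length + wave_speed * (t1 - t2))

# leak 300 m from sensor 1 on a 1000 m segment, wave speed ~1000 m/s (illustrative)
t1, t2 = 300.0 / 1000.0, 700.0 / 1000.0
print(tdoa_leak_position(t1, t2, pipe_length=1000.0, wave_speed=1000.0))  # -> 300.0
```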

  7. Hierarchical Leak Detection and Localization Method in Natural Gas Pipeline Monitoring Sensor Networks

    PubMed Central

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point’s position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate. PMID:22368464

  8. Biodiversity Monitoring at the Tonle Sap Lake of Cambodia: A Comparative Assessment of Local Methods

    NASA Astrophysics Data System (ADS)

    Seak, Sophat; Schmidt-Vogt, Dietrich; Thapa, Gopal B.

    2012-10-01

    This paper assesses local biodiversity monitoring methods practiced in the Tonle Sap Lake of Cambodia. For the assessment we used the following criteria: methodological rigor, perceived cost, ease of use (user friendliness), compatibility with existing activities, and effectiveness of intervention. Constraints and opportunities for execution of the methods were also considered. Information was collected by use of: (1) key informant interview, (2) focus group discussion, and (3) researcher's observation. The monitoring methods for fish, birds, reptiles, mammals and vegetation practiced in the research area have their unique characteristics of generating data on biodiversity and biological resources. Most of the methods, however, serve the purpose of monitoring biological resources rather than biodiversity. There is potential that the information gained through local monitoring methods can provide input for long-term management and strategic planning. In order to realize this potential, the local monitoring methods should be better integrated with each other, adjusted to existing norms and regulations, and institutionalized within community-based organization structures.

  9. Superconfiguration accounting approach versus average-atom model in local-thermodynamic-equilibrium highly ionized plasmas.

    PubMed

    Faussurier, G

    1999-06-01

    Statistical methods of describing and simulating complex ionized plasmas require the development of reliable and computationally tractable models. In that spirit, we propose the screened-hydrogenic average atom, augmented with corrections resulting from fluctuations of the occupation probabilities around the mean-field equilibrium, as an approximation to calculate the grand potential and related statistical properties. Our main objective is to check the validity of this approach by comparing its predictions with those given by the superconfiguration accounting method. The latter is well suited to this purpose. Indeed, this method makes it possible to go beyond the mean-field model by using nonperturbative, analytic, and systematic techniques. In addition, it allows us to establish the relationship between the detailed configuration accounting and the average-atom methods. To our knowledge, this is the first time that the superconfiguration description has been used in this context. Finally, this study also provides the occasion to present a powerful technique from analytic number theory to calculate superconfiguration-averaged quantities.

  10. Evaluation and comparison of current biopsy needle localization and tracking methods using 3D ultrasound.

    PubMed

    Zhao, Yue; Shen, Yi; Bernard, Adeline; Cachard, Christian; Liebgott, Hervé

    2017-01-01

    This article compares four different biopsy needle localization algorithms in both 3D and 4D situations to evaluate their accuracy and execution time. The localization algorithms were: principal component analysis (PCA), random Hough transform (RHT), parallel integral projection (PIP) and ROI-RK (ROI-based RANSAC and Kalman filter). To enhance the contrast between the biopsy needle and the background tissue, a line-filtering pre-processing step was implemented. To make the PCA, RHT and PIP algorithms comparable with the ROI-RK method, a region of interest (ROI) strategy was added. Simulated and ex-vivo data were used to evaluate the performance of the different biopsy needle localization algorithms. The resolutions of the sectorial and cylindrical volumes were 0.3 mm × 0.4 mm × 0.6 mm and 0.1 mm × 0.1 mm × 0.2 mm (axial × lateral × azimuthal), respectively. As far as the simulation and experimental results show, the ROI-RK method successfully located and tracked the biopsy needle in both 3D and 4D situations. The tip localization error was within 1.5 mm and the axis accuracy was within 1.6 mm. To the best of our knowledge, considering both localization accuracy and execution time, ROI-RK was the most stable and time-saving method. Normally, accuracy comes at the expense of time. However, the ROI-RK method was able to locate the biopsy needle with high accuracy in real time, which makes it a promising method for clinical applications.
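
    Of the four algorithms compared, the PCA-based one is the simplest to sketch: after thresholding, the needle axis is taken as the principal component of the bright-voxel coordinates. The code below is a generic illustration with a synthetic volume, not the evaluated implementations; the ROI selection and Kalman tracking stages of ROI-RK are omitted.

```python
import numpy as np

def pca_needle_axis(volume, intensity_thresh):
    """Fit a 3D line (centre point + unit direction) to the bright voxels that are
    assumed to belong to the needle, using the principal component of their coordinates."""
    pts = np.argwhere(volume > intensity_thresh).astype(float)  # (N, 3) voxel coordinates
    centre = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centre, full_matrices=False)
    direction = vt[0]                      # principal axis = needle direction
    proj = (pts - centre) @ direction
    tip = centre + proj.max() * direction  # extreme point along the axis as a tip estimate
    return centre, direction, tip

rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, (40, 40, 40))
for k in range(30):                        # synthetic bright needle along a diagonal
    vol[5 + k, 5 + k // 2, 5 + k // 3] = 1.0
centre, direction, tip = pca_needle_axis(vol, intensity_thresh=0.5)
print(direction, tip)
```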

  11. Nutrition and culture in professional football. A mixed method approach.

    PubMed

    Ono, Mutsumi; Kennedy, Eileen; Reeves, Sue; Cronin, Linda

    2012-02-01

    An adequate diet is essential for the optimal performance of professional football (soccer) players. Existing studies have shown that players fail to consume such a diet, without interrogating the reasons for this. The aim of this study was to explore the difficulties professional football players experience in consuming a diet for optimal performance. It utilized a mixed method approach, combining nutritional intake assessment with qualitative interviews, to ascertain both what was consumed and the wider cultural factors that affect consumption. The study found a high variability in individual intake which ranged widely from 2648 to 4606 kcal/day. In addition, the intake of carbohydrate was significantly lower than that recommended. The study revealed that the main food choices for carbohydrate and protein intake were pasta and chicken respectively. Interview results showed the importance of tradition within the world of professional football in structuring the players' approach to nutrition. In addition, the players' personal eating habits that derived from their class and national habitus restricted their food choice by conflicting with the dietary choices promoted within the professional football clubs.

  12. An Integrated Approach to Supplying the Local Table: Perceptions of Consumers, Producers, and Restaurateurs

    ERIC Educational Resources Information Center

    Wise, Dena; Sneed, Christopher; Velandia, Margarita; Berry, Ann; Rhea, Alice; Fairhurst, Ann

    2013-01-01

    The Local Table project compared results from parallel surveys of consumers and restaurateurs regarding local food purchasing and use. Results were also compared with producers' perception of, capacity for and participation in direct marketing through local venues, on-farm outlets, and restaurants. The surveys found consumers' and restaurateurs'…

  13. Vestibular Extension along with Frenectomy in Management of Localized Gingival Recession in Pediatric Patient: A New Innovative Surgical Approach.

    PubMed

    Jingarwar, Mahesh; Pathak, Anuradha; Bajwa, Navroop Kaur; Kalaskar, Ritesh

    2015-01-01

    This paper reports a case of pediatric localized gingival recession (LGR) in the mandibular anterior region that was treated using a new, innovative surgical approach, i.e., a combination of frenectomy and vestibular extension. This interceptive surgery not only gained sufficient width of attached gingiva but also lowered the attachment of the labial frenum. How to cite this article: Jingarwar M, Pathak A, Bajwa NK, Kalaskar R. Vestibular Extension along with Frenectomy in Management of Localized Gingival Recession in Pediatric Patient: A New Innovative Surgical Approach. Int J Clin Pediatr Dent 2015;8(3):224-226.

  14. Vestibular Extension along with Frenectomy in Management of Localized Gingival Recession in Pediatric Patient: A New Innovative Surgical Approach

    PubMed Central

    Pathak, Anuradha; Bajwa, Navroop Kaur; Kalaskar, Ritesh

    2015-01-01

    This paper reports a case of pediatric localized gingival recession (LGR) in the mandibular anterior region that was treated using a new innovative surgical approach, i.e., a combination of frenectomy and vestibular extension. These interceptive surgeries not only gained sufficient width of attached gingiva but also lowered the attachment of the labial frenum. How to cite this article: Jingarwar M, Pathak A, Bajwa NK, Kalaskar R. Vestibular Extension along with Frenectomy in Management of Localized Gingival Recession in Pediatric Patient: A New Innovative Surgical Approach. Int J Clin Pediatr Dent 2015;8(3):224-226. PMID:26604542

  15. Effects of the decellularization method on the local stiffness of acellular lungs.

    PubMed

    Melo, Esther; Garreta, Elena; Luque, Tomas; Cortiella, Joaquin; Nichols, Joan; Navajas, Daniel; Farré, Ramon

    2014-05-01

    Lung bioengineering, a novel approach to obtain organs potentially available for transplantation, is based on decellularizing donor lungs and seeding natural scaffolds with stem cells. Various physicochemical protocols have been used to decellularize lungs, and their performance has been evaluated in terms of efficient decellularization and matrix preservation. No data are available, however, on the effect of different decellularization procedures on the local stiffness of the acellular lung. This information is important since stem cells directly sense the rigidity of the local site they are engrafting to during recellularization, and it has been shown that substrate stiffness modulates cell fate into different phenotypes. The aim of this study was to assess the effects of the decellularization procedure on the inhomogeneous local stiffness of the acellular lung on five different sites: alveolar septa, alveolar junctions, pleura, and vessels' tunica intima and tunica adventitia. Local matrix stiffness was measured by computing Young's modulus with atomic force microscopy after decellularizing the lungs of 36 healthy rats (Sprague-Dawley, male, 250-300 g) with four different protocols with/without perfusion through the lung circulatory system and using two different detergents (sodium dodecyl sulfate [SDS] and 3-[(3-cholamidopropyl) dimethylammonio]-1-propanesulfonate [CHAPS]). The local stiffness of the acellular lung matrix significantly depended on the site within the matrix (p<0.001), ranging from ∼ 15 kPa at the alveolar septum to ∼ 60 kPa at the tunica intima. Acellular lung stiffness (p=0.003) depended significantly, albeit modestly, on the decellularization process. Whereas perfusion did not induce any significant differences in stiffness, the use of CHAPS resulted in a ∼ 35% reduction compared with SDS, the influence of the detergent being more important in the tunica intima. In conclusion, lung matrix stiffness is considerably inhomogeneous, and

  16. A Historical Perspective on Local Environmental Movements in Japan: Lessons for the Transdisciplinary Approach on Water Resource Governance

    NASA Astrophysics Data System (ADS)

    Oh, T.

    2014-12-01

    Typical studies on natural resources from a social science perspective tend to choose one type of resource (water, for example) and ask what factors contribute to the sustainable use or wasteful exploitation of that resource. However, climate change and economic development, which are placing increased pressure on local resources and presenting communities with greater trade-offs and potential conflicts, force us to consider the trade-offs between options for using a particular resource. A transdisciplinary approach that accurately captures the advantages and disadvantages of various possible resource uses is therefore particularly important in complex social-ecological systems, where concerns about inequality in resource use and access have become unavoidable. Resource management and policy require a sound scientific understanding of the complex interconnections between nature and society. In contrast to typical international discussions, however, I discuss Japan not as an "advanced" case where various dilemmas have been successfully addressed by the government through the optimal use of technology, but rather as a nation seeing an emerging trend based on an awareness of the connections between local resources and the environment. Furthermore, from a historical viewpoint, the nexus of local resources is not a brand-new idea in the experience of environmental governance in Japan. There have been local environmental movements that emphasized the interconnection of local resources and succeeded in prompting government action and policymaking. For this reason, local movements and local knowledge for resource governance warrant attention. This study focuses on historical cases relevant to water resource management, including groundwater, and considers the contexts and conditions needed to holistically address local resource problems, paying particular attention to interactions between science and society.

  17. A Computationally Efficient Meshless Local Petrov-Galerkin Method for Axisymmetric Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Chen, T.

    2003-01-01

    The Meshless Local Petrov-Galerkin (MLPG) method is one of the recently developed element-free methods. The method is convenient and can produce accurate results with continuous secondary variables, but is more computationally expensive than the finite element method. To overcome this disadvantage, a simple Heaviside test function is chosen. The computational effort is significantly reduced by eliminating the domain integral for the axisymmetric potential problems and by simplifying the domain integral for the axisymmetric elasticity problems. The method is evaluated through several patch tests for axisymmetric problems and example problems for which the exact solutions are available. The present method yielded very accurate solutions. The sensitivity of several parameters of the method is also studied.

  18. Analysing clinical reasoning characteristics using a combined methods approach

    PubMed Central

    2013-01-01

    Background Despite a major research focus on clinical reasoning over the last several decades, a method of evaluating the clinical reasoning process that is both objective and comprehensive is yet to be developed. The aim of this study was to test whether a dual approach, using two measures of clinical reasoning, the Clinical Reasoning Problem (CRP) and the Script Concordance Test (SCT), provides a valid, reliable and targeted analysis of clinical reasoning characteristics to facilitate the development of diagnostic thinking in medical students. Methods Three groups of participants, namely general practitioners and third and fourth (final) year medical students, completed 20 on-line clinical scenarios: 10 in CRP and 10 in SCT format. Scores for each format were analysed for reliability, correlation between the two formats and differences between subject groups. Results Cronbach’s alpha coefficient ranged from 0.36 for SCT 1 to 0.61 for CRP 2. Statistically significant correlations were found between the mean f-score of CRP 2 and the total SCT 2 score (0.69), and between the mean f-score for all CRPs and all mean SCT scores (0.57 and 0.47, respectively). The pass/fail rates of the SCT and the CRP f-score are in keeping with the findings from the correlation analysis (i.e. 31% of students (11/35) passed both, 26% failed both, and 43% (15/35) passed one but not the other test), and suggest that the two formats measure overlapping but not identical characteristics. One-way ANOVA showed consistent differences in scores between levels of expertise, with these differences being significant or approaching significance for the CRPs. Conclusion SCTs and CRPs are overlapping and complementary measures of clinical reasoning. Whilst SCTs are more efficient to administer, the use of both measures provides a more comprehensive appraisal of clinical skills than either single measure alone, and as such could potentially facilitate the customised teaching of clinical reasoning for

  19. Automatic localization of endoscope in intraoperative CT image: A simple approach to augmented reality guidance in laparoscopic surgery.

    PubMed

    Bernhardt, Sylvain; Nicolau, Stéphane A; Agnus, Vincent; Soler, Luc; Doignon, Christophe; Marescaux, Jacques

    2016-05-01

    The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach.

  20. A robust medical image segmentation method using KL distance and local neighborhood information.

    PubMed

    Zheng, Qian; Lu, Zhentai; Yang, Wei; Zhang, Minghui; Feng, Qianjin; Chen, Wufan

    2013-06-01

    In this paper, we propose an improved Chan-Vese (CV) model that uses Kullback-Leibler (KL) distances and local neighborhood information (LNI). Owing to tissue heterogeneity and complex structures, the performance of level set segmentation is confounded by the presence of nearby structures of similar intensity, preventing it from discerning the exact boundary of the object. Moreover, the CV model usually cannot obtain accurate results in medical image segmentation even with an optimal configuration of the controlling parameters, which requires substantial manual intervention. To overcome these deficiencies, we improve the segmentation accuracy through the use of the KL distance and LNI, thereby introducing local image characteristics. The performance of the present method was evaluated through experiments on synthetic images and a series of real medical images. The extensive experimental results showed the superior performance of the proposed method over state-of-the-art methods, in terms of both robustness and efficiency.

  1. Model-based Object Localization and Pose Estimation Method Robust Against the Stereo Miscorrespondence

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaharu; Tomita, Fumiaki; Maruyama, Kenichi; Kawai, Yoshihiro; Fujimura, Kouta

    Miscorrespondence in stereo image analysis, which is caused by occlusions between images and by failures in edge detection, often occurs in real factory environments and seriously disturbs object localization and pose estimation. This work shows that, even under such conditions, the location and attitude of a target object can be measured precisely, based on three-baseline trinocular stereo image analysis, using a “model-based verification” method, i.e., a model-based object recognition method that includes a multi-modal optimization algorithm. This method is suitable for real applications that need object localization and pose estimation, such as bin picking of parts randomly placed in factory automation settings.

  2. Method for the three-dimensional localization of intramyocardial excitation centers using optical imaging.

    PubMed

    Khait, Vadim D; Bernus, Olivier; Mironov, Sergey F; Pertsov, Arkady M

    2006-01-01

    This study explores the possibility of localizing the excitation centers of electrical waves inside the heart wall using voltage-sensitive dyes (fluorescent or absorptive). In the present study, we propose a method for the 3-D localization of excitation centers from pairs of 2-D images obtained in two modes of observation: reflection and transillumination. Such images can be obtained using high-speed charge-coupled device (CCD) cameras and photodiode arrays with time resolution up to 0.5 ms. To test the method, we simulate optical signals produced by point sources and propagating ellipsoidal waves in 1-cm-thick slabs of myocardial tissue. Solutions of the optical diffusion equation are constructed by employing the method of images with Robin boundary conditions. The coordinates of point sources as well as of the centers of expanding waves can be accurately determined using the proposed algorithm. The method can be extended to depth estimations of the outer boundaries of the expanding wave. The depth estimates are based on ratios of spatially integrated images. The method shows high tolerance to noise and can give accurate results even at relatively low signal-to-noise ratios. In conclusion, we propose a novel and efficient algorithm for the localization of excitation centers in 3-D cardiac tissue.

  3. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    PubMed

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
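
    The core of the synthetic control method is a constrained least-squares fit: donor-unit weights are chosen to be non-negative and to sum to one so that the weighted combination reproduces the treated unit's pre-intervention outcomes as closely as possible. The sketch below illustrates that step only, with assumed array shapes and names; the actual study also matches on covariates and assesses significance with placebo tests, which are omitted here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def synthetic_control_weights(y_treated_pre, Y_donors_pre):
        """Fit convex donor weights to pre-intervention outcomes.

        y_treated_pre : (T_pre,) outcome of the treated unit before the intervention
        Y_donors_pre  : (n_donors, T_pre) outcomes of the donor pool over the same years
        """
        n = Y_donors_pre.shape[0]
        loss = lambda w: np.sum((y_treated_pre - w @ Y_donors_pre) ** 2)
        res = minimize(loss, x0=np.full(n, 1.0 / n), method="SLSQP",
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        return res.x

    # Counterfactual post-intervention path: apply the fitted weights to donor outcomes
    # y_counterfactual_post = synthetic_control_weights(y_pre, Y_pre) @ Y_donors_post
    ```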

  4. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    PubMed Central

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on

  5. Meshless Local Petrov-Galerkin Method for Shallow Shells with Functionally Graded and Orthotropic Material Properties

    NASA Astrophysics Data System (ADS)

    Sladek, J.; Sladek, V.; Zhang, Ch.

    2008-02-01

    A meshless local Petrov-Galerkin (MLPG) formulation is presented for the analysis of shear deformable shallow shells with orthotropic material properties and continuously varying material properties through the shell thickness. Shear deformation of the shells is described by the Reissner theory. Analyses of shells under static and dynamic loads are given. For the transient elastodynamic case, the Laplace transform is used to eliminate the time dependence of the field variables. A weak formulation with a unit test function transforms the set of governing equations into local integral equations on local subdomains in the plane domain of the shell. The meshless approximation based on the Moving Least-Squares (MLS) method is employed for the implementation.
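
    Because the MLS approximation is the building block of this and the other MLPG records above, a short one-dimensional sketch of MLS shape functions is given below. It is a minimal, generic illustration with a linear basis and Gaussian weight functions; the node spacing, support radius and function names are assumptions for the example, not parameters taken from the paper.

    ```python
    import numpy as np

    def mls_shape_functions(x, nodes, support_radius):
        """1D moving least-squares shape functions with a linear basis p = [1, x]."""
        p_x = np.array([1.0, x])
        # Gaussian weight for each node, truncated at the support radius
        r = np.abs(x - nodes) / support_radius
        w = np.where(r < 1.0, np.exp(-(r / 0.4) ** 2), 0.0)
        P = np.column_stack([np.ones_like(nodes), nodes])     # basis evaluated at the nodes
        A = (P * w[:, None]).T @ P                             # moment matrix A(x)
        B = (P * w[:, None]).T                                 # B(x)
        return p_x @ np.linalg.solve(A, B)                     # phi_I(x), one entry per node

    # Usage: approximate u(x) = sum_I phi_I(x) * u_I from nodal values u_I
    nodes = np.linspace(0.0, 1.0, 11)
    u_nodes = np.sin(np.pi * nodes)
    phi = mls_shape_functions(0.37, nodes, support_radius=0.25)
    print(phi @ u_nodes)   # MLS approximation of sin(pi * 0.37)
    ```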

  6. General Method for Constructing Local Hidden Variable Models for Entangled Quantum States

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Guerini, L.; Rabelo, R.; Skrzypczyk, P.

    2016-11-01

    Entanglement allows for the nonlocality of quantum theory, which is the resource behind device-independent quantum information protocols. However, not all entangled quantum states display nonlocality. A central question is to determine the precise relation between entanglement and nonlocality. Here we present the first general test to decide whether a quantum state is local, and show that the test can be implemented by semidefinite programming. This method can be applied to any given state and for the construction of new examples of states with local hidden variable models for both projective and general measurements. As applications, we provide a lower-bound estimate of the fraction of two-qubit local entangled states and present new explicit examples of such states, including those that arise from physical noise models, Bell-diagonal states, and noisy Greenberger-Horne-Zeilinger and W states.

  7. Using tailored methodical approaches to achieve optimal science outcomes

    NASA Astrophysics Data System (ADS)

    Wingate, Lory M.

    2016-08-01

    The science community is actively engaged in the research, development, and construction of instrumentation projects that they anticipate will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations.[2] The combination of using internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects lead to unpredictable, suboptimal outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two specific methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management (PM) is accepted worldwide as an effective methodology used to control project cost, schedule, and scope. Systems engineering (SE) is an accepted method used to ensure that the outcomes of a project match the intent of the stakeholders, or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation for optimizing the success of projects that support science.

  8. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    PubMed Central

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-01-01

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972

  9. A True Delphi Approach: Developing a Tailored Curriculum in Response to Local Agriscience Need

    SciTech Connect

    Rubenstein, Eric; Thoron, Andrew; Burleson, Sarah

    2012-02-07

    The Delphi approach is a structured communication technique, developed as a systematic, interactive forecasting method that relies on a panel of experts. In this specific case, experts from the Industry, Education and Extension fields addressed the needs for educational programs in a traditional agriculturally based community, environmentally conscious practices to restore environmental integrity, and a multi-disciplinary approach to solving sustainability problems facing the agricultural industry. The experts were divided into two main groups, (A) secondary and (B) post-secondary, and answered questionnaires in three rounds: in the first round, participants generated a list of knowledge, skills, and competencies; in the second round, panelists rated each item; and in the third round, panelists were given the opportunity to combine items and add additional ones. As a result, the top six items from the two groups were not similar: secondary panelists centered on employment skills, while post-secondary panelists focused on content areas. Implications include a need for a content-based curriculum for post-secondary graduates, utilization of the true-Delphi technique for future curriculum development research, and further examination of students who complete secondary and post-secondary programs in biofuels/sustainable agriculture.

  10. A novel crystallization method for visualizing the membrane localization of potassium channels.

    PubMed Central

    Lopatin, A N; Makhina, E N; Nichols, C G

    1998-01-01

    The high permeability of K+ channels to monovalent thallium (Tl+) ions and the low solubility of thallium bromide salt were used to develop a simple yet very sensitive approach to the study of membrane localization of potassium channels. K+ channels (Kir1.1, Kir2.1, Kir2.3, Kv2.1), were expressed in Xenopus oocytes and loaded with Br ions by microinjection. Oocytes were then exposed to extracellular thallium. Under conditions favoring influx of Tl+ ions (negative membrane potential under voltage clamp, or high concentration of extracellular Tl+), crystals of TlBr, visible under low-power microscopy, formed under the membrane in places of high density of K+ channels. Crystals were not formed in uninjected oocytes, but were formed in oocytes expressing as little as 5 microS K+ conductance. The number of observed crystals was much lower than the estimated number of functional channels. Based on the pattern of crystal formation, K+ channels appear to be expressed mostly around the point of cRNA injection when injected either into the animal or vegetal hemisphere. In addition to this pseudopolarized distribution of K+ channels due to localized microinjection of cRNA, a naturally polarized (animal/vegetal side) distribution of K+ channels was also frequently observed when K+ channel cRNA was injected at the equator. A second novel "agarose-hemiclamp" technique was developed to permit direct measurements of K+ currents from different hemispheres of oocytes under two-microelectrode voltage clamp. This technique, together with direct patch-clamping of patches of membrane in regions of high crystal density, confirmed that the localization of TlBr crystals corresponded to the localization of functional K+ channels and suggested a clustered organization of functional channels. With appropriate permeant ion/counterion pairs, this approach may be applicable to the visualization of the membrane distribution of any functional ion channel. PMID:9591643

  11. A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET.

    PubMed

    Hatt, Mathieu; Cheze le Rest, Catherine; Turzo, Alexandre; Roux, Christian; Visvikis, Dimitris

    2009-06-01

    Accurate volume estimation in positron emission tomography (PET) is crucial for different oncology applications. The objective of our study was to develop a new fuzzy locally adaptive Bayesian (FLAB) segmentation for automatic lesion volume delineation. FLAB was compared with a threshold approach as well as the previously proposed fuzzy hidden Markov chains (FHMC) and the fuzzy C-Means (FCM) algorithms. The performance of the algorithms was assessed on acquired datasets of the IEC phantom, covering a range of spherical lesion sizes (10-37 mm), contrast ratios (4:1 and 8:1), noise levels (1, 2, and 5 min acquisitions), and voxel sizes (8 and 64 mm³). In addition, the performance of the FLAB model was assessed on realistic nonuniform and nonspherical volumes simulated from patient lesions. Results show that FLAB performs better than the other methodologies, particularly for smaller objects. The volume error was 5%-15% for the different sphere sizes (down to 13 mm), contrast and image qualities considered, with a high reproducibility (variation < 4%). By comparison, the thresholding results were greatly dependent on image contrast and noise, whereas FCM results were less dependent on noise but consistently failed to segment lesions < 2 cm. In addition, FLAB performed consistently better for lesions < 2 cm in comparison to the FHMC algorithm. Finally the FLAB model provided errors less than 10% for nonspherical lesions with inhomogeneous activity distributions. Future developments will concentrate on an extension of FLAB in order to allow the segmentation of separate activity distribution regions within the same functional volume as well as a robustness study with respect to different scanners and reconstruction algorithms.

  12. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches.

    PubMed

    Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel

    2016-01-01

    Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for

  13. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches

    PubMed Central

    Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D.; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel

    2016-01-01

    Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for

  14. Full-Field Strain Measurement On Titanium Welds And Local Elasto-Plastic Identification With The Virtual Fields Method

    SciTech Connect

    Tattoli, F.; Casavola, C.; Pierron, F.; Rotinat, R.; Pappalettere, C.

    2011-01-17

    One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.
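
    For readers less familiar with the Virtual Fields Method, its starting point is the principle of virtual work. Under the usual assumptions of quasi-static loading and negligible body forces it reads, for any kinematically admissible virtual displacement field u* with associated virtual strain field ε* (this is a generic statement of the approach, not the specific virtual fields chosen for the weld zones in the paper):

    ```latex
    \int_{V} \boldsymbol{\sigma} : \boldsymbol{\varepsilon}^{*} \, \mathrm{d}V
      = \int_{\partial V} \mathbf{T} \cdot \mathbf{u}^{*} \, \mathrm{d}S
    ```

    Substituting a parametric constitutive law \(\boldsymbol{\sigma} = \mathbf{g}(\boldsymbol{\varepsilon}; \boldsymbol{\theta})\) and writing this equation for several independent virtual fields turns the measured full-field strains directly into a system of equations for the sought parameters \(\boldsymbol{\theta}\), which is why no iterative finite element model updating is required.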

  15. Full-Field Strain Measurement On Titanium Welds And Local Elasto-Plastic Identification With The Virtual Fields Method

    NASA Astrophysics Data System (ADS)

    Tattoli, F.; Pierron, F.; Rotinat, R.; Casavola, C.; Pappalettere, C.

    2011-01-01

    One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.

  16. Distortion correction in EPI using an extended PSF method with a reversed phase gradient approach.

    PubMed

    In, Myung-Ho; Posnansky, Oleg; Beall, Erik B; Lowe, Mark J; Speck, Oliver

    2015-01-01

    In echo-planar imaging (EPI), such as commonly used for functional MRI (fMRI) and diffusion-tensor imaging (DTI), compressed distortion is a more difficult challenge than local stretching as spatial information can be lost in strongly compressed areas. In addition, the effects are more severe at ultra-high field (UHF) such as 7T due to increased field inhomogeneity. To resolve this problem, two EPIs with opposite phase-encoding (PE) polarity were acquired and combined after distortion correction. For distortion correction, a point spread function (PSF) mapping method was chosen due to its high correction accuracy and extended to perform distortion correction of both EPIs with opposite PE polarity thus reducing the PSF reference scan time. Because the amount of spatial information differs between the opposite PE datasets, the method was further extended to incorporate a weighted combination of the two distortion-corrected images to maximize the spatial information content of a final corrected image. The correction accuracy of the proposed method was evaluated in distortion-corrected data using both forward and reverse phase-encoded PSF reference data and compared with the reversed gradient approaches suggested previously. Further we demonstrate that the extended PSF method with an improved weighted combination can recover local distortions and spatial information loss and be applied successfully not only to spin-echo EPI, but also to gradient-echo EPIs acquired with both PE directions to perform geometrically accurate image reconstruction.
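
    A simplified way to picture the weighted combination of the two opposite-polarity acquisitions is sketched below: after each image has been unwarped, the locally compressed acquisition carries less information, so each voxel is weighted by the local Jacobian of its distortion along the phase-encoding axis. This is only an illustrative sketch of the reversed-gradient idea in general, with assumed array names and a shift map expressed in voxels; it does not reproduce the PSF-based weighting actually developed in the paper.

    ```python
    import numpy as np

    def combine_opposite_pe(img_up, img_down, shift_voxels, pe_axis=0):
        """Weighted combination of two distortion-corrected EPI images.

        img_up, img_down : images already unwarped from opposite PE polarities
        shift_voxels     : displacement map (in voxels) for the 'up' polarity;
                           the 'down' polarity is assumed to have the opposite shift
        """
        dshift = np.gradient(shift_voxels, axis=pe_axis)
        j_up = np.clip(1.0 + dshift, 0.0, None)    # local stretching/compression, 'up' scan
        j_down = np.clip(1.0 - dshift, 0.0, None)  # same for the reversed polarity
        weight_sum = j_up + j_down
        weight_sum[weight_sum == 0] = 1.0          # avoid division by zero in fully lost regions
        return (j_up * img_up + j_down * img_down) / weight_sum
    ```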

  17. Local tolerance testing under REACH: Accepted non-animal methods are not on equal footing with animal tests.

    PubMed

    Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert

    2016-07-01

    In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required.

  18. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
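
    The refinement criterion described above (flagging elements according to the locally largest value of the density gradient) can be illustrated with a few lines of array code. The sketch assumes a per-element indicator has already been computed and uses hypothetical threshold fractions; the actual thresholds and the DG data structures in the paper are more involved.

    ```python
    import numpy as np

    def flag_elements(grad_rho_norm, refine_frac=0.5, coarsen_frac=0.05):
        """Mark elements for refinement/coarsening from a density-gradient indicator.

        grad_rho_norm : (n_elements,) norm of the density gradient per element
        refine_frac   : refine elements whose indicator exceeds this fraction of the maximum
        coarsen_frac  : coarsen elements whose indicator falls below this fraction of the maximum
        """
        eta_max = grad_rho_norm.max()
        refine = grad_rho_norm > refine_frac * eta_max
        coarsen = grad_rho_norm < coarsen_frac * eta_max
        return refine, coarsen
    ```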

  19. Approaching complexity by stochastic methods: From biological systems to turbulence

    NASA Astrophysics Data System (ADS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-09-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
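
    The reconstruction step mentioned in item (i) is often carried out by estimating the first two Kramers-Moyal coefficients, i.e. the drift and diffusion of the Langevin equation, from conditional moments of the increments. The sketch below is a minimal, generic version of that estimator for a scalar time series; the bin count and variable names are assumptions, and a careful analysis would also verify Markovianity above the Markov-Einstein scale and correct for the finite sampling interval.

    ```python
    import numpy as np

    def kramers_moyal_1d(x, dt, n_bins=40):
        """Estimate drift D1(x) and diffusion D2(x) from a scalar time series.

        D1(x) = <x(t+dt) - x(t) | x(t)=x> / dt
        D2(x) = <(x(t+dt) - x(t))**2 | x(t)=x> / (2*dt)
        """
        dx = np.diff(x)
        xc = x[:-1]
        edges = np.linspace(xc.min(), xc.max(), n_bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        idx = np.clip(np.digitize(xc, edges) - 1, 0, n_bins - 1)
        d1 = np.full(n_bins, np.nan)
        d2 = np.full(n_bins, np.nan)
        for b in range(n_bins):
            sel = idx == b
            if sel.sum() > 10:                      # require enough samples per bin
                d1[b] = dx[sel].mean() / dt
                d2[b] = (dx[sel] ** 2).mean() / (2.0 * dt)
        return centers, d1, d2
    ```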

  20. A Local Order Parameter-Based Method for Simulation of Free Energy Barriers in Crystal Nucleation.

    PubMed

    Eslami, Hossein; Khanjari, Neda; Müller-Plathe, Florian

    2017-03-14

    While global order parameters have been widely used as reaction coordinates in nucleation and crystallization studies, their use in nucleation studies is claimed to have a serious drawback. In this work, a local order parameter is introduced as a local reaction coordinate to drive the simulation from the liquid phase to the solid phase and vice versa. This local order parameter holds information regarding the order in the first- and second-shell neighbors of a particle and has different well-defined values for local crystallites and disordered neighborhoods but is insensitive to the type of the crystal structure. The order parameter is employed in metadynamics simulations to calculate the solid-liquid phase equilibria and free energy barrier to nucleation. Our results for repulsive soft spheres and the Lennard-Jones potential, LJ(12-6), reveal better-resolved solid and liquid basins compared with the case in which a global order parameter is used. It is also shown that the configuration space is sampled more efficiently in the present method, allowing a more accurate calculation of the free energy barrier and the solid-liquid interfacial free energy. Another feature of the present local order parameter-based method is that it is possible to apply the bias potential to regions of interest in the order parameter space, for example, on the largest nucleus in the case of nucleation studies. In the present scheme for metadynamics simulation of the nucleation in supercooled LJ(12-6) particles, unlike the cases in which global order parameters are employed, there is no need to have an estimate of the size of the critical nucleus and to refine the results with the results of umbrella sampling simulations. The barrier heights and the nucleation pathway obtained from this method agree very well with the results of former umbrella sampling simulations.
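
    As a point of comparison for readers, a widely used local crystallinity measure is the Steinhardt bond-orientational parameter q6, which, like the order parameter described above, is computed per particle from the arrangement of its neighbors. The sketch below implements plain q6 with a brute-force neighbor search; note that this is a common alternative, not the two-shell, structure-insensitive order parameter defined by the authors.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    def steinhardt_q6(positions, cutoff):
        """Per-particle Steinhardt q6 from neighbor bond directions within a cutoff."""
        l = 6
        n = len(positions)
        q6 = np.zeros(n)
        for i in range(n):
            rij = positions - positions[i]
            dist = np.linalg.norm(rij, axis=1)
            bonds = rij[(dist > 1e-12) & (dist < cutoff)]
            if len(bonds) == 0:
                continue
            d = np.linalg.norm(bonds, axis=1)
            polar = np.arccos(np.clip(bonds[:, 2] / d, -1.0, 1.0))   # angle from the z axis
            azim = np.arctan2(bonds[:, 1], bonds[:, 0])              # angle in the xy plane
            # Average the spherical harmonics over the bonds, then contract over m
            qlm = np.array([sph_harm(m, l, azim, polar).mean() for m in range(-l, l + 1)])
            q6[i] = np.sqrt(4.0 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm) ** 2))
        return q6
    ```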

  1. Experimental validation of normalized uniform load surface curvature method for damage localization.

    PubMed

    Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo

    2015-10-16

    In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity irrespective of the damage location. In this study, damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under an ambient excitation. Two damage scenarios were considered for the single damage case as well as the multiple damages case by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests were performed. It was found that the damage locations could be identified successfully without any false-positive or false-negative detections using the proposed method. For comparison, the damage detection performances were compared with those of two other well-known methods based on the modal flexibility matrix, namely, the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for investigating the damage locations of simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise.
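
    The quantities this validation relies on can be summarized compactly: the modal flexibility matrix is assembled from the identified mode shapes and frequencies, the uniform load surface is its row sum, and damage shows up as a localized change in the curvature of that surface. The sketch below computes the plain (un-normalized) ULS curvature for a beam from assumed mass-normalized mode shapes; the normalization step that gives the NULS method its name is not reproduced here.

    ```python
    import numpy as np

    def uls_curvature(mode_shapes, freqs_hz, dx):
        """Uniform load surface curvature from identified modal parameters.

        mode_shapes : (n_points, n_modes) mass-normalized mode shapes along the beam
        freqs_hz    : (n_modes,) natural frequencies
        dx          : spacing between measurement points
        """
        omega2 = (2.0 * np.pi * np.asarray(freqs_hz)) ** 2
        flexibility = (mode_shapes / omega2) @ mode_shapes.T     # F = sum_i phi_i phi_i^T / w_i^2
        uls = flexibility @ np.ones(flexibility.shape[0])        # deflection under a uniform load
        curv = np.zeros_like(uls)
        curv[1:-1] = (uls[2:] - 2.0 * uls[1:-1] + uls[:-2]) / dx**2
        return curv

    # Damage index: absolute change in ULS curvature between baseline and damaged states
    # delta = np.abs(uls_curvature(phi_dmg, f_dmg, dx) - uls_curvature(phi_ref, f_ref, dx))
    ```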

  2. A Grounded Approach to Citizenship Education: Local Interplays between Government Institutions, Adult Schools, and Community Events in Sacramento, California

    ERIC Educational Resources Information Center

    Loring, Ariel

    2015-01-01

    Following a grounded, bottom-up approach to language policy (Blommaert, 2009; Canagarajah, 2005; McCarty, 2011; Ramanathan, 2005), this paper investigates the resources and discourses of citizenship available in Sacramento, California, to those situated within the citizenship infrastructure. It analyzes how the discursive framing of local and national…

  3. Reducing uncertainty in flood frequency analyses: A comparison of local and regional approaches involving information on extreme historical floods

    NASA Astrophysics Data System (ADS)

    Halbert, K.; Nguyen, C. C.; Payrastre, O.; Gaume, E.

    2016-10-01

    This paper proposes a detailed comparison of local and regional approaches for flood frequency analyses, with a special emphasis on the effects of (a) the information on extreme floods used in the analysis (historical data or recent extreme floods observed at ungauged sites), and (b) the assumptions associated with regional approaches (statistical homogeneity of the considered series, independence of observations). The results presented are based on two case studies: the Ardèche and Argens river regions in the south-east of France. Four approaches are compared: (1) local analysis based on continuous measured series, (2) local analysis with historical information, (3) regional index-flood analysis based on continuous series, and (4) regional analysis involving information on extremes (including both historical floods and recent floods observed at ungauged sites). The inference is based on a GEV distribution and a Bayesian Markov chain Monte Carlo (MCMC) approach for parameter estimation. The comparison relies both on (1) available observed datasets and (2) Monte Carlo simulations, in order to evaluate the effects of sampling variability and to analyze the possible influence of regional heterogeneities. The results indicate that a relatively limited level of regional heterogeneity, which may not be detected through homogeneity tests, may significantly affect the performance of regional approaches. These results also illustrate the added value of information on extreme floods, whether historical floods or recent floods observed at ungauged sites, in both local and regional approaches. As far as possible, gathering such information and incorporating it into flood frequency studies should be promoted. Finally, the presented Monte Carlo simulations appear to be an interesting analysis tool for adapting the estimation strategy to the available data for each specific case study.
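
    To make the distributional ingredient concrete, the snippet below fits a GEV distribution to a series of annual maximum discharges by maximum likelihood and derives return levels. The numbers are made up and the plain scipy fit is only a stand-in: the paper's approach is Bayesian (MCMC) and additionally incorporates historical and regional information, which this sketch does not.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # Hypothetical annual maximum discharges at one gauged site (m^3/s)
    ams = np.array([120., 95., 210., 160., 310., 140., 180., 450., 130., 220.,
                    175., 260., 390., 150., 205., 185., 240., 170., 520., 230.])

    shape, loc, scale = genextreme.fit(ams)            # maximum-likelihood GEV fit
    return_periods = np.array([10, 50, 100])           # years
    quantiles = genextreme.ppf(1.0 - 1.0 / return_periods, shape, loc=loc, scale=scale)
    for T, q in zip(return_periods, quantiles):
        print(f"{T:>3d}-year flood quantile: {q:.0f} m^3/s")
    ```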

  4. Earth Observation and Indicators Pertaining to Determinants of Health- An Approach to Support Local Scale Characterization of Environmental Determinants of Vector-Borne Diseases

    NASA Astrophysics Data System (ADS)

    Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Pilippe

    2016-08-01

    Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at the local scale and for large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework to overcome this challenge. However, the EO image analysis methods commonly used to estimate EVDs are time- and resource-consuming. Moreover, variations in microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we present what DoH and EVDs are, the impacts of EVDs on vector-borne diseases in the context of global environmental change, and the need to characterize EVDs of vector-borne diseases at the local scale together with its challenges; finally, we propose an approach based on EO images to estimate, at the local scale, indicators pertaining to EVDs of vector-borne diseases.

  5. Obtaining Highly Excited Eigenstates of Many-Body Localized Hamiltonians by the Density Matrix Renormalization Group Approach.

    PubMed

    Khemani, Vedika; Pollmann, Frank; Sondhi, S L

    2016-06-17

    The eigenstates of many-body localized (MBL) Hamiltonians exhibit low entanglement. We adapt the highly successful density-matrix renormalization group method, which is usually used to find modestly entangled ground states of local Hamiltonians, to find individual highly excited eigenstates of MBL Hamiltonians. The adaptation builds on the distinctive spatial structure of such eigenstates. We benchmark our method against the well-studied random field Heisenberg model in one dimension. At moderate to large disorder, the method successfully obtains excited eigenstates with high accuracy, thereby enabling a study of MBL systems at much larger system sizes than those accessible to exact-diagonalization methods.

  6. Obtaining Highly Excited Eigenstates of Many-Body Localized Hamiltonians by the Density Matrix Renormalization Group Approach

    NASA Astrophysics Data System (ADS)

    Khemani, Vedika; Pollmann, Frank; Sondhi, S. L.

    2016-06-01

    The eigenstates of many-body localized (MBL) Hamiltonians exhibit low entanglement. We adapt the highly successful density-matrix renormalization group method, which is usually used to find modestly entangled ground states of local Hamiltonians, to find individual highly excited eigenstates of MBL Hamiltonians. The adaptation builds on the distinctive spatial structure of such eigenstates. We benchmark our method against the well-studied random field Heisenberg model in one dimension. At moderate to large disorder, the method successfully obtains excited eigenstates with high accuracy, thereby enabling a study of MBL systems at much larger system sizes than those accessible to exact-diagonalization methods.

  7. [Systemic approach to ecologic safety at objects with radiation jeopardy, involved into localization of low and medium radioactive waste].

    PubMed

    Veselov, E I

    2011-01-01

    The article deals with specifying a systemic approach to the ecological safety of radiation-hazardous objects. The authors present the stages of work and an algorithm of decisions for preserving the reliability of storage of radiation-hazardous waste. The findings are that ecological safety can be provided through three approaches: complete removal of the radiation-hazardous waste, removal of the more dangerous waste from the existing buildings, or increasing the reliability of prolonged localization of the radiation-hazardous waste at its initial place. The systemic approach presented could be implemented at various radiation-hazardous objects.

  8. Local unitary transformation method toward practical electron correlation calculations with scalar relativistic effect in large-scale molecules.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2013-07-21

    In order to perform practical electron correlation calculations, the local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012); and ibid. 137, 144101 (2012)], which is based on the locality of relativistic effects, has been combined with the linear-scaling divide-and-conquer (DC)-based Hartree-Fock (HF) and electron correlation methods, such as the second-order Møller-Plesset (MP2) and the coupled cluster theories with single and double excitations (CCSD). Numerical applications in hydrogen halide molecules, (HX)n (X = F, Cl, Br, and I), coinage metal chain systems, Mn (M = Cu and Ag), and platinum-terminated polyynediyl chain, trans,trans-{(p-CH3C6H4)3P}2(C6H5)Pt(C≡C)4Pt(C6H5){(p-CH3C6H4)3P}2, clarified that the present methods, namely DC-HF, MP2, and CCSD with the LUT-IODKH Hamiltonian, reproduce the results obtained using conventional methods with small computational costs. The combination of both LUT and DC techniques could be the first approach that achieves overall quasi-linear-scaling with a small prefactor for relativistic electron correlation calculations.

  9. Volume averaging: Local and nonlocal closures using a Green’s function approach

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.; Valdés-Parada, Francisco J.

    2013-01-01

    Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as are possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green’s functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green’s functions that describe the influence of the average sources over the spatial deviations. The approach using Green’s functions has utility not only in formalizing the method of volume averaging, but by clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green’s functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and posed, whenever possible, in terms of measurable properties of (1) the

  10. Calculation of smooth potential energy surfaces using local electron correlation methods

    NASA Astrophysics Data System (ADS)

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-01

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  11. Calculation of smooth potential energy surfaces using local electron correlation methods

    SciTech Connect

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-14

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl{sup -} with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  12. Feasibility of A-mode ultrasound attenuation as a monitoring method of local hyperthermia treatment.

    PubMed

    Manaf, Noraida Abd; Aziz, Maizatul Nadwa Che; Ridzuan, Dzulfadhli Saffuan; Mohamad Salim, Maheza Irna; Wahab, Asnida Abd; Lai, Khin Wee; Hum, Yan Chai

    2016-06-01

    Recently, there is an increasing interest in the use of local hyperthermia treatment for a variety of clinical applications. The desired therapeutic outcome in local hyperthermia treatment is achieved by raising the local temperature to surpass the tissue coagulation threshold, resulting in tissue necrosis. In oncology, local hyperthermia is used as an effective way to destroy cancerous tissues and is said to have the potential to replace conventional treatment regime like surgery, chemotherapy or radiotherapy. However, the inability to closely monitor temperature elevations from hyperthermia treatment in real time with high accuracy continues to limit its clinical applicability. Local hyperthermia treatment requires real-time monitoring system to observe the progression of the destroyed tissue during and after the treatment. Ultrasound is one of the modalities that have great potential for local hyperthermia monitoring, as it is non-ionizing, convenient and has relatively simple signal processing requirement compared to magnetic resonance imaging and computed tomography. In a two-dimensional ultrasound imaging system, changes in tissue microstructure during local hyperthermia treatment are observed in terms of pixel value analysis extracted from the ultrasound image itself. Although 2D ultrasound has shown to be the most widely used system for monitoring hyperthermia in ultrasound imaging family, 1D ultrasound on the other hand could offer a real-time monitoring and the method enables quantitative measurement to be conducted faster and with simpler measurement instrument. Therefore, this paper proposes a new local hyperthermia monitoring method that is based on one-dimensional ultrasound. Specifically, the study investigates the effect of ultrasound attenuation in normal and pathological breast tissue when the temperature in tissue is varied between 37 and 65 °C during local hyperthermia treatment. Besides that, the total protein content measurement was also

  13. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of coronary artery from X-ray angiograms rotationally acquired on C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was firstly reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to state-of-the-art method. PMID:28018741

  14. Local motion-compensated method for high-quality 3D coronary artery reconstruction.

    PubMed

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-12-01

    The 3D reconstruction of coronary artery from X-ray angiograms rotationally acquired on C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was firstly reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to state-of-the-art method.

  15. Digital sequences and a time reversal-based impact region imaging and localization method.

    PubMed

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Qian, Weifeng

    2013-10-01

    To reduce time and cost of damage inspection, on-line impact monitoring of aircraft composite structures is needed. A digital monitor based on an array of piezoelectric transducers (PZTs) is developed to record the impact region of impacts on-line. It is small in size, lightweight and has low power consumption, but there are two problems with the impact alarm region localization method of the digital monitor at the current stage. The first one is that the accuracy rate of the impact alarm region localization is low, especially on complex composite structures. The second problem is that the area of impact alarm region is large when a large scale structure is monitored and the number of PZTs is limited which increases the time and cost of damage inspections. To solve the two problems, an impact alarm region imaging and localization method based on digital sequences and time reversal is proposed. In this method, the frequency band of impact response signals is estimated based on the digital sequences first. Then, characteristic signals of impact response signals are constructed by sinusoidal modulation signals. Finally, the phase synthesis time reversal impact imaging method is adopted to obtain the impact region image. Depending on the image, an error ellipse is generated to give out the final impact alarm region. A validation experiment is implemented on a complex composite wing box of a real aircraft. The validation results show that the accuracy rate of impact alarm region localization is approximately 100%. The area of impact alarm region can be reduced and the number of PZTs needed to cover the same impact monitoring region is reduced by more than a half.
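
    As a rough illustration of the imaging step (a generic delay-and-sum, time-reversal focusing sketch, not the authors' exact phase synthesis algorithm), each sensor's narrow-band characteristic signal is advanced by the assumed travel time to a candidate pixel and the shifted signals are stacked; the pixel where the stack focuses best indicates the impact region. The sensor layout, sampling rate, and group velocity c_g below are assumed inputs.

      import numpy as np

      def time_reversal_image(signals, fs, sensors, grid_x, grid_y, c_g):
          """Generic delay-and-sum (time-reversal focusing) image for impact localization.

          signals : (n_sensors, n_samples) narrow-band characteristic signals
          fs      : sampling frequency [Hz]
          sensors : (n_sensors, 2) sensor coordinates [m]
          c_g     : assumed group velocity of the impact-induced wave [m/s]
          """
          n_sensors, n_samples = signals.shape
          image = np.zeros((len(grid_y), len(grid_x)))
          for iy, y in enumerate(grid_y):
              for ix, x in enumerate(grid_x):
                  dists = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
                  delays = np.round(dists / c_g * fs).astype(int)
                  stacked = np.zeros(n_samples)
                  for k in range(n_sensors):
                      d = delays[k]
                      if d < n_samples:
                          # Advance the signal by its travel time (time reversal) and stack.
                          stacked[: n_samples - d] += signals[k, d:]
                  image[iy, ix] = np.max(np.abs(stacked))  # focusing quality at (x, y)
          return image  # the impact region is indicated by the image maximum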

  16. A method of analysis of distributions of local electric fields in composites

    NASA Astrophysics Data System (ADS)

    Kolesnikov, V. I.; Yakovlev, V. B.; Bardushkin, V. V.; Lavrov, I. V.; Sychev, A. P.; Yakovleva, E. N.

    2016-03-01

    A method for predicting the distributions of local electric fields in composite media, based on analysis of the tensor concentration operators for the field intensity and induction, is proposed. Both general expressions and relations for calculating these operators are obtained in various approximations. Analytical expressions are presented for the electric-field concentration operators in various types of inhomogeneous structures, obtained in the generalized singular approximation.
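
    In concentration-operator form, the idea can be summarized as follows (notation assumed here for illustration): the local field and induction at a point r of the composite are obtained by applying position-dependent concentration operators to the macroscopic averages,

      \mathbf{E}(\mathbf{r}) = \hat{K}_{E}(\mathbf{r})\,\langle\mathbf{E}\rangle, \qquad \mathbf{D}(\mathbf{r}) = \hat{K}_{D}(\mathbf{r})\,\langle\mathbf{D}\rangle,

    so that predicting the distribution of local fields reduces to evaluating the operators K_E and K_D for the given microstructure, for example in the generalized singular approximation mentioned in the abstract.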

  17. Advanced Methods for Passive Acoustic Detection, Classification, and Localization of Marine Mammals

    DTIC Science & Technology

    2014-09-30

  18. Digital Sequences and a Time Reversal-Based Impact Region Imaging and Localization Method

    PubMed Central

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Qian, Weifeng

    2013-01-01

    To reduce time and cost of damage inspection, on-line impact monitoring of aircraft composite structures is needed. A digital monitor based on an array of piezoelectric transducers (PZTs) is developed to record the impact region of impacts on-line. It is small in size, lightweight and has low power consumption, but there are two problems with the impact alarm region localization method of the digital monitor at the current stage. The first one is that the accuracy rate of the impact alarm region localization is low, especially on complex composite structures. The second problem is that the area of impact alarm region is large when a large scale structure is monitored and the number of PZTs is limited which increases the time and cost of damage inspections. To solve the two problems, an impact alarm region imaging and localization method based on digital sequences and time reversal is proposed. In this method, the frequency band of impact response signals is estimated based on the digital sequences first. Then, characteristic signals of impact response signals are constructed by sinusoidal modulation signals. Finally, the phase synthesis time reversal impact imaging method is adopted to obtain the impact region image. Depending on the image, an error ellipse is generated to give out the final impact alarm region. A validation experiment is implemented on a complex composite wing box of a real aircraft. The validation results show that the accuracy rate of impact alarm region localization is approximately 100%. The area of impact alarm region can be reduced and the number of PZTs needed to cover the same impact monitoring region is reduced by more than a half. PMID:24084123

  19. Graph-Theoretic Statistical Methods for Detecting and Localizing Distributional Change in Multivariate Data

    DTIC Science & Technology

    2015-06-01

  20. Method to repair localized amplitude defects in a EUV lithography mask blank

    DOEpatents

    Stearns, Daniel G.; Sweeney, Donald W.; Mirkarimi, Paul B.; Chapman, Henry N.

    2005-11-22

    A method and apparatus are provided for the repair of an amplitude defect in a multilayer coating. A significant number of layers underneath the amplitude defect are undamaged. The repair technique restores the local reflectivity of the coating by physically removing the defect and leaving a wide, shallow crater that exposes the underlying intact layers. The particle, pit or scratch is first removed the remaining damaged region is etched away without disturbing the intact underlying layers.

  1. A local constitutive model for the discrete element method. Application to geomaterials and concrete

    NASA Astrophysics Data System (ADS)

    Oñate, Eugenio; Zárate, Francisco; Miquel, Juan; Santasusana, Miquel; Celigueta, Miguel Angel; Arrufat, Ferran; Gandikota, Raju; Valiullin, Khaydar; Ring, Lev

    2015-06-01

    This paper presents a local constitutive model for modelling the linear and nonlinear behavior of soft and hard cohesive materials with the discrete element method (DEM). We present the results obtained in the DEM analysis of cylindrical samples of cement, concrete and shale rock materials under a uniaxial compressive strength test, different triaxial tests, a uniaxial strain compaction test and a Brazilian tensile strength test. The DEM results compare well with the experimental values in all cases.

  2. A reliable acoustic path: Physical properties and a source localization method

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Yang, Kun-De; Ma, Yuan-Liang; Lei, Bo

    2012-12-01

    The physical properties of a reliable acoustic path (RAP) are analysed and subsequently a weighted-subspace-fitting matched field (WSF-MF) method for passive localization is presented by exploiting the properties of the RAP environment. The RAP is an important acoustic duct in the deep ocean, which occurs when the receiver is placed near the bottom where the sound velocity exceeds the maximum sound velocity in the vicinity of the surface. It is found that in the RAP environment the transmission loss is rather low and no blind zone of surveillance exists at medium ranges. Ray theory is used to explain these phenomena. Furthermore, the analysis of the arrival structures shows that source localization based on arrival angle is feasible in this environment. However, conventional methods suffer from complicated and inaccurate estimation of the arrival angle. In this paper, a straightforward WSF-MF method is derived to exploit the information about the arrival angles indirectly. The method minimizes the distance between the signal subspace and the space spanned by the array manifold over a finite range-depth space rather than the arrival-angle space. Simulations are performed to demonstrate the features of the method, and the results are explained by the arrival structures in the RAP environment.
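
    A minimal sketch of the search described above is given below, assuming a standard weighted-subspace-fitting cost evaluated over a range-depth grid; the replica function (a propagation-model call returning the array response for a trial source position) and the eigenvalue-based weighting are assumptions, since the paper's exact formulation is not reproduced here.

      import numpy as np

      def wsf_mf_localize(R, replica, ranges, depths, n_src=1):
          """Sketch of a weighted-subspace-fitting matched-field search.

          R       : (n_hyd, n_hyd) sample covariance matrix of the array data
          replica : callable (r, z) -> (n_hyd,) complex replica vector from a
                    propagation model for a trial source at range r, depth z
          """
          w, U = np.linalg.eigh(R)                # eigenvalues ascending
          Us = U[:, -n_src:]                       # signal subspace
          lam = w[-n_src:]
          sig2 = w[:-n_src].mean()                 # noise power estimate
          W = np.diag((lam - sig2) ** 2 / lam)     # standard WSF weighting
          cost = np.zeros((len(ranges), len(depths)))
          for i, r in enumerate(ranges):
              for j, z in enumerate(depths):
                  a = replica(r, z)[:, None]
                  Pa = a @ a.conj().T / (a.conj().T @ a)     # projector onto span{a}
                  resid = (np.eye(a.shape[0]) - Pa) @ Us     # part of Us outside span{a}
                  cost[i, j] = np.real(np.trace(resid @ W @ resid.conj().T))
          idx = np.unravel_index(np.argmin(cost), cost.shape)
          return ranges[idx[0]], depths[idx[1]], cost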

  3. State-Based Curriculum-Making: Approaches to Local Curriculum Work in Norway and Finland

    ERIC Educational Resources Information Center

    Mølstad, Christina Elde

    2015-01-01

    This article investigates how state authorities in Norway and Finland design national curriculum to provide different policy conditions for local curriculum work in municipalities and schools. The topic is explored by comparing how national authorities in Norway and Finland create a scope for local curriculum. The data consist of interviews with…

  4. A Multi-Media Approach to Teaching Local Government on the Secondary Level.

    ERIC Educational Resources Information Center

    McTeer, J. Hugh; Jackson, Barry N.

    The document offers numerous examples of using the media in teaching local government and explains how to create appropriate materials. Suggestions are to collect news from newspapers and to record news and forum discussions on local radio stations. Forms and pamphlets can be collected from offices of the court clerk, probate judge, tax…

  5. The local properties of ocean surface waves by the phase-time method

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.; Long, Steven R.; Tung, Chi-Chao; Donelan, Mark A.; Yuan, Yeli; Lai, Ronald J.

    1992-01-01

    A new approach using phase information to view and study the properties of frequency modulation, wave group structures, and wave breaking is presented. The method is applied to ocean wave time series data and a new type of wave group (containing the large 'rogue' waves) is identified. The method also has the capability of broad applications in the analysis of time series data in general.
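
    Although the paper's phase-time method is not reproduced here, the following sketch shows one common way to obtain the phase information such an analysis starts from: the analytic signal of a surface-elevation record gives the instantaneous phase, instantaneous frequency, and group envelope. The function name and sampling convention are assumed for illustration.

      import numpy as np
      from scipy.signal import hilbert

      def instantaneous_phase_frequency(eta, fs):
          """Instantaneous phase, frequency, and envelope of a wave record.

          eta : 1-D surface elevation time series; fs : sampling rate [Hz].
          """
          z = hilbert(eta)                               # analytic signal eta + i*H[eta]
          phase = np.unwrap(np.angle(z))                 # continuous instantaneous phase
          freq = np.gradient(phase) * fs / (2 * np.pi)   # instantaneous frequency [Hz]
          envelope = np.abs(z)                           # wave group envelope
          return phase, freq, envelope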

  6. Uni-centric localization of Castleman's disease treated with laparoscopic and traditional approach. Report of two cases.

    PubMed

    Tartaglia, F; Blasi, S; Berni, A; Sgueglia, M; Polichetti, P; Maturo, A; Palazzini, G; Tromba, L; Campana, F P

    2008-10-01

    Castleman's disease (CD) is a rare lymphoproliferative disorder. Clinically CD has been subdivided into two forms: uni-centric and multicentric. The uni-centric type is limited to a single anatomic lymph-node-bearing region. The present report describes two cases of uni-centric CD: the first was an abdominal localization treated with a laparoscopic approach; the second was a submaxillary localization treated with a classical approach. In case 1, the laparoscopic approach permitted a diagnosis that remained unclear after diagnostic imaging procedures and enabled complete resolution of the pathology; after eight months of follow-up, the patient has had no evidence of recurrence of the disease. Case 2 highlights that CD should be considered in the differential diagnosis of a solitary neck mass and that surgical treatment is diagnostic and curative at the same time.

  7. Acoustic emission source localization in thin metallic plates: A single-sensor approach based on multimodal edge reflections.

    PubMed

    Ebrahimkhanlou, A; Salamone, S

    2017-03-14

    This paper presents a new acoustic emission (AE) source localization approach for isotropic plates with reflecting boundaries. This approach, which has no blind spot, leverages multimodal edge reflections to identify AE sources with only a single sensor. The implementation of the proposed approach involves three main steps. First, the continuous wavelet transform (CWT) and the dispersion curves of the fundamental Lamb wave modes are utilized to estimate the distance between an AE source and a sensor. This step uses a modal acoustic emission approach. Then, an analytical model is proposed that uses the estimated distances to simulate the edge-reflected waves. Finally, the correlation between the experimental and the simulated waveforms is used to estimate the location of AE sources. Hsu-Nielsen pencil lead break (PLB) tests were performed on an aluminum plate to validate this algorithm, and promising results were achieved. Based on these results, the paper reports the statistics of the localization errors.
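
    The first step described above (distance from a single sensor using two Lamb modes) rests on a standard modal acoustic emission relation; with assumed notation, if the S0 and A0 wave packets at a chosen frequency arrive at times t_S0 and t_A0 and travel with group velocities v_S0 and v_A0 read from the dispersion curves, then d/v_A0 - d/v_S0 = t_A0 - t_S0, so the source-to-sensor distance is

      d \;=\; (t_{A0} - t_{S0})\,\frac{v_{A0}\,v_{S0}}{v_{S0} - v_{A0}}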

  8. A multilevel local discrete convolution method for the numerical solution for Maxwell's Equations

    NASA Astrophysics Data System (ADS)

    Lo, Boris; Colella, Phillip

    2016-10-01

    We present a new multilevel local discrete convolution method for solving Maxwell's equations in three dimensions. We obtain an explicit real-space representation for the propagator of an auxiliary system of differential equations with initial value constraints that is equivalent to Maxwell's equations. The propagator preserves finite speed of propagation and source locality. Because the propagator involves convolution against a singular distribution, we regularize via convolution with smoothing kernels (B-splines) prior to sampling. We show that the resulting discrete convolutional propagator can be constructed to attain an arbitrarily high order of accuracy by using higher-order regularizing kernels and finite difference stencils. The discretized propagator is compactly supported and can be applied using Hockney's method (1970) and parallelized using domain decomposition, leading to a method that is computationally efficient. The algorithm is extended to work on a locally refined, fixed hierarchy of rectangular grids. This research is supported by the Office of Advanced Scientific Computing Research of the US Department of Energy under Contract Number DE-AC02-05CH11231.

  9. Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner

    PubMed Central

    MacDonell, Margaret M.; Haroun, Lynne A.; Teuschler, Linda K.; Rice, Glenn E.; Hertzberg, Richard C.; Butler, James P.; Chang, Young-Soo; Clark, Shanna L.; Johns, Alan P.; Perry, Camarie S.; Garcia, Shannon S.; Jacobi, John H.; Scofield, Marcienne A.

    2013-01-01

    The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities. PMID:23762048

  10. Source localization of turboshaft engine broadband noise using a three-sensor coherence method

    NASA Astrophysics Data System (ADS)

    Blacodon, Daniel; Lewy, Serge

    2015-03-01

    Turboshaft engines can become the main source of helicopter noise at takeoff. Inlet radiation mainly comes from the compressor tones, but aft radiation is more intricate: turbine tones usually are above the audible frequency range and do not contribute to the weighted sound levels; jet is secondary and radiates low noise levels. A broadband component is the most annoying but its sources are not well known (it is called internal or core noise). Present study was made in the framework of the European project TEENI (Turboshaft Engine Exhaust Noise Identification). Its main objective was to localize the broadband sources in order to better reduce them. Several diagnostic techniques were implemented by the various TEENI partners. As regards ONERA, a first attempt at separating sources was made in the past with Turbomeca using a three-signal coherence method (TSM) to reject background non-acoustic noise. The main difficulty when using TSM is the assessment of the frequency range where the results are valid. This drawback has been circumvented in the TSM implemented in TEENI. Measurements were made on a highly instrumented Ardiden turboshaft engine in the Turbomeca open-air test bench. Two engine powers (approach and takeoff) were selected to apply TSM. Two internal pressure probes were located in various cross-sections, either behind the combustion chamber (CC), the high-pressure turbine (HPT), the free-turbine first stage (TL), or in four nozzle sections. The third transducer was a far-field microphone located around the maximum of radiation, at 120° from the intake centerline. The key result is that coherence increases from CC to HPT and TL, then decreases in the nozzle up to the exit. Pressure fluctuations from HPT and TL are very coherent with the far-field acoustic spectra up to 700 Hz. They are thus the main acoustic source and can be attributed to indirect combustion noise (accuracy decreases above 700 Hz because coherence is lower, but far-field sound spectra
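
    The three-signal idea can be illustrated with a short sketch (a generic formulation, not ONERA's exact TSM with its validity-range assessment): given two internal pressure probes and a far-field microphone, the portion of the far-field spectrum coherent with the common internal source can be estimated from the three cross-spectra. The probe and microphone names and the FFT segment length below are assumptions.

      import numpy as np
      from scipy.signal import csd

      def three_signal_coherent_spectrum(p1, p2, mic, fs, nperseg=4096):
          """Generic three-signal estimate of the far-field spectrum coherent with
          the common source seen by two internal pressure probes and a microphone."""
          f, G1m = csd(p1, mic, fs=fs, nperseg=nperseg)
          _, G2m = csd(p2, mic, fs=fs, nperseg=nperseg)
          _, G12 = csd(p1, p2, fs=fs, nperseg=nperseg)
          # Coherent output power at the microphone (uncorrelated noise rejected).
          S_coh = np.abs(G1m) * np.abs(G2m) / np.abs(G12)
          return f, S_coh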

  11. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
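
    As a small illustration of the quantitative step (clustering binary code profiles), the sketch below applies hierarchical clustering with a Jaccard distance and K-means to a hypothetical participant-by-code 0/1 matrix; the data, the number of clusters, and the distance choice are assumptions, and the latent class analysis used in the article is omitted here.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist
      from sklearn.cluster import KMeans

      # Hypothetical binary code matrix: rows = participants, columns = qualitative codes.
      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(50, 12)).astype(float)

      # Hierarchical clustering on a binary (Jaccard) distance matrix.
      Z = linkage(pdist(X.astype(bool), metric="jaccard"), method="average")
      labels_hc = fcluster(Z, t=3, criterion="maxclust")

      # K-means applied directly to the 0/1 profiles, as in the simulation study.
      labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)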

  12. [Application of acupoint anatomy localization method with colorful tube in education of acupoint anatomy].

    PubMed

    Song, Shi-Lin

    2013-04-01

    To seek a precise and simple method for acupoint localization in anatomical experiment teaching. Medical bone needles were inserted into acupoints. Then, self-made copper probe needles were thrust along the center of the bone needles to open the inner structures of the acupoints. The probe needles were then replaced by colored plastic tubes. Finally, the bone needles were withdrawn so as to fix the plastic tubes in the acupoints and facilitate the later cutting. This method for anatomical acupoint positioning is low in cost, accurate, and simple to perform, and offers strong experimental and innovative value.

  13. A tailing genome walking method suitable for genomes with high local GC content.

    PubMed

    Liu, Taian; Fang, Yongxiang; Yao, Wenjuan; Guan, Qisai; Bai, Gang; Jing, Zhizhong

    2013-10-15

    The tailing genome walking strategies are simple and efficient. However, they sometimes can be restricted due to the low stringency of homo-oligomeric primers. Here we modified their conventional tailing step by adding polythymidine and polyguanine to the target single-stranded DNA (ssDNA). The tailed ssDNA was then amplified exponentially with a specific primer in the known region and a primer comprising 5' polycytosine and 3' polyadenosine. The successful application of this novel method for identifying integration sites mediated by φC31 integrase in goat genome indicates that the method is more suitable for genomes with high complexity and local GC content.

  14. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work investigates systematically traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the later the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions

  15. Determination of local optical properties of the rat barrel cortex during neural activation: Monte-Carlo approach to light propagation

    NASA Astrophysics Data System (ADS)

    Migacheva, E. V.; Chamot, S. R.; Seydoux, O.; Weber, B.; Depeursinge, C.; Marquet, P.; Magistretti, P. J.

    2010-04-01

    Spatially-spectrally-resolved reflectance measurements allow in vivo measurement of the absorption and scattering coefficients within cortical tissue. This method, if applied to neural tissue during enhanced activity, could allow straightforward monitoring of the blood oxygen saturation changes occurring in the brain cortex during hemodynamic responses. Furthermore, it may provide valuable information on possible absorption and scattering changes occurring during stimulation. The feasibility of such measurements was investigated by carrying out a preliminary numerical study using a Monte-Carlo light propagation routine. Experimental parameters such as the geometry of the optical probe, baseline cortex optical coefficients retrieved from the literature and anatomical characteristics of the rat barrel cortex were used as input for the simulations. The sensitivity of the probe to local variations of the optical coefficients was investigated with this numerical approach. Additionally, the influence of the barrel cortex dimensions and the probe positioning relative to the activated region was studied for instrument optimization purposes. It was found that typical variations of the optical coefficients can be detected if the activated region of the barrel cortex has a volume of typically 1 mm3 or larger. The decay of the probe sensitivity to changes was studied as a function of the depth of the activated region. The results showed that the best sensitivity is achieved by aligning the light injection fiber of the optical probe with the center of the cylindrical barrel.
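
    For orientation, a minimal Monte-Carlo photon random walk of the kind such studies are built on is sketched below, assuming a homogeneous semi-infinite medium, isotropic scattering, and simple weight-based absorption; realistic simulations of the barrel cortex would add an anisotropic phase function, refractive-index mismatch, the actual probe geometry, and layered or voxelized tissue.

      import numpy as np

      def mc_diffuse_reflectance(mu_a, mu_s, n_photons=20000, r_max=0.5, n_bins=50, seed=1):
          """Minimal Monte-Carlo sketch: isotropic-scattering random walk in a
          semi-infinite medium, recording where photons exit the surface (z < 0).
          mu_a, mu_s in 1/cm; distances in cm."""
          rng = np.random.default_rng(seed)
          mu_t = mu_a + mu_s
          albedo = mu_s / mu_t
          refl = np.zeros(n_bins)
          edges = np.linspace(0.0, r_max, n_bins + 1)
          for _ in range(n_photons):
              pos = np.zeros(3)
              direc = np.array([0.0, 0.0, 1.0])            # launched straight down
              weight = 1.0
              while weight > 1e-4:
                  step = -np.log(rng.random()) / mu_t       # free path length
                  pos = pos + step * direc
                  if pos[2] < 0.0:                          # photon escapes the surface
                      r = np.hypot(pos[0], pos[1])
                      if 0.0 < r < r_max:
                          refl[np.searchsorted(edges, r) - 1] += weight
                      break
                  weight *= albedo                          # absorption
                  cos_t = 2.0 * rng.random() - 1.0          # new isotropic direction
                  phi = 2.0 * np.pi * rng.random()
                  sin_t = np.sqrt(1.0 - cos_t ** 2)
                  direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
          area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
          return 0.5 * (edges[1:] + edges[:-1]), refl / (n_photons * area)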

  16. A Study on an In-Process Laser Localized Pre-Deposition Heating Approach to Reducing FDM Part Anisotropy

    NASA Astrophysics Data System (ADS)

    Kurapatti Ravi, Abinesh

    Material extrusion based rapid prototyping systems have been used to produce prototypes for several years. They have been quite important in the additive manufacturing field, and have gained popularity in research, development and manufacturing across a wide range of applications. There has been a lot of interest in using these technologies to produce end use parts, and Fused Deposition Modeling (FDM) has gained traction in leading the transition of rapid prototyping technologies to rapid manufacturing. But parts built with the FDM process exhibit property anisotropy. Many studies have been conducted into process optimization, material properties and even post processing of parts, but were unable to solve the strength anisotropy issue. To address this, an optical heating system has been proposed to achieve localized heating of the pre-deposition surface prior to material deposition over the heated region. This occurs in situ within the build process, and aims to increase the interface temperature above the glass transition temperature (Tg), to trigger an increase in polymer chain diffusion and, by extension, increase the strength of the part. A 95% increase in flexural strength at the layer interface was observed when the optical heating method was implemented, thereby improving property isotropy of the FDM part. This approach can be designed to perform real time control of inter-filament and interlayer temperatures across the build volume of a part, and can be tuned to achieve required mechanical properties.

  17. A Context-Recognition-Aided PDR Localization Method Based on the Hidden Markov Model

    PubMed Central

    Lu, Yi; Wei, Dongyan; Lai, Qifeng; Li, Wen; Yuan, Hong

    2016-01-01

    Indoor positioning has recently become an important field of interest because global navigation satellite systems (GNSS) are usually unavailable in indoor environments. Pedestrian dead reckoning (PDR) is a promising localization technique for indoor environments since it can be implemented on widely used smartphones equipped with low cost inertial sensors. However, the PDR localization severely suffers from the accumulation of positioning errors, and other external calibration sources should be used. In this paper, a context-recognition-aided PDR localization model is proposed to calibrate PDR. The context is detected by employing particular human actions or characteristic objects and it is matched to the context pre-stored offline in the database to get the pedestrian’s location. The Hidden Markov Model (HMM) and Recursive Viterbi Algorithm are used to do the matching, which reduces the time complexity and saves the storage. In addition, the authors design the turn detection algorithm and take the context of corner as an example to illustrate and verify the proposed model. The experimental results show that the proposed localization method can fix the pedestrian’s starting point quickly and improves the positioning accuracy of PDR by 40.56% at most with perfect stability and robustness at the same time. PMID:27916922
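
    The matching step rests on standard HMM machinery; the sketch below is the textbook Viterbi recursion (in the log domain) for recovering the most likely sequence of map contexts from detected context observations. The state and observation encoding and strictly positive probabilities are assumptions, and the paper's Recursive Viterbi variant and context database are not reproduced.

      import numpy as np

      def viterbi(obs, A, B, pi):
          """Standard Viterbi decoding of the most likely hidden state sequence.

          obs : sequence of observed context indices (e.g., detected corner events)
          A   : (n_states, n_states) transition probabilities between map contexts
          B   : (n_states, n_obs)    probability of each observation in each state
          pi  : (n_states,)          initial state distribution
          All probabilities are assumed strictly positive so the logs are finite.
          """
          n_states = A.shape[0]
          T = len(obs)
          logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
          delta = np.zeros((T, n_states))
          psi = np.zeros((T, n_states), dtype=int)
          delta[0] = logpi + logB[:, obs[0]]
          for t in range(1, T):
              scores = delta[t - 1][:, None] + logA            # indexed (from, to)
              psi[t] = np.argmax(scores, axis=0)               # best predecessor per state
              delta[t] = scores[psi[t], np.arange(n_states)] + logB[:, obs[t]]
          path = np.zeros(T, dtype=int)
          path[-1] = np.argmax(delta[-1])
          for t in range(T - 2, -1, -1):                       # backtrack
              path[t] = psi[t + 1, path[t + 1]]
          return path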

  18. "PERFEXT": a direct method for quantitative assessment of cytokine production in vivo at the local level.

    PubMed

    Villavedra, M; Carol, H; Hjulström, M; Holmgren, J; Czerkinsky, C

    1997-05-01

    A method termed "PERFEXT", based on sequential perfusion and detergent extraction of lymphoid and non-lymphoid organs, has been developed for the quantitative measurement of cytokines produced at a local level in a given tissue. In vivo treatment of mice with Staphylococcus enterotoxin B (SEB) or lipopolysaccharide (LPS) served as the model systems. Interleukin-2 (IL2) and interferon-gamma (IFN gamma) levels were monitored by ELISA analysis of extracted samples. After local footpad (FP) injection with SEB, spleen and serum IL2 levels peaked at 2-4 h, while IL2 levels peaked at around 4-8 h in both FP and popliteal lymph nodes. SEB injection resulted in increased IFN gamma levels both in the FP and the draining lymph node. The detection of cytokines in the intestine allows for the application of the method at mucosal sites as well, provided enzyme inhibitors are present during the extraction procedure. After FP injection with LPS, IFN gamma production was significantly increased in the draining lymph node and was detectable in the FP, whereas IL2 was undetectable in any organ examined. IL2 and IFN gamma could also be detected at the site of elicitation of a delayed-type hypersensitivity reaction following local FP challenge. Local cytokine production correlated with the swelling response, whereas cytokine production in the spleen did not. IL2 peaked early, followed by a late increase in IFN gamma production, corresponding to the maximum swelling. This simple method should prove useful for analysing the production of other cytokines in vivo in distinct anatomical compartments.

  19. A locally stabilized immersed boundary method for the compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Brehm, C.; Hader, C.; Fasel, H. F.

    2015-08-01

    A higher-order immersed boundary method for solving the compressible Navier-Stokes equations is presented. The distinguishing feature of this new immersed boundary method is that the coefficients of the irregular finite-difference stencils in the vicinity of the immersed boundary are optimized to obtain improved numerical stability. This basic idea was introduced in a previous publication by the authors for the advection step in the projection method used to solve the incompressible Navier-Stokes equations. This paper extends the original approach to the compressible Navier-Stokes equations considering flux vector splitting schemes and viscous wall boundary conditions at the immersed geometry. In addition to the stencil optimization procedure for the convective terms, this paper discusses other key aspects of the method, such as imposing flux boundary conditions at the immersed boundary and the discretization of the viscous flux in the vicinity of the boundary. Extensive linear stability investigations of the immersed scheme confirm that a linearly stable method is obtained. The method of manufactured solutions is used to validate the expected higher-order accuracy and to study the error convergence properties of this new method. Steady and unsteady, 2D and 3D canonical test cases are used for validation of the immersed boundary approach. Finally, the method is employed to simulate the laminar to turbulent transition process of a hypersonic Mach 6 boundary layer flow over a porous wall and subsonic boundary layer flow over a three-dimensional spherical roughness element.

  20. On the local optimal solutions of metabolic regulatory networks using information guided genetic algorithm approach and clustering analysis.

    PubMed

    Zheng, Ying; Yeh, Chen-Wei; Yang, Chi-Da; Jang, Shi-Shang; Chu, I-Ming

    2007-08-31

    Biological information generated by high-throughput technology has made a systems approach feasible for many biological problems. With this approach, optimization of metabolic pathways has been successfully applied to amino acid production. However, in this technique, gene modifications of the metabolic control architecture as well as enzyme expression levels are coupled and result in a mixed integer nonlinear programming problem. Furthermore, the stoichiometric complexity of the metabolic pathway, along with the strong nonlinear behaviour of the regulatory kinetic models, leads to a highly rugged landscape in the overall optimization problem. There may exist local optima that achieve the same level of production as the global optimum through different flux distributions. The purpose of this work is to develop a novel stochastic optimization approach, the information guided genetic algorithm (IGA), to discover the local optima with different levels of modification of the regulatory loop and different production rates. The novelty of this work lies in combining information theory, local search, and clustering analysis to discover, among the qualified solutions, the local optima that have physical meaning.

  1. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2014-03-01

    This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in CT images and then estimates the prior of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector to localize the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and backgrounds using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually input nine principal types of inner organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.

  2. A fully automated method for quantifying and localizing white matter hyperintensities on MR images.

    PubMed

    Wu, Minjie; Rosano, Caterina; Butters, Meryl; Whyte, Ellen; Nable, Megan; Crooks, Ryan; Meltzer, Carolyn C; Reynolds, Charles F; Aizenstein, Howard J

    2006-12-01

    White matter hyperintensities (WMH), commonly found on T2-weighted FLAIR brain MR images in the elderly, are associated with a number of neuropsychiatric disorders, including vascular dementia, Alzheimer's disease, and late-life depression. Previous MRI studies of WMHs have primarily relied on the subjective and global (i.e., full-brain) ratings of WMH grade. In the current study we implement and validate an automated method for quantifying and localizing WMHs. We adapt a fuzzy-connected algorithm to automate the segmentation of WMHs and use a demons-based image registration to automate the anatomic localization of the WMHs using the Johns Hopkins University White Matter Atlas. The method is validated using the brain MR images acquired from eleven elderly subjects with late-onset late-life depression (LLD) and eight elderly controls. This dataset was chosen because LLD subjects are known to have significant WMH burden. The volumes of WMH identified in our automated method are compared with the accepted gold standard (manual ratings). A significant correlation of the automated method and the manual ratings is found (P<0.0001), thus demonstrating similar WMH quantifications of both methods. As has been shown in other studies (e.g. [Taylor, W.D., MacFall, J.R., Steffens, D.C., Payne, M.E., Provenzale, J.M., Krishnan, K.R., 2003. Localization of age-associated white matter hyperintensities in late-life depression. Progress in Neuro-Psychopharmacology and Biological Psychiatry. 27 (3), 539-544.]), we found there was a significantly greater WMH burden in the LLD subjects versus the controls for both the manual and automated method. The effect size was greater for the automated method, suggesting that it is a more specific measure. Additionally, we describe the anatomic localization of the WMHs in LLD subjects as well as in the control subjects, and detect the regions of interest (ROIs) specific for the WMH burden of LLD patients. Given the emergence of large Neuro

  3. Direct Localization Algorithm of White-light Interferogram Center Based on the Weighted Integral Method

    NASA Astrophysics Data System (ADS)

    Sato, Seichi; Kurihara, Toru; Ando, Shigeru

    This paper proposes an exact direct method to determine all parameters of the white-light interferogram, including the envelope peak. A novel mathematical technique, the weighted integral method (WIM), is applied; it starts from the characteristic differential equation of the target signal (the interferogram in this paper) to obtain the algebraic relation among the finite-interval weighted integrals (observations) of the signal and the waveform parameters (unknowns). We implemented this method using the FFT and examined it through various numerical simulations. The results show the method is able to localize the envelope peak very accurately even if it is not included in the observed interval. The performance comparisons reveal the superiority of the proposed algorithm over conventional algorithms in terms of accuracy, efficiency, and estimation range.

  4. A batch sliding window method for local singularity mapping and its application for geochemical anomaly identification

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang

    2016-05-01

    In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared to the traditional sliding window (SW) technique, which suffers from the empirical predetermination of a fixed maximum window size and the outlier sensitivity of the least-squares (LS) linear regression method, the BSW based singularity mapping approach can automatically determine the optimal size of the largest window for each estimated position and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data in Gejiu, Yunnan, have been processed by the BSW based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The utilization of the RLR method in the BSW approach smooths the distribution of singularity index values, with few or no highly fluctuating values resembling noise points, which usually make a singularity map rough and discontinuous. Furthermore, the student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomalies and known tin polymetallic deposits. Target areas within high tin geochemical anomalies likely have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly areas that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.
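
    A simplified sketch of the local singularity estimate that the BSW approach refines is given below: the exponent at a grid cell is obtained by robustly regressing the logarithm of the mean value in square windows against the logarithm of the window size (here with a Theil-Sen fit as a stand-in for the paper's robust linear regression, and with a fixed rather than automatically chosen maximum window). The 2-D convention mean(eps) proportional to eps**(alpha - 2) and strictly positive grid values are assumed.

      import numpy as np
      from scipy.stats import theilslopes

      def local_singularity_exponent(grid, i, j, max_half_width=8):
          """Estimate the local singularity exponent alpha at cell (i, j) of a 2-D
          geochemical grid by regressing log(mean value in a square window) against
          log(window size), using a robust Theil-Sen fit to limit outlier influence."""
          sizes, means = [], []
          for k in range(1, max_half_width + 1):
              win = grid[max(i - k, 0): i + k + 1, max(j - k, 0): j + k + 1]
              sizes.append(2 * k + 1)
              means.append(win.mean())
          slope, intercept, _, _ = theilslopes(np.log(means), np.log(sizes))
          return slope + 2.0       # alpha; alpha < 2 marks enriched (positive) anomalies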

  5. A quantitative microscopic approach to predict local recurrence based on in vivo intraoperative imaging of sarcoma tumor margins

    PubMed Central

    Mueller, Jenna L.; Fu, Henry L.; Mito, Jeffrey K.; Whitley, Melodi J.; Chitalia, Rhea; Erkanli, Alaattin; Dodd, Leslie; Cardona, Diana M.; Geradts, Joseph; Willett, Rebecca M.; Kirsch, David G.; Ramanujam, Nimmi

    2015-01-01

    The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform to yield quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins was 82% and 75%. The utility of this approach was tested by imaging the in vivo tumor cavities from 34 mice after resection of a sarcoma with local recurrence as a bench mark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence was 78% and 82%. For comparison, if pathology was used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence. PMID:25994353

  6. A quantitative microscopic approach to predict local recurrence based on in vivo intraoperative imaging of sarcoma tumor margins.

    PubMed

    Mueller, Jenna L; Fu, Henry L; Mito, Jeffrey K; Whitley, Melodi J; Chitalia, Rhea; Erkanli, Alaattin; Dodd, Leslie; Cardona, Diana M; Geradts, Joseph; Willett, Rebecca M; Kirsch, David G; Ramanujam, Nimmi

    2015-11-15

    The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform to yield quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins was 82 and 75%. The utility of this approach was tested by imaging the in vivo tumor cavities from 34 mice after resection of a sarcoma with local recurrence as a bench mark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence was 78 and 82%. For comparison, if pathology was used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence.

  7. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

    PubMed

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-07-03

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. The 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.
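
    The 3D coordinates mentioned above come from intersecting a camera viewing ray with the vertical wall plane indicated by the 2D laser scan; a minimal sketch of that geometric step is given below, assuming the camera and laser frames are already aligned (the paper's full pipeline would additionally involve extrinsic calibration, feature matching, and graph optimization).

      import numpy as np

      def feature_point_on_wall(pixel, K, wall_point, wall_normal):
          """Back-project an image feature onto a vertical wall plane.

          pixel       : (u, v) feature location in the image
          K           : 3x3 camera intrinsic matrix (camera and laser frames assumed aligned)
          wall_point  : a 3-D point on the wall, taken from the 2-D laser scan line
          wall_normal : horizontal wall normal (walls assumed vertical and flat)
          Returns the 3-D coordinates of the feature in the camera frame.
          """
          uv1 = np.array([pixel[0], pixel[1], 1.0])
          ray = np.linalg.inv(K) @ uv1                        # viewing ray direction
          t = (wall_normal @ wall_point) / (wall_normal @ ray)
          return t * ray                                      # ray-plane intersection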