A local damage detection approach based on restoring force method
NASA Astrophysics Data System (ADS)
Zhan, Chao; Li, Dongsheng; Li, Hongnan
2014-09-01
Chain-like systems have been studied by many researchers for their simple structure and wide range of applications. Previously, damage in a chain-like system was detected through the reduction of the mass-normalized stiffness coefficients of certain elements, as reported by Nayeri et al. (2008 [16]). However, that approach has some shortcomings; to overcome them, an improved approach is derived and presented in this paper. In the improved approach, the mass-normalized stiffness coefficients under two states (a baseline state and a potentially damaged state) are first estimated by a least-squares method; these mass-stiffness coupled coefficients are then decoupled to derive stiffness and mass relative change ratios for individual elements. These ratios are assembled in a vector, defined as the damage indication vector (DIV). Each component of the DIV is normalized individually to one to obtain multiple solutions. These solutions are averaged to estimate the relative system changes, while abnormal solutions are discarded; a cluster analysis algorithm judges whether a solution is normal or abnormal. The most intriguing merit of the improved approach is that the relative stiffness and mass changes, which are coupled in the previous approach, can be identified separately. With this approach, the extent and location of damage (single or multiple) can be correctly detected under operational conditions; meanwhile, the proposed damage index has a clear physical meaning and is directly related to the stiffness reduction of the corresponding structural elements. To illustrate the effectiveness and robustness of the improved approach, a numerical simulation of a four-floor building was carried out and experimental data from a structure tested at the Los Alamos National Laboratory were employed. The structural changes identified from both the simulated and the experimental data properly indicated the location and extent of the actual structural damage, which validated the proposed approach.
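The least-squares estimation and relative-change steps described above can be sketched for a toy two-story shear chain. This is only a schematic: the paper's decoupling and cluster-analysis stages are omitted, and all numbers (stiffness values, noise level, damage extent) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(k1m, k2m, n=500):
    """Synthetic floor-1 response of a 2-DOF shear chain (illustrative)."""
    x = rng.normal(size=(n, 2))                      # floor displacements
    a = -k1m * x[:, 0] + k2m * (x[:, 1] - x[:, 0])   # floor-1 acceleration
    a += 0.01 * rng.normal(size=n)                   # measurement noise
    return x, a

def estimate_coeffs(x, a):
    """Least-squares estimate of the mass-normalized stiffness coefficients."""
    A = np.column_stack([-x[:, 0], x[:, 1] - x[:, 0]])
    theta, *_ = np.linalg.lstsq(A, a, rcond=None)
    return theta                                     # [k1/m1, k2/m1]

theta_base = estimate_coeffs(*simulate(k1m=100.0, k2m=80.0))
theta_dmg = estimate_coeffs(*simulate(k1m=70.0, k2m=80.0))   # 30% loss in k1

div = theta_dmg / theta_base - 1.0   # relative-change (damage indication) vector
print(div)                           # ~ [-0.30, 0.00]
```

The negative first component localizes and quantifies the simulated stiffness loss, mirroring how the DIV components flag damaged elements.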
A Local Coordinate Approach in the MLPG Method for Beam Problems
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Phillips, Dawn R.
2002-01-01
System matrices for Euler-Bernoulli beam problems in the meshless local Petrov-Galerkin (MLPG) method deteriorate as the number of nodes in the beam model is consistently increased. The reason for this behavior is explained. To overcome this difficulty and improve the accuracy of the solutions, a local coordinate approach for the evaluation of the generalized moving least squares shape functions and their derivatives is proposed. The proposed approach retains the accuracy of the MLPG methods.
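A minimal numpy illustration of why local coordinates help: with a quadratic moving-least-squares basis, assembling the moment matrix in global coordinates at an evaluation point far from the origin makes its columns nearly collinear, while shifting to coordinates centered at the evaluation point restores conditioning. The basis, weight function, and node count here are illustrative assumptions, not the paper's beam discretization.

```python
import numpy as np

def moment_matrix(nodes, x_eval, local=False):
    """MLS moment matrix A = sum_i w_i p(x_i) p(x_i)^T, basis p = [1, x, x^2].
    If local=True, the basis is evaluated in coordinates shifted to x_eval."""
    s = nodes - x_eval if local else nodes
    w = np.exp(-((nodes - x_eval) / 0.1) ** 2)   # weights always use distance
    P = np.column_stack([np.ones_like(s), s, s ** 2])
    return P.T @ (w[:, None] * P)

nodes = np.linspace(0.0, 1.0, 101)   # refining the model worsens global conditioning
A_global = moment_matrix(nodes, x_eval=0.95)
A_local = moment_matrix(nodes, x_eval=0.95, local=True)
print(np.linalg.cond(A_global), np.linalg.cond(A_local))
```

The local moment matrix is far better conditioned, which is the mechanism behind the improved accuracy the abstract reports.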
NASA Astrophysics Data System (ADS)
Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène
2015-11-01
Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading out the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole-metabolite quantification approach, the quantification performance of correlation spectroscopy is studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and the quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable the computation of the lower bound on error variance - generally known as the Cramér-Rao bounds (CRBs), a standard of precision - for the parameters estimated from these 2D MRS signal fittings. LCOSY has a theoretical net signal loss of two per unit of acquisition time compared with JPRESS. A rapid analysis would suggest that the relative CRBs of LCOSY compared with JPRESS (expressed as a percentage of the concentration values) should be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure has been applied to data acquired in vivo on a mouse brain.
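The Fisher-matrix/CRB computation the abstract relies on can be sketched for a toy one-dimensional decaying signal; the real 2D MRS case follows the same recipe with a larger model Jacobian. The model, parameter values, and noise level below are invented for illustration.

```python
import numpy as np

# Toy model s(t) = a * exp(-t / T2) with white Gaussian noise of std sigma.
t = np.linspace(0.0, 0.5, 256)
a, T2, sigma = 1.0, 0.1, 0.05

# Jacobian of the model with respect to the parameters (a, T2)
J = np.column_stack([np.exp(-t / T2),
                     a * t / T2 ** 2 * np.exp(-t / T2)])

fisher = J.T @ J / sigma ** 2                  # Fisher information matrix
crb = np.sqrt(np.diag(np.linalg.inv(fisher)))  # lower bounds on parameter std-dev
print(100 * crb / np.array([a, T2]))           # relative CRBs in percent
```

Relative CRBs expressed as a percentage of the parameter values, as in the abstract, make precision comparable across sequences.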
NASA Astrophysics Data System (ADS)
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) were popular in the 1970s and 1980s and yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second-order Møller-Plesset theory and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work, an efficient production-level implementation of the closed-shell CEPA and CPF methods is reported that can be applied to medium-sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs), which was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether, three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as the parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500 times faster.
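The PNO compression idea can be sketched in isolation: diagonalize an (approximate) pair density matrix and keep only eigenvectors whose occupation numbers exceed a cutoff. The "amplitude" matrix below is a synthetic stand-in with a rapidly decaying spectrum, mimicking how a localized electron pair touches few virtual orbitals; it is not an actual CEPA amplitude, and the cutoff value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
nvirt = 200

# Synthetic symmetric pair "amplitude" matrix with a fast-decaying spectrum
U, _ = np.linalg.qr(rng.normal(size=(nvirt, nvirt)))
s = 10.0 ** -np.linspace(1, 8, nvirt)
T = U @ np.diag(s) @ U.T

D = T @ T.T + T.T @ T                  # approximate pair density matrix
occ, vecs = np.linalg.eigh(D)          # occupation numbers and natural orbitals
pnos = vecs[:, occ > 1e-8]             # keep PNOs above the occupation cutoff
print(f"{pnos.shape[1]} PNOs retained out of {nvirt} virtuals")
```

Raising the cutoff shrinks the retained PNO space further, which is exactly the accuracy/cost trade-off the three thresholds in the abstract control.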
Mizutani, Yasuyoshi; Shiogama, Kazuya; Onouchi, Takanori; Sakurai, Kouhei; Inada, Ken-ichi; Tsutsumi, Yutaka
2016-01-01
In chronic inflammatory lesions of autoimmune and infectious diseases, plasma cells are frequently observed. The antigens recognized by the antibodies these plasma cells produce mostly remain unclear. A new technique for identifying these corresponding antigens may provide a breakthrough in understanding such diseases from a pathophysiological viewpoint, simply because the immunocytes are seen within the lesion. We have developed an enzyme-labeled antigen method for microscopic identification of the antigens recognized by specific antibodies locally produced by plasma cells in inflammatory lesions. Firstly, target biotinylated antigens were constructed with the wheat germ cell-free protein synthesis system or through chemical biotinylation. Next, proteins reactive to antibodies in tissue extracts were screened and antibody titers were evaluated by the AlphaScreen method. Finally, with the enzyme-labeled antigen method using the biotinylated antigens as probes, plasma cells producing specific antibodies were microscopically localized in fixed frozen sections. Our novel approach visualized tissue plasma cells that produced 1) autoantibodies in rheumatoid arthritis, 2) antibodies against major antigens of Porphyromonas gingivalis in periodontitis or radicular cyst, and 3) antibodies against a carbohydrate antigen, Strep A, of Streptococcus pyogenes in recurrent tonsillitis. Evaluation of local specific antibody responses is expected to contribute to clarifying previously unknown processes in inflammatory disorders. PMID:27006517
Speeding up local correlation methods
NASA Astrophysics Data System (ADS)
Kats, Daniel
2014-12-01
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
Time Discretization Approach to Dynamic Localization Conditions
NASA Astrophysics Data System (ADS)
Papp, E.
An alternative wavefunction for the description of the dynamic localization of a charged particle moving on a one-dimensional lattice under the influence of a periodic, time-dependent electric field is written down. For this purpose, the method of characteristics as applied by Dunlap and Kenkre [Phys. Rev. B 34, 3625 (1986)] has been modified by using a different integration variable. Handling this wavefunction, one is faced with the selection of admissible time values. This results in a conditionally exactly solvable problem, now by accounting specifically for the implementation of a time discretization working in conjunction with a related dynamic localization condition. In addition, one resorts to the strong-field limit, which amounts to replacing, to leading order, the large-order zeros of the Bessel function J0(z), used before in connection with the cosinusoidal modulation, by integral multiples of π. Here z stands for the ratio between the field amplitude and the frequency. The modulation function of the electric field vanishes on the nodal points of the time grid, which amounts to an effective field-free behavior. This opens the way to proposing quickly tractable dynamic localization conditions for arbitrary periodic modulations. We have also found that the present time discretization approach produces the minimization of the mean square displacement characterizing the usual exact wavefunction. Other realizations and comparisons are also presented.
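The strong-field replacement mentioned above can be checked numerically: the large zeros j_{0,n} of J0 approach (n - 1/4)π (the McMahon approximation), so consecutive zeros are spaced by approximately π. This numpy-only sketch computes J0 from its integral representation rather than a special-function library; grid sizes and the bracketing range are implementation choices, not anything from the paper.

```python
import numpy as np

THETA = (np.arange(2000) + 0.5) * np.pi / 2000   # midpoint grid on [0, pi]

def J0(z):
    """Bessel J0 via its integral representation (midpoint rule, numpy-only)."""
    return np.cos(np.multiply.outer(z, np.sin(THETA))).mean(axis=-1)

# bracket the first ten zeros of J0 by sign changes, then refine by bisection
z = np.linspace(0.1, 33.0, 3300)
v = J0(z)
zeros = []
for i in np.flatnonzero(np.sign(v[:-1]) != np.sign(v[1:]))[:10]:
    lo, hi = z[i], z[i + 1]
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if J0(lo) * J0(mid) <= 0:
            hi = mid
        else:
            lo = mid
    zeros.append(0.5 * (lo + hi))

n = np.arange(1, 11)
gap = np.abs(np.array(zeros) - (n - 0.25) * np.pi)
print(gap.max())   # largest deviation from (n - 1/4)*pi, dominated by n = 1
```

The deviation shrinks as n grows, consistent with replacing large-order zeros by π-spaced values in the strong-field limit.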
A Local Approach to Hybrid Data Assimilation
NASA Astrophysics Data System (ADS)
Ide, K.; Kleist, D. T.
2014-12-01
A hybrid system with a local formulation is developed in which the prior probability density function (pdf) incorporates both statistical and dynamic information about the uncertainty. The dynamic information is provided by a Monte Carlo approach. The local formulation is solved at every grid point, as in the local ensemble transform Kalman filter. The formulation is flexible in that it allows not only Gaussian but also other stable pdfs.
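A single-grid-point sketch of the hybrid idea: blend a static (climatological) background variance with a Monte Carlo ensemble variance, then apply a scalar Kalman update. All numbers, including the hybrid weight, are invented for illustration and are not from this work.

```python
import numpy as np

rng = np.random.default_rng(2)

ens = rng.normal(loc=1.0, scale=0.8, size=20)   # Monte Carlo ensemble at one grid point
b_static = 1.0                                  # static background-error variance
p_ens = ens.var(ddof=1)                         # flow-dependent ensemble variance
beta = 0.5                                      # hybrid blending weight
p_hybrid = beta * b_static + (1 - beta) * p_ens

y, r = 1.4, 0.25                                # observation and its error variance
xb = ens.mean()                                 # background state
k = p_hybrid / (p_hybrid + r)                   # scalar Kalman gain
xa = xb + k * (y - xb)                          # hybrid analysis
print(xb, xa)
```

Solving such an update independently at every grid point is what makes the formulation "local" in the LETKF sense.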
Computational methods for global/local analysis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
Computational methods for the global/local analysis of structures, including both uncoupled and coupled methods, are described. In addition, a global/local analysis methodology for the automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.
NASA Astrophysics Data System (ADS)
Travelletti, Julien; Samyn, Kevin; Malet, Jean-Philippe; Grandjean, Gilles; Jaboyedoff, Michel
2010-05-01
A challenge to progress in the understanding of landslides is to precisely define their 3D geometry and structure as an input for volume estimation and further hydro-mechanical modelling. The objective of this work is to present a multidisciplinary approach to the geometrical modelling of the La Valette landslide by integrating a seismic tomography survey (P and S wave) and high-resolution LiDAR data with the Sloping Local Base Level (SLBL) method. The La Valette landslide, triggered in March 1982, is one of the most important slope instabilities in the South French Alps. Its dimensions are 1380 m in length and 290 m in width, and the total volume is estimated at 3.5 × 10^6 m^3. Since 2002, important activity has been observed in the upper part of the landslide, consisting mainly of the retrogression of the crown through the opening of a fracture of several meters and rotational slumps. The failed mass is currently loading the upper part of the mudslide and is a potential threat for the 170 residential communities. A seismic tomography survey combined with airborne and terrestrial LiDAR data analysis has been carried out to identify the geological structures and discontinuities and to characterize the stability of the failing mass. Seismic tomography allows direct and non-intrusive measurements of P- and S-wave velocities, which are key parameters for the analysis of the mechanical properties of reworked and highly fissured masses. Four seismic lines were acquired (three in the direction of the slope and one perpendicular to it). The two longest arrays are composed of 24 geophones spaced 5 m apart and have a sufficient investigation depth for a large-scale characterization of the landslide's structure with depth. The two shortest arrays, composed of 24 geophones spaced 2 m apart, bring information about the degree of fracturing between the moving material of the landslide and the competent rock. 100 g of pentrite per shot were used as seismic sources. The
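The SLBL step can be sketched in one dimension: each interior node of the topographic profile is iteratively lowered toward the mean of its neighbours plus a small negative tolerance, carving a smooth candidate failure surface beneath the slope, whose difference from the surface gives a volume estimate. The profile, tolerance, and iteration count below are invented for illustration.

```python
import numpy as np

def slbl_1d(z, tol=-2e-5, iters=20000):
    """Sloping Local Base Level, 1D sketch: iteratively lower each interior
    node toward the mean of its neighbours plus a negative tolerance."""
    s = z.astype(float).copy()
    for _ in range(iters):
        target = 0.5 * (s[:-2] + s[2:]) + tol
        s[1:-1] = np.minimum(s[1:-1], target)
    return s

x = np.linspace(0.0, 1.0, 101)
surface = 1.0 - x + 0.1 * np.sin(6 * np.pi * x)    # toy hillslope profile
base = slbl_1d(surface)

thickness = surface - base                          # depth to the SLBL surface
volume = np.sum(0.5 * (thickness[:-1] + thickness[1:]) * np.diff(x))
print(volume)                                       # failed volume per unit width
```

In practice the tolerance is chosen from the geometry of the slide, and the 3D version operates on the LiDAR-derived DEM.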
Local electric dipole moments: A generalized approach.
Groß, Lynn; Herrmann, Carmen
2016-09-30
We present an approach for calculating local electric dipole moments for fragments of molecular or supramolecular systems. This is important for understanding chemical gating and solvent effects in nanoelectronics, atomic force microscopy, and intensities in infrared spectroscopy. Owing to the nonzero partial charge of most fragments, "naively" defined local dipole moments are origin-dependent. Inspired by previous work based on Bader's atoms-in-molecules (AIM) partitioning, we derive a definition of fragment dipole moments which achieves origin-independence by relying on internal reference points. Instead of bond critical points (BCPs) as in existing approaches, we use as few reference points as possible, which are located between the fragment and the remainder(s) of the system and may be chosen based on chemical intuition. This allows our approach to be used with AIM implementations that circumvent the calculation of critical points for reasons of computational efficiency, for cases where no BCPs are found due to large interfragment distances, and with local partitioning schemes other than AIM which do not provide BCPs. It is applicable to both covalently and noncovalently bound systems. © 2016 Wiley Periodicals, Inc. PMID:27520590
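The origin-independence argument can be made concrete: for a fragment with nonzero net charge, the naive dipole depends on where the global origin sits, while a dipole taken about an internal reference point that moves with the fragment does not. The charges, positions, and reference point below are invented for illustration, not taken from the paper.

```python
import numpy as np

def fragment_dipole(charges, coords, ref):
    """Fragment dipole about an internal reference point (origin-independent)."""
    charges = np.asarray(charges, float)
    coords = np.asarray(coords, float)
    return charges @ (coords - ref)

# toy two-atom fragment with net charge -0.2 e
q = np.array([-0.6, 0.4])
r = np.array([[0.0, 0.0, 0.0],
              [1.2, 0.0, 0.0]])
ref = np.array([0.5, 0.0, 0.0])   # internal point, e.g. on the fragment boundary

mu = fragment_dipole(q, r, ref)
mu_shifted = fragment_dipole(q, r + 10.0, ref + 10.0)   # shift the whole frame
naive = q @ r                      # origin-dependent for a charged fragment
naive_shifted = q @ (r + 10.0)

print(mu, mu_shifted)              # identical: reference moves with the atoms
print(naive, naive_shifted)        # differ by (net charge) * shift
```

Choosing the reference point between the fragment and the rest of the system, as the paper does, is what removes the dependence on bond critical points.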
Method for localizing heating in tumor tissue
Doss, James D.; McCabe, Charles W.
1977-04-12
A method for localized tissue heating of tumors is disclosed. Localized radio-frequency current fields are produced with specific electrode configurations. Several electrode configurations are disclosed, enabling variations in the electrical and thermal properties of tissues to be exploited.
LOCALIZING THE RANGELAND HEALTH METHOD FOR SOUTHEASTERN ARIZONA
The interagency manual Interpreting Indicators of Rangeland Health, Version 4 (Technical Reference 1734-6) provides a method for making rangeland health assessments. The manual recommends that the rangeland health assessment approach be adapted to local conditions. This technica...
Methods and strategies of object localization
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.
1989-01-01
An important property of an intelligent robot is the ability to determine the location of an object in 3-D space. A general object localization system structure is proposed, some important issues in localization are discussed, and an overview is given of currently available object localization algorithms and systems. The algorithms reviewed are characterized by their feature extraction and matching strategies; their range-finding methods; the types of objects they can locate; and their mathematical formulations.
Local Physical Coordinates from Symplectic Projector Method
NASA Astrophysics Data System (ADS)
de Andrade, M. A.; Santos, M. A.; Vancea, I. V.
The basic arguments underlying the symplectic projector method are presented. By this method, local free coordinates on the constraint surface can be obtained for a broader class of constrained systems. Some interesting examples are analyzed.
Approaches to local climate action in Colorado
NASA Astrophysics Data System (ADS)
Huang, Y. D.
2011-12-01
Though climate change is a global problem, its impacts are felt at the local scale; it follows that solutions must come at the local level. Fortunately, many cities and municipalities are implementing climate mitigation (or climate action) policies and programs. However, they face many procedural and institutional barriers in these efforts, such as a lack of expertise or data, limited human and financial resources, and a lack of community engagement (Krause 2011). To address the first obstacle, thirteen in-depth case studies were conducted of successful model practices ("best practices") in climate action programs carried out by various cities, counties, and organizations in Colorado, and one outside Colorado, and developed into "how-to guides" for other municipalities to use. Research was conducted by reading documents (e.g., annual reports, community guides, city websites), through email correspondence with program managers and city officials, and via phone interviews. The information gathered was then compiled into a series of reports containing a narrative description of each initiative; an overview of the plan elements (target audience and goals); implementation strategies and any indicators of success to date (e.g., GHG emissions reductions, cost savings); and the adoption or approval process, as well as community engagement efforts and marketing or messaging strategies. The types of programs covered were energy action plans, energy efficiency programs, renewable energy programs, and transportation and land use programs. Across the thirteen case studies, there was a range of approaches to implementing local climate action programs, examined along two dimensions: focus on climate change (whether direct/explicit or indirect/implicit) and extent of government authority. This benchmarking exercise affirmed the conventional wisdom propounded by Pitt (2010) that peer pressure (that is, the presence of neighboring jurisdictions with climate initiatives), the level of
A Localization Method for Multistatic SAR Based on Convex Optimization.
Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu
2015-01-01
In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target location. However, DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of BRS estimation error on localization accuracy is analysed. Firstly, using the information of each transmitter/receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the ellipse that is the iso-range contour for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by summing all the model functions. Thirdly, the target function is optimized by a gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and improve the computational efficiency. The proposed method utilizes only the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031
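A simplified stand-in for the optimization step: instead of maximizing the paper's summed model functions (and without the principal component analysis step), this sketch minimizes the sum of squared BRS residuals by plain gradient descent. The transmitter/receiver geometry, learning rate, and iteration count are invented for illustration.

```python
import numpy as np

def brs(p, t, r):
    """Bistatic range sum for one transmitter/receiver (T/R) pair."""
    return np.linalg.norm(p - t) + np.linalg.norm(p - r)

def locate(pairs, meas, p0, lr=0.02, iters=5000):
    """Gradient descent on the sum of squared BRS residuals."""
    p = np.asarray(p0, float).copy()
    for _ in range(iters):
        g = np.zeros(2)
        for (t, r), rho in zip(pairs, meas):
            res = brs(p, t, r) - rho
            g += 2.0 * res * ((p - t) / np.linalg.norm(p - t) +
                              (p - r) / np.linalg.norm(p - r))
        p -= lr * g
    return p

pairs = [(np.array([0.0, 0.0]), np.array([10.0, 0.0])),
         (np.array([0.0, 0.0]), np.array([0.0, 10.0])),
         (np.array([10.0, 0.0]), np.array([0.0, 10.0]))]
truth = np.array([4.0, 3.0])
meas = [brs(truth, t, r) for t, r in pairs]   # noiseless BRS "measurements"

est = locate(pairs, meas, p0=np.array([1.0, 1.0]))
print(est)   # converges toward (4, 3)
```

Each residual vanishes on one iso-range ellipse; with three well-placed T/R pairs the ellipses intersect at a single point, which the descent recovers.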
A novel eye localization method with rotation invariance.
Ren, Yan; Wang, Shuang; Hou, Biao; Ma, Jingjing
2014-01-01
This paper presents a novel learning method for precise eye localization, a challenge that must be solved to improve the performance of face processing algorithms. Few existing approaches can directly detect and localize eyes at arbitrary angles in predicted eye regions, face images, and original portraits at the same time. To preserve rotation invariance throughout the entire eye localization framework, a codebook of invariant local features is proposed for the representation of eye patterns. A heat map is then generated by integrating a 2-class sparse representation classifier with a pyramid-like detecting and locating strategy to fulfill the task of discriminative classification and precise localization. Furthermore, prior information is incorporated to improve the localization precision and accuracy. Experimental results on three different databases show that our method is capable of effectively locating eyes in arbitrary rotation situations (360° in plane). PMID:24184729
Emergency local searching approach for job shop scheduling
NASA Astrophysics Data System (ADS)
Zhao, Ning; Chen, Siyu; Du, Yanhua
2013-09-01
Existing methods of local search mostly focus on how to reach an optimal solution. However, in some emergency situations, search time is a hard constraint for the job shop scheduling problem while an optimal solution is not necessary. In such situations, existing local search methods are not fast enough. This paper presents an emergency local search (ELS) approach which can reach a feasible and nearly optimal solution within a limited search time. The ELS approach is designed for emergency situations where search time is limited and a nearly optimal solution is sufficient, and it consists of three phases. Firstly, in order to reach a feasible and nearly optimal solution, infeasible solutions are repaired, and a repair technique named group repair is proposed. Secondly, in order to save time, the number of local search moves must be reduced; this is achieved by a quick search method named critical path search (CPS). Finally, because CPS sometimes stops at a solution far from the optimal one, a jump technique based on the critical part is used to escape this search dilemma and improve CPS. Furthermore, a scheduling system based on ELS has been developed, and experiments were completed on an Intel Pentium(R) 2.93 GHz computer. The experimental results show that optimal solutions for small-scale instances are reached within 2 s, and nearly optimal solutions for large-scale instances are reached within 4 s. The proposed ELS approach stably reaches nearly optimal solutions within a manageable search time and can be applied in emergency situations.
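The time-budgeted idea can be sketched independently of the paper's three phases: a local search that returns the best solution found when the deadline expires, rather than insisting on optimality. It is shown here on a toy single-machine sequencing problem (total weighted completion time) rather than a full job shop; the jobs, budget, and neighborhood are invented.

```python
import random
import time

def emergency_local_search(cost, neighbors, start, budget_s=0.05):
    """Deadline-bounded hill climbing: return the best solution found in time."""
    deadline = time.monotonic() + budget_s
    best, best_c = start, cost(start)
    while time.monotonic() < deadline:
        cand = random.choice(neighbors(best))
        c = cost(cand)
        if c < best_c:                               # accept only improvements
            best, best_c = cand, c
    return best, best_c

jobs = [(3, 1.0), (1, 4.0), (2, 2.0), (4, 3.0)]      # (duration, weight)

def cost(seq):
    """Total weighted completion time of a job sequence."""
    t, total = 0, 0.0
    for i in seq:
        t += jobs[i][0]
        total += jobs[i][1] * t
    return total

def neighbors(seq):
    """All adjacent-swap neighbors of a sequence."""
    out = []
    for i in range(len(seq) - 1):
        s = list(seq)
        s[i], s[i + 1] = s[i + 1], s[i]
        out.append(tuple(s))
    return out

best, c = emergency_local_search(cost, neighbors, (0, 1, 2, 3))
print(best, c)
```

For this toy problem, adjacent-swap local optima coincide with the global optimum (the weighted-shortest-processing-time order), so even this crude search finds it well within the budget; the paper's group repair and CPS techniques address the much harder job-shop neighborhood.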
Local participation in natural resource monitoring: a characterization of approaches.
Danielsen, Finn; Burgess, Neil D; Balmford, Andrew; Donald, Paul F; Funder, Mikkel; Jones, Julia P G; Alviola, Philip; Balete, Danilo S; Blomley, Tom; Brashares, Justin; Child, Brian; Enghoff, Martin; Fjeldså, Jon; Holt, Sune; Hübertz, Hanne; Jensen, Arne E; Jensen, Per M; Massao, John; Mendoza, Marlynn M; Ngaga, Yonika; Poulsen, Michael K; Rueda, Ricardo; Sam, Moses; Skielboe, Thomas; Stuart-Hill, Greg; Topp-Jørgensen, Elmer; Yonten, Deki
2009-02-01
The monitoring of trends in the status of species or habitats is routine in developed countries, where it is funded by the state or large nongovernmental organizations and often involves large numbers of skilled amateur volunteers. Far less monitoring of natural resources takes place in developing countries, where state agencies have small budgets, there are fewer skilled professionals or amateurs, and socioeconomic conditions prevent the development of a culture of volunteerism. The resulting lack of knowledge about trends in species and habitats presents a serious challenge for detecting, understanding, and reversing declines in natural resource values. International environmental agreements require signatories to undertake systematic monitoring of their natural resources, but no system exists to guide the development and expansion of monitoring schemes. To help develop such a protocol, we suggest a typology of monitoring categories defined by their degree of local participation, ranging from no local involvement, with monitoring undertaken by professional researchers, to an entirely local effort, with monitoring undertaken by local people. We assessed the strengths and weaknesses of each monitoring category and the potential of each to be sustainable in developed or developing countries. Locally based monitoring is particularly relevant in developing countries, where it can lead to rapid decisions to address the key threats affecting natural resources, can empower local communities to better manage their resources, and can refine sustainable-use strategies to improve local livelihoods. Nevertheless, we recognize that the accuracy and precision of the monitoring undertaken by local communities in different situations need further study, and field protocols need to be further developed to realize the unrealized potential of this approach. A challenge to conservation biologists is to identify and establish the monitoring system most relevant to a particular
P37: Locally advanced thymoma-robotic approach
Asaf, Belal B.; Kumar, Arvind
2015-01-01
Background: The conventional approach to locally advanced thymoma has been via a sternotomy. VATS and robotic thymectomies have been described but are typically reserved for patients with myasthenia gravis only or for small, encapsulated thymic tumors. There have been few reports of minimally invasive resection of locally advanced thymomas. Our objective is to present a case in which a large, locally advanced thymoma was resected en bloc with the pericardium employing a robotic-assisted thoracoscopic approach. Methods: This report describes an asymptomatic 29-year-old female found to have an 11 cm anterior mediastinal mass on CT scan. A right-sided, 4-port robotic approach was utilized, with the camera port in the 5th intercostal space at the anterior axillary line and two accessory ports for robotic arms 1 and 2 in the 3rd intercostal space at the anterior axillary line and the 8th intercostal space at the anterior axillary line. A 5 mm port was placed between the camera and the 2nd robotic arm for assistance. On exploration, the mass was found to be adherent to the pericardium, which was resected en bloc via anterior pericardiectomy. Her post-operative course was uncomplicated, and she was discharged home on postoperative day 1. Results: Final pathology revealed an 11 cm × 7.5 cm × 3.0 cm WHO class B2 thymoma invading the pericardium, TNM stage T3N0M0, with negative margins. The patient subsequently received 5,040 cGy of adjuvant radiation, and a follow-up CT scan 6 months postoperatively showed no evidence of disease. Conclusions: Very little data exist demonstrating the efficacy of resecting locally advanced thymomas using a minimally invasive approach. Our case demonstrates that a robotic-assisted thoracoscopic approach is feasible for performing thymectomy for locally advanced thymomas. This may help limit the morbidity of a trans-sternal approach while achieving comparable oncologic results. However, further studies are needed to evaluate its efficacy and long term
Enhanced Methods for Local Ancestry Assignment in Sequenced Admixed Individuals
Brown, Robert; Pasaniuc, Bogdan
2014-01-01
Inferring the ancestry at each locus in the genome of recently admixed individuals (e.g., Latino Americans) plays a major role in medical and population genetic inference, ranging from finding disease-risk loci, to inferring recombination rates, to mapping missing contigs in the human genome. Although many methods for local ancestry inference have been proposed, most are designed for use with genotyping arrays and fail to make use of the full spectrum of data available from sequencing. In addition, current haplotype-based approaches are very computationally demanding, requiring large computational time for moderately large sample sizes. Here we present new methods for local ancestry inference that leverage continent-specific variants (CSVs) to attain increased performance over existing approaches in sequenced admixed genomes. A key feature of our approach is that it incorporates the admixed genomes themselves jointly with public datasets, such as 1000 Genomes, to improve the accuracy of CSV calling. We use simulations to show that our approach attains accuracy similar to widely used, computationally intensive haplotype-based approaches with large decreases in runtime. Most importantly, we show that our method recovers local ancestries comparable to the 1000 Genomes consensus local ancestry calls in the real admixed individuals from the 1000 Genomes Project. We extend our approach to account for low-coverage sequencing and show that accurate local ancestry inference can be attained at low sequencing coverage. Finally, we generalize CSVs to sub-continental population-specific variants (sCSVs) and show that in some cases it is possible to determine the sub-continental ancestry of short chromosomal segments on the basis of sCSVs. PMID:24743331
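A toy version of the CSV idea: if each continent-specific variant is diagnostic of one ancestry, then window-wise majority voting over the CSV alleles a haplotype carries recovers its local ancestry. The variants, segment layout, and window size below are fabricated for illustration and are far simpler than the paper's probabilistic model.

```python
import numpy as np

def assign_windows(csv_anc, carried, window=5):
    """Majority vote over observed CSVs in each window.
    csv_anc[i] is the ancestry (0 or 1) that CSV i tags;
    carried[i] is 1 if the haplotype carries that CSV allele."""
    labels = []
    for s in range(0, len(csv_anc), window):
        a, c = csv_anc[s:s + window], carried[s:s + window]
        votes = np.bincount(a[c == 1], minlength=2)
        labels.append(int(votes.argmax()))
    return labels

# toy haplotype: ancestry 0, then an introgressed segment of ancestry 1, then 0
truth = np.repeat([0, 1, 0], 5)
csv_anc = np.tile([0, 1, 0, 1, 0], 3)        # which ancestry each CSV tags
carried = (csv_anc == truth).astype(int)     # carried iff the CSV matches the truth

print(assign_windows(csv_anc, carried))      # -> [0, 1, 0]
```

Real data adds sequencing error and low coverage, which is why the paper pools the admixed genomes with reference panels to call CSVs accurately.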
Optic disk localization by a robust fusion method
NASA Astrophysics Data System (ADS)
Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin
2013-02-01
Optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the largest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and the neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary accuracy on the STARE database is 81.5%, while a higher accuracy of 99% is obtained on the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.
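The abstract does not specify the fusion rule, so the following is only a generic agreement-based sketch of combining several detectors: if a majority of candidate locations agree to within a pixel tolerance, their confidence-weighted centroid is returned; otherwise the single most confident detector wins. All thresholds and the candidate format are assumptions.

```python
import math

def fuse_candidates(candidates, agree_dist=30.0):
    """Fuse optic-disk location candidates (x, y, confidence) from several
    detectors. If two or more candidates agree to within agree_dist pixels,
    return their confidence-weighted centroid; otherwise fall back to the
    highest-confidence candidate. (Illustrative rule only; the paper's
    'intelligent fusion' is not described in the abstract.)"""
    best_cluster = []
    for (xi, yi, _) in candidates:
        cluster = [c for c in candidates
                   if math.hypot(c[0] - xi, c[1] - yi) <= agree_dist]
        if len(cluster) > len(best_cluster):
            best_cluster = cluster
    if len(best_cluster) >= 2:
        w = sum(c[2] for c in best_cluster)
        return (sum(c[0] * c[2] for c in best_cluster) / w,
                sum(c[1] * c[2] for c in best_cluster) / w)
    return max(candidates, key=lambda c: c[2])[:2]

# Two detectors agree near (100, 120); the third is a confident outlier.
loc = fuse_candidates([(100, 120, 0.9), (104, 118, 0.8), (300, 40, 0.95)])
```

The point of such a rule is robustness: a single detector fooled by a bright lesion is outvoted by the two that agree.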
System and method for object localization
NASA Technical Reports Server (NTRS)
Kelly, Alonzo J. (Inventor); Zhong, Yu (Inventor)
2005-01-01
A computer-assisted method for localizing a rack, including sensing an image of the rack, detecting line segments in the sensed image, recognizing a candidate arrangement of line segments in the sensed image indicative of a predetermined feature of the rack, generating a matrix of correspondence between the candidate arrangement of line segments and an expected position and orientation of the predetermined feature of the rack, and estimating a position and orientation of the rack based on the matrix of correspondence.
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate significant computer savings with minimal loss in accuracy.
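The global/local workflow can be caricatured in a few lines: a cheap "global" limit state is used to locate the MPP in standard normal space (here with the standard Hasofer-Lind-Rackwitz-Fiessler iteration), and a refined "local" model evaluated only at that one point supplies a first-order correction to the reliability index and hence the cdf value. Both limit-state functions below are invented toy examples, not NESSUS models.

```python
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hlrf_mpp(g, grad_g, u0, iters=50):
    """Most Probable Point of the limit state g(u) = 0 in standard normal
    space via the Hasofer-Lind-Rackwitz-Fiessler iteration."""
    u = list(u0)
    for _ in range(iters):
        gv, gr = g(u), grad_g(u)
        norm2 = sum(gi * gi for gi in gr)
        dot = sum(ui * gi for ui, gi in zip(u, gr))
        # Project onto the linearized limit state.
        u = [(dot - gv) * gi / norm2 for gi in gr]
    return u

# Cheap "global" model (toy, already in standard normal space).
g_global = lambda u: 3.0 - u[0] - u[1]
grad_global = lambda u: [-1.0, -1.0]

u_mpp = hlrf_mpp(g_global, grad_global, [0.0, 0.0])
beta_global = math.sqrt(sum(ui * ui for ui in u_mpp))

# Refined "local" model (hypothetical), evaluated only at the global MPP;
# a first-order correction shifts beta by g_local(u*) / |grad g|.
g_local = lambda u: 3.2 - u[0] - u[1]
beta_local = beta_global + g_local(u_mpp) / math.sqrt(2.0)

p_fail_global = std_normal_cdf(-beta_global)
p_fail_local = std_normal_cdf(-beta_local)
```

The expensive model is thus called once per probability level instead of once per sample, which is the source of the computer savings the abstract reports.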
Ultraviolet C irradiation: an alternative antimicrobial approach to localized infections?
Dai, Tianhong; Vrahas, Mark S; Murray, Clinton K; Hamblin, Michael R
2012-01-01
This review discusses the potential of ultraviolet C (UVC) irradiation as an alternative approach to current methods used to treat localized infections. It has been reported that multidrug-resistant microorganisms are equally sensitive to UVC irradiation as their wild-type counterparts. With appropriate doses, UVC may selectively inactivate microorganisms while preserving viability of mammalian cells and, moreover, is reported to promote wound healing. UVC is also found in animal studies to be less damaging to tissue than UVB. Even though UVC may produce DNA damage in mammalian cells, it can be rapidly repaired by DNA repair enzymes. If UVC irradiation is repeated excessively, resistance of microorganisms to UVC inactivation may develop. In summary, UVC should be investigated as an alternative approach to current methods used to treat localized infections, especially those caused by multidrug-resistant microorganisms. UVC should be used in a manner such that the side effects would be minimized and resistance of microorganisms to UVC would be avoided. PMID:22339192
Meshless Local Petrov-Galerkin Method for Bending Problems
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Raju, Ivatury S.
2002-01-01
Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.
Invariant current approach to wave propagation in locally symmetric structures
NASA Astrophysics Data System (ADS)
Zampetakis, V. E.; Diakonou, M. K.; Morfonios, C. V.; Kalozoumis, P. A.; Diakonos, F. K.; Schmelcher, P.
2016-05-01
A theory for wave mechanical systems with local inversion and translation symmetries is developed employing the two-dimensional solution space of the stationary Schrödinger equation. The local symmetries of the potential are encoded into corresponding local basis vectors in terms of symmetry-induced two-point invariant currents which map the basis amplitudes between symmetry-related points. A universal wavefunction structure in locally symmetric potentials is revealed, independently of the physical boundary conditions, by using special local bases which are adapted to the existing local symmetries. The local symmetry bases enable efficient computation of spatially resolved wave amplitudes in systems with arbitrary combinations of local inversion and translation symmetries. The approach opens the perspective of a flexible analysis and control of wave localization in structurally complex systems.
Fehler, M.C.; Huang, L.-J.
1998-12-10
During the past few years, there has been interest in developing migration and forward modeling approaches that are both fast and reliable particularly in regions that have rapid spatial variations in structure. The authors have been investigating a suite of modeling and migration methods that are implemented in the wavenumber-space domains and operate on data in the frequency domain. The best known example of these methods is the split-step Fourier method (SSF). Two of the methods that the authors have developed are the extended local Born Fourier (ELBF) approach and the extended local Rytov Fourier (ELRF) approach. Both methods are based on solutions of the scalar (constant density) wave equation, are computationally fast and can reliably model effects of both deterministic and random structures. The authors have investigated their reliability for migrating both 2D synthetic data and real 2D field data. The authors have found that the methods give images that are better than those that can be obtained using other methods like the SSF and Kirchhoff migration approaches. More recently, the authors have developed an approach for solving the acoustic (variable density) wave equation and have begun to investigate its applicability for modeling one-way wave propagation. The methods will be introduced and their ability to model seismic wave propagation and migrate seismic data will be investigated. The authors will also investigate their capability to model forward wave propagation through random media and to image zones of small scale heterogeneity such as those associated with zones of high permeability.
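The split-step Fourier method that ELBF and ELRF extend can be sketched for a single depth step of one-way monochromatic propagation: an exact phase advance in the lateral-wavenumber domain for a reference velocity, followed by a phase screen in space for the lateral velocity perturbation. The plain O(n^2) DFT below keeps the sketch dependency-free; a real implementation would use an FFT, and the treatment of evanescent energy here (simply discarded) is a simplification.

```python
import cmath, math

def ssf_step(field, dx, dz, omega, c0, c_profile):
    """One split-step Fourier (SSF) depth step for a 1-D lateral field:
    homogeneous propagation at reference speed c0 in the wavenumber domain,
    then a spatial phase screen for the perturbation c_profile(x) - c0."""
    n = len(field)
    k0 = omega / c0
    # Forward DFT (stand-in for numpy.fft.fft).
    F = [sum(field[m] * cmath.exp(-2j * math.pi * j * m / n) for m in range(n))
         for j in range(n)]
    out = []
    for j, Fj in enumerate(F):
        kx = 2 * math.pi * (j if j <= n // 2 else j - n) / (n * dx)
        kz2 = k0 * k0 - kx * kx
        if kz2 > 0:
            out.append(Fj * cmath.exp(1j * math.sqrt(kz2) * dz))
        else:
            out.append(0.0)  # discard evanescent components
    # Inverse DFT, then the split-step phase screen.
    field_out = []
    for m in range(n):
        v = sum(out[j] * cmath.exp(2j * math.pi * j * m / n) for j in range(n)) / n
        dphi = omega * (1.0 / c_profile[m] - 1.0 / c0) * dz
        field_out.append(v * cmath.exp(1j * dphi))
    return field_out

# A plane wave stepped through a homogeneous medium (c = c0 everywhere)
# simply acquires the phase exp(i * k0 * dz) with unit amplitude.
demo = ssf_step([1.0] * 8, dx=1.0, dz=2.0, omega=10.0, c0=5.0, c_profile=[5.0] * 8)
```

The ELBF/ELRF methods replace the simple phase screen with local Born or Rytov scattering corrections, which is what improves accuracy in strongly heterogeneous media.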
Improving mobile robot localization: grid-based approach
NASA Astrophysics Data System (ADS)
Yan, Junchi
2012-02-01
Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on the ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors that are mounted under the bottom of the robot in an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
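The fusion step reduces to a simple observation: at the instant the sensors report crossing a grid line, the robot's coordinate along the crossed axis must lie on a multiple of the grid pitch, so the drifting inertial estimate can be snapped to the nearest line. This is a one-line caricature of the paper's correction (which also uses the triangular sensor layout to infer line direction); function and parameter names are invented.

```python
def correct_on_crossing(estimate, axis, pitch):
    """On a grid-line crossing event, snap the inertial position estimate
    along the crossed axis ('x' for a vertical line, 'y' for a horizontal
    line) to the nearest multiple of the grid pitch, cancelling the
    dead-reckoning drift accumulated along that axis."""
    x, y = estimate
    if axis == "x":
        x = round(x / pitch) * pitch
    else:
        y = round(y / pitch) * pitch
    return (x, y)

# Inertial estimate has drifted to x = 2.93 m when the sensors report a
# vertical line of a 0.5 m grid: x is corrected to 3.0 m, y is untouched.
corrected = correct_on_crossing((2.93, 1.21), "x", 0.5)
```

The snap is only valid while the accumulated drift stays below half the grid pitch, which is exactly the kind of condition the paper's error-bound analysis must establish.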
Sensitivity analysis for nonrandom dropout: a local influence approach.
Verbeke, G; Molenberghs, G; Thijs, H; Lesaffre, E; Kenward, M G
2001-03-01
Diggle and Kenward (1994, Applied Statistics 43, 49-93) proposed a selection model for continuous longitudinal data subject to nonrandom dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions on which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. This paper presents a formal and flexible approach to such a sensitivity assessment based on local influence (Cook, 1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The influence of perturbing a missing-at-random dropout model in the direction of nonrandom dropout is explored. The method is applied to data from a randomized experiment on the inhibition of testosterone production in rats. PMID:11252620
Using the Storypath Approach to Make Local Government Understandable
ERIC Educational Resources Information Center
McGuire, Margit E.; Cole, Bronwyn
2008-01-01
Learning about local government seems boring and irrelevant to most young people, particularly to students from high-poverty backgrounds. The authors explore a promising approach for solving this problem, Storypath, which engages students in authentic learning and active citizenship. The Storypath approach is based on a narrative in which students…
A PDE-Based Fast Local Level Set Method
NASA Astrophysics Data System (ADS)
Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo
1999-11-01
We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys. 79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.
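The localization idea can be illustrated in 1-D: reset the level set function to a signed distance function, but only inside a narrow tube around the interface, so the work per step is proportional to the tube size rather than the whole grid. For brevity this sketch computes the distance geometrically from interpolated interface locations; the paper instead evolves a reinitialization PDE, which is what keeps the interface from moving.

```python
def local_reinitialize(phi, dx, tube_width):
    """Reset a 1-D level set function phi to a signed distance function,
    updating only grid points within tube_width of the interface (the
    localization idea; a direct geometric stand-in for the paper's
    PDE-based reinitialization)."""
    n = len(phi)
    # Locate interface points by linear interpolation at sign changes.
    crossings = []
    for i in range(n - 1):
        if phi[i] == 0.0:
            crossings.append(i * dx)
        elif phi[i] * phi[i + 1] < 0:
            t = phi[i] / (phi[i] - phi[i + 1])
            crossings.append((i + t) * dx)
    out = list(phi)
    if not crossings:
        return out
    for i in range(n):
        d = min(abs(i * dx - xc) for xc in crossings)
        if d <= tube_width:  # O(band) work: points far from the front keep
            out[i] = d if phi[i] > 0 else -d  # their (stale) values
    return out

# A distorted level set with its zero crossing at x = 2.5: inside the tube
# the values become true signed distances; outside they are left alone.
demo = local_reinitialize([-5.0, -3.0, -1.0, 1.0, 3.0, 5.0], dx=1.0, tube_width=1.6)
```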
Locally Compact Quantum Groups. A von Neumann Algebra Approach
NASA Astrophysics Data System (ADS)
Van Daele, Alfons
2014-08-01
In this paper, we give an alternative approach to the theory of locally compact quantum groups, as developed by Kustermans and Vaes. We start with a von Neumann algebra and a comultiplication on this von Neumann algebra. We assume that there exist faithful left and right Haar weights. Then we develop the theory within this von Neumann algebra setting. In [Math. Scand. 92 (2003), 68-92] locally compact quantum groups are also studied in the von Neumann algebraic context. This approach is independent of the original C^*-algebraic approach in the sense that the earlier results are not used. However, that paper is not really independent because for many proofs, the reader is referred to the original paper where the C^*-version is developed. In this paper, we give a completely self-contained approach. Moreover, at various points, we do things differently. We have a different treatment of the antipode. It is similar to the original treatment in [Ann. Sci. École Norm. Sup. (4) 33 (2000), 837-934]. But together with the fact that we work in the von Neumann algebra framework, it allows us to use an idea from [Rev. Roumaine Math. Pures Appl. 21 (1976), 1411-1449] to obtain the uniqueness of the Haar weights in an early stage. We take advantage of this fact when deriving the other main results in the theory. We also give a slightly different approach to duality. Finally, we collect, in a systematic way, several important formulas. In an appendix, we indicate very briefly how the C^*-approach and the von Neumann algebra approach eventually yield the same objects. The passage from the von Neumann algebra setting to the C^*-algebra setting is more or less standard. For the other direction, we use a new method. It is based on the observation that the Haar weights on the C^*-algebra extend to weights on the double dual with central support and that all these supports are the same. Of course, we get the von Neumann algebra by cutting down the double dual with this unique
Novel Approach to Job's Method.
ERIC Educational Resources Information Center
Hill, Zachary D.; Macarthy, Patrick
1986-01-01
Job's method of continuous variations is a commonly used procedure for determining the composition of complexes in solution. Presents: (1) a review of the method; (2) theory of a new procedure for measuring Job's plots; and (3) an undergraduate experiment using the new method. (JN)
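For readers unfamiliar with the method of continuous variations: the total concentration of metal plus ligand is held fixed while their mole fractions are varied, and the measured signal (typically absorbance of the complex) is plotted against mole fraction. For an MLn complex with strong binding the plot peaks at x = n/(n+1), so the peak location reveals the stoichiometry. A minimal numerical illustration (idealized data, not from the article):

```python
def job_peak(mole_fractions, signals):
    """Mole fraction of metal at which the signal is maximal in a
    continuous-variations (Job) experiment."""
    return mole_fractions[signals.index(max(signals))]

# Ideal 1:1 complexation with strong binding: the complex concentration,
# and hence the absorbance, is proportional to x * (1 - x), so the Job's
# plot peaks at x = 0.5, i.e. n/(n+1) with n = 1.
xs = [i / 10 for i in range(11)]
absorbance = [x * (1 - x) for x in xs]
peak = job_peak(xs, absorbance)  # 0.5 -> 1:1 stoichiometry
```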
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
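The inversion rests on Beer's law: for a path of length L with uniform extinction coefficient k, the transmittance is T = exp(-kL), so k = -ln(T)/L and a discrete probability function (DPF) of T maps directly into a DPF of k. The one-zone sketch below shows only this change of variables; the paper's DPF method deconvolves the full multi-zone path, and the numbers are invented.

```python
import math

def extinction_dpf(transmittance_dpf, path_length):
    """Map a discrete probability function of path-integrated transmittance
    into a DPF of the (assumed path-uniform) local extinction coefficient
    via Beer's law T = exp(-k * L), i.e. k = -ln(T) / L.

    transmittance_dpf: list of (T, probability) pairs."""
    return [(-math.log(T) / path_length, p) for T, p in transmittance_dpf]

# A measured transmittance distribution over a 0.1 m path.
dpf_T = [(0.90, 0.2), (0.80, 0.5), (0.70, 0.3)]
dpf_k = extinction_dpf(dpf_T, 0.1)  # extinction coefficients in 1/m
```

Because the mapping is monotone, the probabilities carry over unchanged; only the abscissa is transformed.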
An improved image deconvolution approach using local constraint
NASA Astrophysics Data System (ADS)
Zhao, Jufeng; Feng, Huajun; Xu, Zhihai; Li, Qi
2012-03-01
Conventional deblurring approaches such as the Richardson-Lucy (RL) algorithm introduce strong noise and ringing artifacts even when the point spread function (PSF) is known. Since it is difficult to estimate an accurate PSF in a real imaging system, the results of these algorithms degrade further. A spatial weight matrix (SWM) is adopted as a local constraint and incorporated into an image statistical prior to improve the RL approach. Experiments show that our approach strikes a good balance between preserving image details and suppressing ringing artifacts and noise.
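A 1-D sketch of Richardson-Lucy with a per-pixel damping weight standing in for the paper's spatial weight matrix: the standard multiplicative RL update is raised to a power in [0, 1] so that smooth regions (small weight) update less and ring less. The weight construction here is a placeholder; the paper derives its SWM from local image statistics.

```python
def conv(signal, kernel):
    """Circular 1-D convolution with a centered kernel."""
    n, m = len(signal), len(kernel)
    half = m // 2
    return [sum(kernel[j] * signal[(i + j - half) % n] for j in range(m))
            for i in range(n)]

def rl_deconvolve(blurred, psf, iters=30, weights=None):
    """Richardson-Lucy deconvolution with an optional per-pixel weight in
    [0, 1] damping the multiplicative update (a crude stand-in for a
    spatial weight matrix: weight 0 freezes a pixel, 1 is plain RL)."""
    n = len(blurred)
    w = weights or [1.0] * n
    psf_flipped = psf[::-1]
    estimate = [1.0] * n
    for _ in range(iters):
        reblurred = conv(estimate, psf)
        ratio = [b / max(r, 1e-12) for b, r in zip(blurred, reblurred)]
        correction = conv(ratio, psf_flipped)
        estimate = [e * (c ** wi) for e, c, wi in zip(estimate, correction, w)]
    return estimate

# Demo: blur a spike with a known PSF, then sharpen it back.
psf = [0.25, 0.5, 0.25]
truth = [0.1] * 9
truth[4] = 5.0
blurred = conv(truth, psf)
restored = rl_deconvolve(blurred, psf, iters=30)
```

With all weights at 1 this reduces to plain RL; the interesting regime is weights near 0 in flat regions, which is what suppresses the ringing the abstract mentions.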
Electroablation: a method for neurectomy and localized tissue injury
2014-01-01
Background Tissue injury has been employed to study diverse biological processes such as regeneration and inflammation. In addition to physical or surgical based methods for tissue injury, current protocols for localized tissue damage include laser and two-photon wounding, which allow a high degree of accuracy, but are expensive and difficult to apply. In contrast, electrical injury is a simple and inexpensive technique, which allows reproducible and localized cell or tissue damage in a variety of contexts. Results We describe a novel technique that combines the advantages of zebrafish for in vivo visualization of cells with those of electrical injury methods in a simple and versatile protocol which allows the study of regeneration and inflammation. The source of the electrical pulse is a microelectrode that can be placed with precision adjacent to specific cells expressing fluorescent proteins. We demonstrate the use of this technique in zebrafish larvae by damaging different cell types and structures. Neurectomy can be carried out in peripheral nerves or in the spinal cord allowing the study of degeneration and regeneration of nerve fibers. We also apply this method for the ablation of single lateral line mechanosensory neuromasts, showing the utility of this approach as a tool for the study of organ regeneration. In addition, we show that electrical injury induces immune cell recruitment to damaged tissues, allowing in vivo studies of leukocyte dynamics during inflammation within a confined and localized injury. Finally, we show that it is possible to apply electroablation as a method of tissue injury and inflammation induction in adult fish. Conclusions Electrical injury using a fine microelectrode can be used for axotomy of neurons, as a general tissue ablation tool and as a method to induce a powerful inflammatory response. We demonstrate its utility to studies in both larvae and in adult zebrafish but we expect that this technique can be readily applied to
Quadratic function approaching method for magnetotelluric sounding data inversion
Liangjun, Yan; Wenbao, Hu; Zhang, Keni
2004-04-05
The quadratic function approaching method (QFAM) is introduced for magnetotelluric (MT) sounding data inversion. The method exploits the fact that a quadratic function has a single extreme value, which keeps the inversion from settling into a local minimum and ensures global minimization of the objective function. The method requires neither calculation of a sensitivity matrix nor a strict initial earth model. Examples with synthetic data and field measurement data indicate that the proposed inversion method is effective.
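The driving idea, that a quadratic has exactly one extremum one can jump to, is the same one behind successive parabolic interpolation, sketched below in 1-D as an illustration. This is not the paper's multi-dimensional QFAM algorithm, only a caricature of letting a fitted quadratic model steer the search.

```python
def quadratic_step(f, x0, x1, x2):
    """Fit a parabola through (x0, f0), (x1, f1), (x2, f2) and return the
    abscissa of its vertex, i.e. the single extremum a quadratic has."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

def minimize_quadratic_approach(f, x0, x1, x2, iters=40):
    """Successive parabolic interpolation: repeatedly jump to the fitted
    parabola's vertex, keeping the three most recent points."""
    pts = [x0, x1, x2]
    for _ in range(iters):
        den = (pts[1] - pts[0]) * (f(pts[1]) - f(pts[2])) \
            - (pts[1] - pts[2]) * (f(pts[1]) - f(pts[0]))
        if den == 0:  # degenerate fit: stop
            break
        x_new = quadratic_step(f, *pts)
        if abs(x_new - pts[-1]) < 1e-12:
            break
        pts = [pts[1], pts[2], x_new]
    return pts[-1]

# On an exactly quadratic objective the very first vertex jump lands on
# the minimizer, mirroring the "single extreme value" argument of QFAM.
x_min = minimize_quadratic_approach(lambda x: (x - 3.0) ** 2 + 1.0, 0.0, 1.0, 2.0)
```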
Performance of FFT methods in local gravity field modelling
NASA Technical Reports Server (NTRS)
Forsberg, Rene; Solheim, Dag
1989-01-01
Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
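In the flat-earth approximation, Stokes' integral for geoid heights from gravity anomalies becomes a convolution, which the FFT turns into a per-frequency multiplication: N_hat = dg_hat / (2*pi*gamma*|f|), with the undetermined zero frequency set to zero. The sketch below uses this common planar form with a plain O(N^2) DFT for self-containment; conventions and constants vary between references, and a real implementation would use numpy.fft on large grids.

```python
import cmath, math

def dft2(a, sign):
    """Plain 2-D DFT (stand-in for numpy.fft.fft2/ifft2; unscaled)."""
    ny, nx = len(a), len(a[0])
    return [[sum(a[m][n] * cmath.exp(sign * 2j * math.pi * (j * m / ny + k * n / nx))
                 for m in range(ny) for n in range(nx))
             for k in range(nx)] for j in range(ny)]

def geoid_from_gravity(dg, dx, gamma=9.81):
    """Planar-approximation Stokes integration by FFT: divide the gravity
    anomaly spectrum by 2*pi*gamma*|f| and transform back. dg is a 2-D
    grid of anomalies (m/s^2) at spacing dx (m); returns geoid heights (m).
    The zero frequency (mean) is undetermined and set to zero."""
    ny, nx = len(dg), len(dg[0])
    DG = dft2(dg, -1)
    N = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        fy = (j if j <= ny // 2 else j - ny) / (ny * dx)
        for k in range(nx):
            fx = (k if k <= nx // 2 else k - nx) / (nx * dx)
            fmag = math.hypot(fx, fy)
            if fmag > 0:
                N[j][k] = DG[j][k] / (2 * math.pi * gamma * fmag)
    Ninv = dft2(N, +1)
    return [[v.real / (nx * ny) for v in row] for row in Ninv]

# Demo: a single positive anomaly on a 4 x 4 grid with 1 km spacing
# produces a geoid bump centred on the anomaly, with zero mean.
dg = [[0.0] * 4 for _ in range(4)]
dg[1][1] = 1e-4
N = geoid_from_gravity(dg, 1000.0)
```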
The Local Variational Multiscale Method for Turbulence Simulation.
Collis, Samuel Scott; Ramakrishnan, Srinivas
2005-05-01
Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable for complex geometries. Furthermore, the high-order, hierarchical representation within DG provides a natural framework for a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall-model and this success warrants future research. As in any high-order numerical method some
SubCellProt: predicting protein subcellular localization using machine learning approaches.
Garg, Prabha; Sharma, Virag; Chaudhari, Pradeep; Roy, Nilanjan
2009-01-01
High-throughput genome sequencing projects continue to churn out enormous amounts of raw sequence data. However, most of this raw sequence data is unannotated and, hence, not very useful. Among the various approaches to decipher the function of a protein, one is to determine its localization. Experimental approaches for proteome annotation including determination of a protein's subcellular localizations are very costly and labor intensive. Besides the available experimental methods, in silico methods present alternative approaches to accomplish this task. Here, we present two machine learning approaches for prediction of the subcellular localization of a protein from the primary sequence information. Two machine learning algorithms, k Nearest Neighbor (k-NN) and Probabilistic Neural Network (PNN) were used to classify an unknown protein into one of the 11 subcellular localizations. The final prediction is made on the basis of a consensus of the predictions made by two algorithms and a probability is assigned to it. The results indicate that the primary sequence derived features like amino acid composition, sequence order and physicochemical properties can be used to assign subcellular localization with a fair degree of accuracy. Moreover, with the enhanced accuracy of our approach and the definition of a prediction domain, this method can be used for proteome annotation in a high throughput manner. SubCellProt is available at www.databases.niper.ac.in/SubCellProt. PMID:19537160
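A stripped-down sketch of the first half of the pipeline: a 20-dimensional amino acid composition vector as the feature, and k-NN with a vote fraction serving as a crude confidence score. The consensus scheme and probability calibration of SubCellProt are not reproduced; the toy sequences and labels below are invented.

```python
from collections import Counter
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dimensional amino acid composition feature vector."""
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(a, 0) / n for a in AMINO_ACIDS]

def knn_predict(train, query_seq, k=3):
    """k-NN over composition features; train is a list of (sequence, label)
    pairs. Returns (majority label, fraction of neighbours agreeing), the
    fraction acting as a rough confidence in the spirit of a
    probability-weighted consensus."""
    q = aa_composition(query_seq)
    dists = sorted(
        (math.dist(q, aa_composition(s)), label) for s, label in train)
    top = [label for _, label in dists[:k]]
    label, votes = Counter(top).most_common(1)[0]
    return label, votes / k

# Toy training set: lysine/arginine-rich "nuclear" vs leucine-rich
# "membrane" sequences (caricatures, not real proteins).
train = [("KKKKRRKK", "nuclear"), ("KKKRKKRK", "nuclear"),
         ("LLLLVVLL", "membrane"), ("LLVLLLVL", "membrane")]
label, conf = knn_predict(train, "KKKKKKRR")
```

A second classifier (the paper uses a Probabilistic Neural Network) would be run on the same features and the two predictions reconciled into the final consensus call.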
Developmental differences in auditory detection and localization of approaching vehicles.
Barton, Benjamin K; Lew, Roger; Kovesdi, Casey; Cottrell, Nicholas D; Ulrich, Thomas
2013-04-01
Pedestrian safety is a significant problem in the United States, with thousands being injured each year. Multiple risk factors exist, but one poorly understood factor is pedestrians' ability to attend to vehicles using auditory cues. Auditory information in the pedestrian setting is increasing in importance with the growing number of quieter hybrid and all-electric vehicles on America's roadways that do not emit sound cues pedestrians expect from an approaching vehicle. Our study explored developmental differences in pedestrians' detection and localization of approaching vehicles. Fifty children ages 6-9 years, and 35 adults participated. Participants' performance varied significantly by age, and with increasing speed and direction of the vehicle's approach. Results underscore the importance of understanding children's and adults' use of auditory cues for pedestrian safety and highlight the need for further research. PMID:23357030
New Methods for Crafting Locally Decision-Relevant Scenarios
NASA Astrophysics Data System (ADS)
Lempert, R. J.
2015-12-01
Scenarios can play an important role in helping decision makers to imagine future worlds, both good and bad, different from the one with which we are familiar, and to take concrete steps now to address the risks generated by climate change. At their best, scenarios can effectively represent deep uncertainty; integrate over multiple domains; and enable parties with different expectations and values to expand the range of futures they consider, to see the world from different points of view, and to grapple seriously with the potential implications of surprising or inconvenient futures. These attributes of scenario processes can prove crucial in helping craft effective responses to climate change. But traditional scenario methods can also fail to overcome difficulties related to choosing, communicating, and using scenarios to identify, evaluate, and reach consensus on appropriate policies. Such challenges can limit scenarios' impact in broad public discourse. This talk will demonstrate how new decision support approaches can employ new quantitative tools that allow scenarios to emerge from a process of deliberation with analysis among stakeholders, rather than serve as inputs to it, thereby increasing the impact of scenarios on decision making. These methods are demonstrated in the design of a decision support tool that helps residents of low-lying coastal cities grapple with the long-term risks of sea level rise. In particular, the talk will show how information from the IPCC SSPs can be combined with local information to provide a rich set of locally decision-relevant information.
Dual mode stereotactic localization method and application
Keppel, Cynthia E.; Barbosa, Fernando Jorge; Majewski, Stanislaw
2002-01-01
The invention described herein combines the structural digital X-ray image provided by conventional stereotactic core biopsy instruments with the additional functional metabolic gamma imaging obtained with a dedicated compact gamma imaging mini-camera. Before the procedure, the patient is injected with an appropriate radiopharmaceutical. The radiopharmaceutical uptake distribution within the breast under compression in a conventional examination table expressed by the intensity of gamma emissions is obtained for comparison (co-registration) with the digital mammography (X-ray) image. This dual modality mode of operation greatly increases the functionality of existing stereotactic biopsy devices by yielding a much smaller number of false positives than would be produced using X-ray images alone. The ability to obtain both the X-ray mammographic image and the nuclear-based medicine gamma image using a single device is made possible largely through the use of a novel, small and movable gamma imaging camera that permits its incorporation into the same table or system as that currently utilized to obtain X-ray based mammographic images for localization of lesions.
Methods for spatial localization in NMR
Rath, A.R.
1985-01-01
Several unique coil configurations were developed that have applications in nuclear magnetic resonance. These include a number of designs appropriate for use as rf surface coils, and two configurations developed as NMR magnets. The magnetic field profiles were calculated for each of these designs, from which field strength and homogeneity information were obtained. The rf coil configurations modelled include the opposed loop, opposed half loop, bicycle wheel, opposed bicycle wheel, and semi-toroid. The opposed loop design was studied in detail in terms of the theoretical spatial sensitivity and selectivity it offers. A number of NMR experiments were performed to test the validity of these theoretical calculations. This configuration produces a field that is substantially reduced near the coil itself, compared with the field produced by a single loop surface coil, but that rises to a maximum along the coil axis yielding a somewhat homogeneous region that may be used to achieve a degree of spatial localization. Several comparison schemes are used to evaluate the relative advantages and disadvantages of both the single loop and the opposed loop coil. The opposed coil concept also has been applied to the design of magnets. The results of calculations on the homogeneity and field strength possible with an opposed solenoid magnet are presented.
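The axial field profiles the thesis computes follow from the standard closed-form Biot-Savart result for a circular loop; an opposed pair is then just a superposition with opposite currents. The sketch below is a simplified coaxial caricature of the opposed-loop geometry (the actual surface-coil arrangement analysed in the thesis may differ), but it reproduces the qualitative feature that the field is suppressed by cancellation in symmetric regions.

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def loop_axis_field(z, radius, current, z_loop):
    """On-axis magnetic field (T) of a circular current loop of the given
    radius centred on the axis at z_loop: the closed-form Biot-Savart
    result B = mu0*I*a^2 / (2*(a^2 + d^2)^(3/2))."""
    d = z - z_loop
    return MU0 * current * radius ** 2 / (2.0 * (radius ** 2 + d ** 2) ** 1.5)

def opposed_loop_field(z, radius, current, separation):
    """Axial field of an opposed (anti-parallel) coaxial loop pair placed
    at z = -/+ separation/2: superposition of the two single-loop fields
    with opposite current sense."""
    return (loop_axis_field(z, radius, current, -separation / 2)
            - loop_axis_field(z, radius, current, +separation / 2))
```

By symmetry the opposed pair's field vanishes exactly at the midpoint and changes sign across it, which is the kind of spatial selectivity the opposed-coil designs exploit.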
Global/local methods research using the CSM testbed
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. Hayden, Jr.; Thompson, Danniella M.
1990-01-01
Research activities in global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.
Passive localization in ocean acoustics: A model-based approach
Candy, J.V.; Sullivan, E.J.
1995-09-01
A model-based approach is developed to solve the passive localization problem in ocean acoustics using the state-space formulation for the first time. It is shown that the inherent structure of the resulting processor consists of a parameter estimator coupled to a nonlinear optimization scheme. The parameter estimator is designed using the model-based approach in which an ocean acoustic propagation model is used in developing the model-based processor required for localization. Recall that model-based signal processing is a well-defined methodology enabling the inclusion of environmental (propagation) models, measurement (sensor arrays) models, and noise (shipping, measurement) models into a sophisticated processing algorithm. Here the parameter estimator, or more appropriately the model-based identifier (MBID), is designed for a propagation model developed from a shallow water ocean experiment. After simulation, it is then applied to a set of experimental data, demonstrating the applicability of this approach. © 1995 Acoustical Society of America.
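The flavour of model-based localization, matching measured array data against replicas predicted by a propagation model over candidate source positions, can be sketched with a deliberately crude free-space model and a Bartlett-style normalized correlation. This is only the generic matched-field idea; the paper's processor is a state-space identifier with a genuine ocean-acoustic propagation model, not a grid search.

```python
import cmath, math

def green(src, rcv, k):
    """Free-space point-source replica at wavenumber k: spherical spreading
    with phase (a crude stand-in for an ocean propagation model)."""
    r = math.dist(src, rcv)
    return cmath.exp(1j * k * r) / r

def bartlett_localize(measured, receivers, candidates, k):
    """Grid-search localization: correlate measured complex pressures with
    model replicas over candidate source positions and return the position
    with the highest normalized match (Bartlett processor)."""
    best, best_pos = -1.0, None
    for cand in candidates:
        w = [green(cand, r, k) for r in receivers]
        num = abs(sum(m * wi.conjugate() for m, wi in zip(measured, w))) ** 2
        den = (sum(abs(wi) ** 2 for wi in w)
               * sum(abs(m) ** 2 for m in measured))
        if num / den > best:
            best, best_pos = num / den, cand
    return best_pos

# Synthetic demo: a vertical array, a source at (200 m range, 30 m depth),
# and noiseless data generated by the same model used for the replicas.
k = 0.5
receivers = [(0.0, z) for z in (10.0, 20.0, 30.0, 40.0, 50.0)]
true_src = (200.0, 30.0)
measured = [green(true_src, r, k) for r in receivers]
candidates = [(x, z) for x in (150.0, 200.0, 250.0) for z in (20.0, 30.0, 40.0)]
est = bartlett_localize(measured, receivers, candidates, k)
```

Replacing the grid search with a recursive parameter estimator driven by the state-space propagation model is precisely the step that turns this into the paper's MBID.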
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
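The subspace projection underlying this family of methods can be sketched numerically. Below is a minimal classic-MUSIC example (not FINES itself) on a synthetic 8-sensor uniform linear array; the array geometry, source angles, and noise level are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Classic MUSIC pseudospectrum: project candidate steering vectors
    onto the estimated noise-only subspace and invert the residual norm.
    FINES would replace En below with a small, region-tuned vector set."""
    w, V = np.linalg.eigh(R)                 # eigenvalues ascending
    En = V[:, : R.shape[0] - n_sources]      # noise-subspace basis
    P = En @ En.conj().T                     # projector onto noise subspace
    num = np.einsum("ij,jk,ik->i", steering.conj(), P, steering).real
    return 1.0 / num

# Toy example: 8-sensor uniform linear array, two uncorrelated sources.
rng = np.random.default_rng(0)
M, snap = 8, 400
true_deg = np.array([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(true_deg))))
S = rng.standard_normal((2, snap)) + 1j * rng.standard_normal((2, snap))
N = 0.1 * (rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap)))
X = A @ S + N
R = X @ X.conj().T / snap                    # sample covariance

grid_deg = np.linspace(-90.0, 90.0, 721)
steer = np.exp(1j * np.pi * np.outer(np.sin(np.deg2rad(grid_deg)), np.arange(M)))
spec = music_spectrum(R, steer, n_sources=2)
est = grid_deg[np.argmax(spec)]              # strongest pseudospectrum peak
```

At this signal-to-noise ratio the strongest peak lands on one of the two true angles to within the 0.25-degree grid spacing.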
NASA Astrophysics Data System (ADS)
Wang, Wei-Zong; Rong, Ming-Zhe; Yang, Fei; Wu, Yi
2014-03-01
The transport coefficients of high-temperature sulfur hexafluoride (SF6) plasmas in local thermodynamic equilibrium are calculated using collision integrals derived in a phenomenological approach. This approach can be a valuable tool for computing complete data sets for complex mixtures, including interactions that are difficult to handle with accurate multipotential methods. A systematic comparison with transport coefficients obtained from an older data set, together with experimental tests, is performed to check the reliability of the proposed approach in evaluating transport cross sections.
AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT
A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...
Localization and cooperative communication methods for cognitive radio
NASA Astrophysics Data System (ADS)
Duval, Olivier
We study localization of nearby nodes and cooperative communication for cognitive radios. Cognitive radios sensing their environment to estimate the channel gain between nodes can cooperate and adapt their transmission power to maximize the capacity of the communication between two nodes. We study the end-to-end capacity of a cooperative relaying scheme using orthogonal frequency-division multiplexing (OFDM), under power constraints for both the base station and the relay station. The relay uses amplify-and-forward and decode-and-forward cooperative relaying techniques to retransmit messages on a subset of the available subcarriers. The power used in the base station and relay station transmitters is allocated to maximize the overall system capacity. The subcarrier selection and power allocation are obtained from convex optimization formulations and an iterative algorithm. Additionally, decode-and-forward relaying schemes are allowed to pair source and relayed subcarriers to further increase the capacity of the system. The proposed techniques outperform non-selective relaying schemes over a range of relay power budgets. Cognitive radios can be used for opportunistic access of the radio spectrum by detecting spectrum holes left unused by licensed primary users. We introduce a spectrum hole detection approach that combines blind modulation classification, angle of arrival estimation, and number of sources detection. We perform eigenspace analysis to determine the number of sources and estimate their angles of arrival (AOA). In addition, we classify detected sources as primary or secondary users through their distinct second-order one-conjugate cyclostationarity features. Extensive simulations indicate that the proposed system identifies and locates individual sources correctly, even at -4 dB signal-to-noise ratio (SNR). In environments with a high density of scatterers, several wireless channels experience nonline-of-sight (NLOS
EPR-based approach for the localization of paramagnetic metal ions in biomolecules.
Abdullin, Dinar; Florin, Nicole; Hagelueken, Gregor; Schiemann, Olav
2015-02-01
Metal ions play an important role in the catalysis and folding of proteins and oligonucleotides. Their localization within the three-dimensional fold of such biomolecules is therefore an important goal in understanding structure-function relationships. A trilateration approach for the localization of metal ions by means of long-range distance measurements based on electron paramagnetic resonance (EPR) is introduced. The approach is tested on the Cu(2+) center of azurin, and factors affecting the precision of the method are discussed. PMID:25522037
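The trilateration step can be illustrated with a linearized least-squares solver. The anchor coordinates and distances below are synthetic; in the paper's setting the anchors would be spin-label positions and the distances would come from EPR dipolar measurements:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares trilateration: locate a point from its
    distances to known anchor positions. Subtracting the first sphere
    equation from the others turns the problem into a linear system."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four non-coplanar anchors and a known test point (synthetic numbers).
anchors = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                    [0.0, 3.0, 0.0], [0.0, 0.0, 3.0]])
target = np.array([1.0, 1.2, 0.7])
d = np.linalg.norm(anchors - target, axis=1)
est = trilaterate(anchors, d)
```

With exact distances the point is recovered exactly; with experimental distance errors the least-squares residual gives a handle on the precision discussed in the abstract.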
A Locally-Exact Homogenization Approach for Periodic Heterogeneous Materials
Drago, Anthony S.; Pindera, Marek-Jerzy
2008-02-15
Elements of the homogenization theory are utilized to develop a new micromechanics approach for unit cells of periodic heterogeneous materials based on locally-exact elasticity solutions. Closed-form expressions for the homogenized moduli of unidirectionally-reinforced heterogeneous materials are obtained in terms of Hill's strain concentration matrices valid under arbitrary combined loading, which yield the homogenized Hooke's law. Results for simple unit cells with off-set fibers, which require the use of periodic boundary conditions, are compared with corresponding finite-element results demonstrating excellent correlation.
Comparison of local grid refinement methods for MODFLOW
Mehl, S.; Hill, M.C.; Leake, S.A.
2006-01-01
Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).
Stochastic thermodynamics of reactive systems: An extended local equilibrium approach
NASA Astrophysics Data System (ADS)
De Decker, Yannick; Derivaux, Jean-François; Nicolis, Grégoire
2016-04-01
The recently developed extended local equilibrium approach to stochastic thermodynamics is applied to reactive systems. The properties of the fluctuating entropy and entropy production are analyzed for general linear and for prototypical nonlinear kinetic processes. It is shown that nonlinear kinetics typically induces deviations of the mean entropy production from its value in the deterministic (mean-field) limit. The probability distributions around the mean are derived and shown to qualitatively differ in thermodynamic equilibrium, under nonequilibrium conditions and in the vicinity of criticalities associated to the onset of multistability. In each case large deviation-type properties are shown to hold. The results are compared with those of alternative approaches developed in the literature.
Reactive Gas transport in soil: Kinetics versus Local Equilibrium Approach
NASA Astrophysics Data System (ADS)
Geistlinger, Helmut; Jia, Ruijan
2010-05-01
Gas transport through the unsaturated soil zone was studied using an analytical solution of the gas transport model that is mathematically equivalent to the Two-Region model. The gas transport model includes diffusive and convective gas fluxes, interphase mass transfer between the gas and water phase, and biodegradation. The influence of non-equilibrium phenomena, spatially variable initial conditions, and transient boundary conditions are studied. The objective of this paper is to compare the kinetic approach for interphase mass transfer with the standard local equilibrium approach and to find conditions and time-scales under which the local equilibrium approach is justified. The time-scale of investigation was limited to the day-scale, because this is the relevant scale for understanding gas emission from the soil zone with transient water saturation. For the first time a generalized mass transfer coefficient is proposed that justifies the often used steady-state Thin-Film mass transfer coefficient for small and medium water-saturated aggregates of about 10 mm. The main conclusion from this study is that non-equilibrium mass transfer depends strongly on the temporal and small-scale spatial distribution of water within the unsaturated soil zone. For regions with low water saturation and small water-saturated aggregates (radius about 1 mm) the local equilibrium approach can be used as a first approximation for diffusive gas transport. For higher water saturation and medium radii of water-saturated aggregates (radius about 10 mm) and for convective gas transport, the non-equilibrium effect becomes more and more important if the hydraulic residence time and the Damköhler number decrease. Relative errors can range up to 100% and more. While for medium radii the local equilibrium approach describes the main features both of the spatial concentration profile and the time-dependence of the emission rate, it fails completely for larger aggregates (radius about 100 mm
Local knowledge in community-based approaches to medicinal plant conservation: lessons from India
Shukla, Shailesh; Gardner, James
2006-01-01
Background Community-based approaches to conservation of natural resources, in particular medicinal plants, have attracted the attention of governments, non-governmental organizations and international funding agencies. This paper highlights the community-based approaches used by an Indian NGO, the Rural Communes Medicinal Plant Conservation Centre (RCMPCC). The RCMPCC recognized and legitimized the role of local medicinal knowledge along with other knowledge systems to a wider audience, i.e. higher levels of government. Methods Besides a review of relevant literature, the research used a variety of qualitative techniques, such as semi-structured, in-depth interviews and participant observations in one of the project sites of the RCMPCC. Results The review of local medicinal plant knowledge systems reveals that even though medicinal plants and associated knowledge systems (particularly local knowledge) are gaining wider recognition at the global level, efforts to recognize and promote the un-codified folk systems of medicinal knowledge are still inadequate. In a country like India, such neglect is evident in the lack of legal recognition and supporting policies. On the other hand, community-based approaches like local healers' workshops or village biologist programs implemented by the RCMPCC are useful in combining both local (folk and codified) and formal systems of medicine. Conclusion Despite the high reliance on local medicinal knowledge systems for health needs in India, the formal policies and national support structures are inadequate for traditional systems of medicine and almost absent for folk medicine. On the other hand, NGOs like the RCMPCC have demonstrated that community-based and local approaches such as local healers' workshops and village biologist programs can synergistically forge linkages between local knowledge and the formal sciences (in this case botany and ecology) and generate positive impacts at various levels. PMID:16603082
A Cross-Site Visual Localization Method for Yutu Rover
NASA Astrophysics Data System (ADS)
Wan, W.; Liu, Z.; Di, K.; Wang, B.; Zhou, J.
2014-04-01
Localization of the rover is critical to support science and engineering operations in planetary rover missions, such as rover traverse planning and hazard avoidance. It is desirable for a planetary rover to have a visual localization capability with a high degree of automation and quick turnaround time. In this research, we developed a visual localization method for lunar rovers, which is capable of deriving accurate localization results from cross-site stereo images. Tie points are searched in correspondent areas predicted by initial localization results and determined by the ASIFT matching algorithm. Accurate localization results are derived from bundle adjustment based on an image network constructed by the tie points. In order to investigate the performance of the proposed method, a theoretical accuracy analysis is carried out by means of error propagation principles. Field experiments were conducted to verify the effectiveness of the proposed method in practical applications. Experimental results show that the proposed method provides more accurate localization results (1%-4%) than dead reckoning. After further validation and enhancement, the developed rover localization method was successfully used in Chang'e-3 mission operations.
Adaptive windowed range-constrained Otsu method using local information
NASA Astrophysics Data System (ADS)
Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie
2016-01-01
An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed; therein, the influences of global and local thresholding on image segmentation are compared. Second, two methods that adaptively change the size of the local window according to local information are proposed, and their characteristics are analyzed; thereby, the number of edge pixels in the local window of the binarized variance image is employed to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and the distance-regularized level set evolution, is demonstrated. Experiments validate that the proposed method keeps more details and achieves a much more satisfactory area overlap measure than the other conventional methods.
NASA Astrophysics Data System (ADS)
Loukianov, Andrey A.; Sugisaka, Masanori
This paper presents a vision and landmark based approach to improve the efficiency of probability grid Markov localization for mobile robots. The proposed approach uses visual landmarks that can be detected by a rotating video camera on the robot. We assume that visual landmark positions in the map are known and that each landmark can be assigned to a certain landmark class. The method uses classes of observed landmarks and their relative arrangement to select regions in the robot posture space where the location probability density function is to be updated. Subsequent computations are performed only in these selected update regions thus the computational workload is significantly reduced. Probabilistic landmark-based localization method, details of the map and robot perception are discussed. A technique to compute the update regions and their parameters for selective computation is introduced. Simulation results are presented to show the effectiveness of the approach.
NASA Astrophysics Data System (ADS)
Mustafa, Jamal I.; Coh, Sinisa; Cohen, Marvin L.; Louie, Steven G.
2015-10-01
Maximally localized Wannier functions are widely used in electronic structure theory for analyses of bonding, electric polarization, and orbital magnetization, and for interpolation. The state-of-the-art construction is based on the method of Marzari and Vanderbilt. One of its practical difficulties is guessing functions (initial projections) that approximate the final Wannier functions. Here we present an approach based on optimized projection functions that can construct maximally localized Wannier functions without a guess. We describe and demonstrate this approach on several realistic examples.
A Hartree-Fock study of the confined helium atom: Local and global basis set approaches
NASA Astrophysics Data System (ADS)
Young, Toby D.; Vargas, Rubicelia; Garza, Jorge
2016-02-01
Two different basis set methods are used to calculate atomic energy within Hartree-Fock theory. The first is a local basis set approach using high-order real-space finite elements and the second is a global basis set approach using modified Slater-type orbitals. These two approaches are applied to the confined helium atom and are compared by calculating one- and two-electron contributions to the total energy. As a measure of the quality of the electron density, the cusp condition is analyzed.
Multilevel local refinement and multigrid methods for 3-D turbulent flow
Liao, C.; Liu, C.; Sung, C.H.; Huang, T.T.
1996-12-31
A numerical approach based on multigrid, multilevel local refinement, and preconditioning methods for solving incompressible Reynolds-averaged Navier-Stokes equations is presented. 3-D turbulent flow around an underwater vehicle is computed using 3 multigrid levels and 2 local refinement grid levels. The global grid is 24 x 8 x 12; the first patch is 40 x 16 x 20 and the second patch is 72 x 32 x 36. Fourth-order artificial dissipation is used for numerical stability, and the conservative artificial compressibility method is used to further improve convergence. To improve accuracy at the coarse/fine grid interface of the local refinement, a flux interpolation method is used at the refined grid boundary. The numerical results are in good agreement with experimental data. The local refinement improves the prediction accuracy significantly, and the flux interpolation method keeps the scheme conservative on the composite grid, further improving the prediction accuracy.
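The multigrid component can be illustrated with a textbook 1-D example. This is a generic damped-Jacobi V-cycle for the Poisson equation, not the paper's 3-D Navier-Stokes solver; grid sizes and smoothing counts are illustrative:

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One recursive V-cycle for the 1D Poisson problem -u'' = f with
    homogeneous Dirichlet boundaries: smooth, restrict the residual,
    solve the coarse-grid correction, prolongate, smooth again."""
    def jacobi(u, iters, w=2.0 / 3.0):       # damped-Jacobi smoother
        for _ in range(iters):
            u[1:-1] = (1 - w) * u[1:-1] + \
                w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u
    if u.size <= 3:                          # coarsest grid: solve exactly
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u
    u = jacobi(u, n_smooth)
    r = np.zeros_like(u)                     # residual on the fine grid
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = r[::2].copy()                       # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)                     # linear-interp. prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, n_smooth)

# Model problem: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(20):
    u = v_cycle(u, f, 1.0 / n)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

After a handful of cycles the algebraic error is negligible and only the O(h^2) discretization error remains.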
A Non-Local Low-Rank Approach to Enforce Integrability.
Badri, Hicham; Yahia, Hussein
2016-08-01
We propose a new approach to enforce integrability using recent advances in non-local methods. Our formulation consists of a sparse gradient data-fitting term to handle outliers together with a gradient-domain non-local low-rank prior. This regularization has two main advantages: 1) the low-rank prior ensures similarity between non-local gradient patches, which helps recover high-quality clean patches from severe outlier corruption, and 2) the low-rank prior efficiently reduces dense noise, as has been shown in recent image restoration works. We propose an efficient solver for the resulting optimization formulation using alternate minimization. Experiments show that the new method leads to an important improvement compared with previous optimization methods and is able to efficiently handle both outliers and dense noise mixed together. PMID:27214898
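The data-fitting core of integrability enforcement can be sketched with a plain least-squares gradient integrator. This omits both the sparse norm and the non-local low-rank prior that are the paper's actual contributions; the surface below is synthetic:

```python
import numpy as np

def integrate_gradient(gx, gy):
    """Least-squares integration of a (possibly non-integrable) gradient
    field: stack the finite-difference equations z[i,j+1]-z[i,j]=gx and
    z[i+1,j]-z[i,j]=gy and solve them in the least-squares sense."""
    h, w = gx.shape
    idx = lambda i, j: i * w + j
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(h):                   # horizontal finite differences
        for j in range(w - 1):
            rows += [eq, eq]
            cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]
            rhs.append(gx[i, j])
            eq += 1
    for i in range(h - 1):               # vertical finite differences
        for j in range(w):
            rows += [eq, eq]
            cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]
            rhs.append(gy[i, j])
            eq += 1
    A = np.zeros((eq, h * w))
    A[rows, cols] = vals
    z, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return z.reshape(h, w)

# Round trip on a small synthetic surface (recovery is up to a constant).
zt = np.add.outer(np.linspace(0, 1, 6) ** 2, np.sin(np.linspace(0, 2, 7)))
gx = np.zeros_like(zt); gx[:, :-1] = np.diff(zt, axis=1)
gy = np.zeros_like(zt); gy[:-1, :] = np.diff(zt, axis=0)
z = integrate_gradient(gx, gy)
```

Replacing the quadratic penalty with a sparse norm, as the paper does, is what makes the fit robust to gradient outliers.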
Localized Surface Plasmon Resonance Biosensing: Current Challenges and Approaches
Unser, Sarah; Bruzas, Ian; He, Jie; Sagle, Laura
2015-01-01
Localized surface plasmon resonance (LSPR) has emerged as a leader among label-free biosensing techniques in that it offers sensitive, robust, and facile detection. Traditional LSPR-based biosensing utilizes the sensitivity of the plasmon frequency to changes in local index of refraction at the nanoparticle surface. Although surface plasmon resonance technologies are now widely used to measure biomolecular interactions, several challenges remain. In this article, we have categorized these challenges into four categories: improving sensitivity and limit of detection, selectivity in complex biological solutions, sensitive detection of membrane-associated species, and the adaptation of sensing elements for point-of-care diagnostic devices. The first section of this article will involve a conceptual discussion of surface plasmon resonance and the factors affecting changes in optical signal detected. The following sections will discuss applications of LSPR biosensing with an emphasis on recent advances and approaches to overcome the four limitations mentioned above. First, improvements in limit of detection through various amplification strategies will be highlighted. The second section will involve advances to improve selectivity in complex media through self-assembled monolayers, “plasmon ruler” devices involving plasmonic coupling, and shape complementarity on the nanoparticle surface. The following section will describe various LSPR platforms designed for the sensitive detection of membrane-associated species. Finally, recent advances towards multiplexed and microfluidic LSPR-based devices for inexpensive, rapid, point-of-care diagnostics will be discussed. PMID:26147727
Global and Local Sensitivity Analysis Methods for a Physical System
ERIC Educational Resources Information Center
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-01-01
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually their robustness is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow building-independent estimation and lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of RSS-based floor detection algorithms. PMID:27258279
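A weighted-centroid sketch illustrates the flavor of such estimators. The trimming rule and the log-distance path-loss model below are illustrative assumptions, not the paper's exact robustified algorithm:

```python
import numpy as np

def robust_weighted_centroid(ap_pos, rss_dbm, trim=0.2):
    """Trimmed weighted centroid: drop the weakest RSS readings (a crude
    robustification against NLOS outliers), then weight AP positions by
    linear-scale received power."""
    rss = np.asarray(rss_dbm, dtype=float)
    keep = rss >= np.quantile(rss, trim)     # discard weakest readings
    w = 10.0 ** (rss[keep] / 10.0)           # dBm -> linear power weights
    return (w[:, None] * ap_pos[keep]).sum(axis=0) / w.sum()

# Synthetic scene: four APs on different floors, MS on the z = 3 m floor.
aps = np.array([[0.0, 0.0, 3.0], [10.0, 10.0, 0.0],
                [10.0, 0.0, 6.0], [0.0, 10.0, 9.0]])
ms = np.array([1.0, 1.0, 3.0])
d = np.linalg.norm(aps - ms, axis=1)
rss = -40.0 - 20.0 * np.log10(d)             # log-distance path-loss model
est = robust_weighted_centroid(aps, rss)
floor = int(round(est[2] / 3.0))             # assuming 3 m per floor
```

The vertical component of the centroid is then quantized to a floor index, which is the floor-detection step the abstract refers to.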
Damping filter method for obtaining spatially localized solutions.
Teramura, Toshiki; Toh, Sadayoshi
2014-05-01
Spatially localized structures are key components of turbulence and other spatiotemporally chaotic systems. From a dynamical systems viewpoint, it is desirable to obtain corresponding exact solutions, though their existence is not guaranteed. A damping filter method is introduced to obtain variously localized solutions and adapted in two typical cases. This method introduces a spatially selective damping effect to make a good guess at the exact solution, and we can obtain an exact solution through a continuation with the damping amplitude. The first target is a steady solution to the Swift-Hohenberg equation, which is a representative of bistable systems in which localized solutions coexist and a model for spanwise-localized cases. Not only solutions belonging to the well-known snaking branches but also those belonging to isolated branches known as "isolas" are found with continuation paths between them in phase space extended with the damping amplitude. This indicates that this spatially selective excitation mechanism has an advantage in searching spatially localized solutions. The second target is a spatially localized traveling-wave solution to the Kuramoto-Sivashinsky equation, which is a model for streamwise-localized cases. Since the spatially selective damping effect breaks Galilean and translational invariances, the propagation velocity cannot be determined uniquely while the damping is active, and a singularity arises when these invariances are recovered. We demonstrate that this singularity can be avoided by imposing a simple condition, and a localized traveling-wave solution is obtained with a specific propagation speed. PMID:25353864
Community-Based Outdoor Education Using a Local Approach to Conservation
ERIC Educational Resources Information Center
Maeda, Kazushi
2005-01-01
Local people of a community interact with nature in a way that is mediated by their local cultures, and they shape their own environment accordingly. We need a local approach to conservation for the local environment, in addition to the political or technological approaches to global environmental problems such as the destruction of the ozone layer or global warming.…
The auxiliary Hamiltonian approach and its generalization to non-local self-energies
NASA Astrophysics Data System (ADS)
Balzer, Karsten
2016-03-01
The recently introduced auxiliary Hamiltonian approach [Balzer K and Eckstein M 2014 Phys. Rev. B 89 035148] maps the problem of solving the two-time Kadanoff-Baym equations onto a noninteracting auxiliary system with additional bath degrees of freedom. While the original paper restricts the discussion to spatially local self-energies, we show that there exists a rather straightforward generalization to treat also non-local correlation effects. The only drawback is the loss of time causality due to a combined singular value and eigendecomposition of the two-time self-energy, the application of which prevents one from establishing the self-consistency directly on the time step. For derivation and illustration of the method, we consider the Hubbard model in one dimension and study the decay of the Néel state in the weak-coupling regime, using the local and non-local second-order Born approximation.
Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem
2016-06-01
Emission tomographic image reconstruction is an ill-posed problem due to limited and noisy data and various image-degrading effects affecting the data, which leads to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered a better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise, but they produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and emission computed tomography in particular, for improved quality of the resultant images. PMID:26714680
Multi-Scale Jacobi Method for Anderson Localization
NASA Astrophysics Data System (ADS)
Imbrie, John Z.
2015-11-01
A new KAM-style proof of Anderson localization is obtained. A sequence of local rotations is defined, such that off-diagonal matrix elements of the Hamiltonian are driven rapidly to zero. This leads to the first proof via multi-scale analysis of exponential decay of the eigenfunction correlator (this implies strong dynamical localization). The method has been used in recent work on many-body localization (Imbrie in On many-body localization for quantum spin chains,
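The idea of driving off-diagonal matrix elements to zero by a sequence of local rotations is, in its classical single-scale form, Jacobi diagonalization, sketched below on a small random symmetric matrix (an illustration of the mechanism, not the paper's multi-scale KAM construction):

```python
import numpy as np

def jacobi_diagonalize(H, sweeps=10):
    """Cyclic Jacobi diagonalization: apply 2x2 rotations, each zeroing
    one off-diagonal pair, until the matrix is (numerically) diagonal."""
    A = np.array(H, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # Angle that zeroes A[p, q] after the similarity transform.
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = s, -s
                A = G.T @ A @ G
                V = V @ G
    return np.diag(A), V

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
H = 0.5 * (M + M.T)                      # random symmetric "Hamiltonian"
evals, evecs = jacobi_diagonalize(H)
```

Each rotation may reintroduce small off-diagonal entries elsewhere, but repeated sweeps drive them all rapidly to zero, which is the behavior the multi-scale analysis controls rigorously.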
New approach to workpiece localization in subaperture stitching interferometric testing
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Zhao, Hong; Jiang, Tao; Li, Jinjun; Zhou, Xiang; Zhang, Lu
2009-06-01
Large aperture optics are used more and more widely in modern optical systems, but testing their surface quality is very difficult. The circular sub-aperture stitching (CSAS) testing method can effectively extend the interferometer's vertical dynamic range and enhance its lateral resolution, so it may be the best solution for testing large aperture optics. The CSAS method can in fact be viewed as a special workpiece localization problem: if the pose data of all sub-apertures are accurate enough, the sub-aperture data can be directly stitched together to create a map of the full aperture. In this paper, a CSAS system is introduced. Its motion mechanism has seven degrees of freedom, which complicates obtaining accurate pose data for the optics as motion errors accumulate. A stereovision system is therefore added; with an appropriate scheme and algorithm, it directly yields accurate pose data for the optics and thus provides an effective initial value for the stitching algorithm. Finally, a 150 mm flat and a 100 mm convex sphere are tested using this method, and the experimental results are given to show the effectiveness of the method and the efficiency of the CSAS system.
Multiblock approach for the passive scalar thermal lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Huang, Rongzong; Wu, Huiying
2014-04-01
A multiblock approach for the passive scalar thermal lattice Boltzmann method (TLBM) with a multiple-relaxation-time collision scheme is proposed based on the Chapman-Enskog analysis. The interaction between blocks is executed directly in moment space and an external force term is considered. Theoretical analysis shows that all the nonequilibrium parts of the nonconserved moments should be rescaled, while the nonequilibrium parts of the conserved moments can be calculated directly. Moreover, a local pseudoparticle-based scheme for computing the heat flux is proposed that does not require evaluating the temperature gradient with a finite-difference scheme. To validate the multiblock approach and the local heat flux scheme, thermal Couette flow with wall injection is simulated with good results, which show that the multiblock approach does not deteriorate the convergence rate of TLBM and that the local heat flux scheme has a second-order convergence rate. The approach is further applied to the simulation of natural convection in a square cavity with Rayleigh numbers up to 10^9.
A novel local learning based approach with application to breast cancer diagnosis
NASA Astrophysics Data System (ADS)
Xu, Songhua; Tourassi, Georgia
2012-03-01
In this paper, we introduce a new local learning based approach and apply it to the well-studied problem of breast cancer diagnosis using BIRADS-based mammographic features. To learn from our clinical dataset the latent relationship between these features and the breast biopsy result, our method first dynamically partitions the whole sample population into multiple sub-population groups by stochastically searching the sample population clustering space. Each clustering scheme encountered in this online search is then used to create a sample population partition plan. For every resultant sub-population group identified according to a partition plan, our method trains a dedicated local learner to capture the underlying data relationship. In our study, we adopt the linear logistic regression model as the base learner. This choice is made both because of the well-understood linear nature of the problem, compellingly revealed by a rich body of prior studies, and because of the computational efficiency of linear logistic regression--the latter allows our local learning method to search the sample population clustering space more effectively. Using a database of 850 biopsy-proven cases, we compared the performance of our method with a large collection of publicly available state-of-the-art machine learning methods and demonstrated its performance advantage with statistical significance.
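The partition-then-fit idea described above can be sketched as follows. This toy version replaces the paper's stochastic clustering search with a fixed single-feature threshold partition, and uses a plain gradient-descent logistic regression as the base learner; all names and parameters are illustrative.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain per-sample gradient-descent logistic regression
    (standing in for the paper's base learner)."""
    w = [0.0] * (len(X[0]) + 1)  # last entry is the bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - yi
            for j in range(len(xi)):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g
    return w

def predict(w, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
    return 1 if z > 0 else 0

def local_learning(X, y, split_feature=0, threshold=0.5):
    """Toy local learning: partition the samples into sub-populations
    (here by thresholding one feature) and fit a dedicated logistic
    regression to each group; classification routes each query to the
    learner of its group."""
    groups = {}
    for xi, yi in zip(X, y):
        key = xi[split_feature] >= threshold
        groups.setdefault(key, ([], []))
        groups[key][0].append(xi)
        groups[key][1].append(yi)
    models = {k: train_logistic(gx, gy) for k, (gx, gy) in groups.items()}
    def classify(xi):
        return predict(models[xi[split_feature] >= threshold], xi)
    return classify
```

On an XOR-like dataset, a single global logistic regression cannot fit the labels, while two local learners classify the training set perfectly, which is the core motivation for partitioning.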
Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids
Stoyanov, Miroslav K
2013-09-01
We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grids points and functions that are induced from a one dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.
NASA Astrophysics Data System (ADS)
Rad, Jamal Amani; Parand, Kourosh; Abbasbandy, Saeid
2015-05-01
For the first time in the mathematical finance field, we propose local weak form meshless methods for option pricing; in particular, we select and analyze two such schemes: the local boundary integral equation method (LBIE) based on moving least squares approximation (MLS), and the local radial point interpolation method (LRPI) based on Wu's compactly supported radial basis functions (WCS-RBFs). LBIE and LRPI are truly meshless methods, because a traditional non-overlapping, continuous mesh is required neither for the construction of the shape functions nor for the integration over the local sub-domains. In this work, the American option, which is a free boundary problem, is reduced to a problem with a fixed boundary using a Richardson extrapolation technique. The θ-weighted scheme is then employed for the time derivative. Stability of the methods is analyzed by the matrix method; based on this analysis, the methods are unconditionally stable for the implicit Euler (θ = 0) and Crank-Nicolson (θ = 0.5) schemes. It should be noted that the LBIE and LRPI schemes lead to banded, sparse system matrices, so we use a powerful iterative algorithm, the bi-conjugate gradient stabilized method (BiCGSTAB), to solve the resulting systems. Numerical experiments show that the LBIE and LRPI approaches are extremely accurate and fast.
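The θ-weighted time discretization mentioned above can be illustrated on a simpler model problem. The sketch below applies the θ-scheme to the 1D heat equation (a stand-in for the option-pricing PDE), following the abstract's convention that θ = 0 is implicit Euler and θ = 0.5 is Crank-Nicolson, and solves the resulting banded (tridiagonal) system with the Thomas algorithm; the function name and grid setup are illustrative.

```python
def theta_step(u, dt, dx, theta=0.5):
    """One step of the theta-weighted scheme for u_t = u_xx with zero
    Dirichlet ends. Solves (I - (1-theta)*dt*L) u_new = (I + theta*dt*L) u_old,
    where L is the standard second-difference operator, via the Thomas
    algorithm for the tridiagonal system."""
    n = len(u)
    r = dt / dx**2
    a_impl = (1 - theta) * r   # weight of the implicit part
    a_expl = theta * r         # weight of the explicit part
    # right-hand side: explicit part of the operator
    rhs = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        rhs.append(u[i] + a_expl * (left - 2 * u[i] + right))
    # tridiagonal system: (-a_impl, 1 + 2*a_impl, -a_impl)
    sub = [-a_impl] * n
    diag = [1 + 2 * a_impl] * n
    sup = [-a_impl] * n
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x
```

A sine initial profile decays by the expected amplification factor under Crank-Nicolson, and implicit Euler remains stable even for a very large time step, consistent with the unconditional stability claimed for both schemes.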
A novel approach for SEMG signal classification with adaptive local binary patterns.
Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan
2016-07-01
Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP builds on the local binary pattern (LBP), an image processing method, and on the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors; similarly, in 1D-LBP, each sample in the raw signal is judged against its neighbors. 1D-LBP extracts features based on local changes in the signal and therefore has high potential for medical applications: each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed by the position of the sample in the signal, and both LBP and 1D-LBP are very sensitive to noise, so their capacity for detecting hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via down-sampling and smoothing coefficients, which greatly increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than those obtained with popular feature extraction approaches and than results reported in the literature, showing that the proposed method can be employed to investigate SEMG signals. In summary, this work develops an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals. PMID:26718556
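The base 1D-LBP operator described above is simple to state in code. The sketch below implements the plain (non-adaptive) 1D-LBP with p neighbors on each side and a histogram feature vector; the adaptive down-sampling/smoothing step of aLBP is deliberately omitted, and the bit ordering is an illustrative choice.

```python
def one_d_lbp(signal, p=4):
    """1D local binary pattern: each sample is compared with its p
    preceding and p following neighbours; 'neighbour >= centre' yields
    bit 1, else 0, giving one 2p-bit pattern code per position."""
    codes = []
    for i in range(p, len(signal) - p):
        bits = 0
        neighbours = list(range(i - p, i)) + list(range(i + 1, i + p + 1))
        for b, j in enumerate(neighbours):
            if signal[j] >= signal[i]:
                bits |= 1 << b
        codes.append(bits)
    return codes

def lbp_histogram(codes, p=4):
    """Feature vector: normalised histogram over the 2^(2p) possible codes."""
    hist = [0] * (1 << (2 * p))
    for c in codes:
        hist[c] += 1
    total = len(codes) or 1
    return [h / total for h in hist]
```

A monotone ramp produces one constant code (all "following" bits set), while a peak produces the all-zero code at its apex, showing how local shape maps to pattern codes.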
Intelligent Resource Management for Local Area Networks: Approach and Evolution
NASA Technical Reports Server (NTRS)
Meike, Roger
1988-01-01
The Data Management System network is a complex and important part of manned space platforms. Its efficient operation is vital to crew, subsystems and experiments. AI is being considered to aid in the initial design of the network and to augment the management of its operation. The Intelligent Resource Management for Local Area Networks (IRMA-LAN) project is concerned with the application of AI techniques to network configuration and management. A network simulation was constructed employing real time process scheduling for realistic loads, and utilizing the IEEE 802.4 token passing scheme. This simulation is an integral part of the construction of the IRMA-LAN system. From it, a causal model is being constructed for use in prediction and deep reasoning about the system configuration. An AI network design advisor is being added to help in the design of an efficient network. The AI portion of the system is planned to evolve into a dynamic network management aid. The approach, the integrated simulation, project evolution, and some initial results are described.
Perturbation approach applied to modal diffraction methods.
Bischoff, Joerg; Hehl, Karl
2011-05-01
Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry, and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on a perturbation method. PMID:21532698
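The reuse of known eigensolutions can be sketched with textbook first-order perturbation theory. This is only an illustration of the idea (estimating the spectrum of a slightly changed matrix from the old eigenpairs, avoiding a fresh full eigensolve), not the paper's method, which is more elaborate; it assumes a symmetric matrix with orthonormal eigenvectors.

```python
def perturbed_eigenvalues(eigvals, eigvecs, dA):
    """First-order perturbation update: given eigenpairs of a symmetric
    matrix A (eigvecs holds the eigenvectors as columns), estimate the
    eigenvalues of A + dA as lambda_i + v_i^T dA v_i."""
    n = len(eigvals)
    out = []
    for i in range(n):
        v = [eigvecs[k][i] for k in range(n)]  # i-th eigenvector (column)
        dAv = [sum(dA[r][c] * v[c] for c in range(n)) for r in range(n)]
        out.append(eigvals[i] + sum(v[r] * dAv[r] for r in range(n)))
    return out
```

For a diagonal baseline the update reduces to adding the diagonal of the perturbation, accurate to second order in the off-diagonal coupling.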
A method of periodic pattern localization on document images
NASA Astrophysics Data System (ADS)
Chernov, Timofey S.; Nikolaev, Dmitry P.; Kliatskine, Vitali M.
2015-12-01
Periodic patterns are often present on document images as holograms, watermarks or guilloche elements, which are mostly used for fraud protection. Localization of such patterns lets an embedded OCR system vary its settings depending on pattern presence in particular image regions, and improves the precision of pattern removal so that as much useful data as possible is preserved. Many existing noise detection and removal methods for document images deal with unstructured noise or clutter on documents with a simple background. In this paper we propose a method of periodic pattern localization on document images, based on the discrete Fourier transform, that works well on documents with a complex background.
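The Fourier-based detection principle can be sketched in one dimension: a periodic pattern shows up as a peak in the magnitude spectrum, from which the period follows directly. This is a minimal illustrative sketch (a direct O(n^2) DFT on a single intensity profile), not the paper's 2D localization pipeline.

```python
import math

def dominant_period(row, min_period=2):
    """Find the dominant period in a 1D intensity profile via the
    discrete Fourier transform: a strong periodic component appears as
    a peak in the magnitude spectrum at frequency index k = n / period.
    The DC component is removed so the mean level does not dominate."""
    n = len(row)
    mean = sum(row) / n
    centred = [v - mean for v in row]
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(centred[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centred[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if n / k >= min_period and mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k  # period in samples
```

On a real document image the same peak-picking would be applied to row/column profiles or a 2D FFT, restricted to image regions, to localize the pattern.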
Hashemiyan, Z; Packo, P; Staszewski, W J; Uhl, T
2016-01-01
Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808
A Bayesian network approach for modeling local failure in lung cancer
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam
2011-03-01
Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors in a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had slightly higher performance than the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982
A new approach to the photon localization problem
NASA Technical Reports Server (NTRS)
Han, D.; Kim, Y. S.; Noz, Marilyn E.
1994-01-01
Since wavelets form a representation of the Poincare group, it is possible to construct a localized superposition of light waves with different frequencies in a Lorentz-covariant manner. This localized wavelet satisfies a Lorentz-invariant uncertainty relation, and also the Lorentz-invariant Parseval's relation. A quantitative analysis is given for the difference between photons and localized waves. It is then shown that this localized entity corresponds to a relativistic photon with a sharply defined momentum in the non-localization limit. Waves are not particles. It is confirmed that the wave-particle duality is subject to the uncertainty principle.
A unified approach to global and local beam position feedback
Chung, Y.
1994-08-01
The Advanced Photon Source (APS) will implement both global and local beam position feedback systems to stabilize the particle and X-ray beams for the storage ring. The global feedback system uses 40 BPMs and 40 correctors per plane. Singular value decomposition (SVD) of the response matrix is used for closed orbit correction. The local feedback system uses two X-ray BPMs, two rf BPMs, and the four-magnet local bump to control the angle and displacement of the X-ray beam from a bending magnet or an insertion device. Both the global and local feedback systems are based on digital signal processing (DSP) running at 4-kHz sampling rate with a proportional, integral, and derivative (PID) control algorithm. In this paper, we will discuss resolution of the conflict among multiple local feedback systems due to local bump closure error and decoupling of the global and local feedback systems to maximize correction efficiency. In this scheme, the global feedback system absorbs the local bump closure error and the local feedback systems compensate for the effect of global feedback on the local beamlines. The required data sharing between the global and local feedback systems is done through the fiber-optically networked reflective memory.
The local projection in the density functional theory plus U approach: A critical assessment
NASA Astrophysics Data System (ADS)
Wang, Yue-Chao; Chen, Ze-Hua; Jiang, Hong
2016-04-01
Density-functional theory plus the Hubbard U correction (DFT + U) is widely used in first-principles studies of strongly correlated systems, as it can give a qualitatively (and sometimes semi-quantitatively) correct description of the energetic and structural properties of many strongly correlated systems at a computational cost similar to that of the local density approximation or generalized gradient approximation. On the other hand, the DFT + U approach is limited both theoretically and practically in several important aspects. In particular, the results of DFT + U often depend on the choice of local orbitals (the local projection) defining the subspace in which the Hubbard U correction is applied. In this work we have systematically investigated the issue of the local projection by considering typical transition metal oxides, β-MnO2 and MnO, and comparing the results obtained from different implementations of DFT + U. We found that the choice of the local projection has significant effects on the DFT + U results, which are more pronounced for systems with stronger covalent bonding (e.g., MnO2) than for those with more ionic bonding (e.g., MnO). These findings can help to clarify some confusion arising from the practical use of DFT + U and may also provide insights for the development of new first-principles approaches beyond DFT + U.
Simulation of geochemical localization using a multi-porosity reactive transport approach
NASA Astrophysics Data System (ADS)
Soler, Joaquim; Luquot, Linda; Martinez-Perez, Laura; Saaltink, Maarten; De Gaspari, Francesca; Carrera, Jesus
2016-04-01
Results of reactive transport laboratory experiments often suggest that pore scale heterogeneity induces localization of reactions (the generation of local micro environments favoring reactions that would not occur in a well-mixed Representative Elementary Volume, REV). Multi-Rate Mass Transfer (MRMT), which has been employed to reproduce hydrodynamic heterogeneity, may also be used to simulate geochemical localization. We extended the Water Mixing Approach (WMA) designed for single porosity media, to simulate chemical reactions caused by the mixing of mobile and immobile zones. The method is termed Multi-Rate Water Mixing (MRWM). The MRWM approach was employed to simulate laboratory experiments of CO2-rich brine transport through carbonate rich samples (Luquot et al. 2016). Chemical heterogeneity in space was reproduced by varying the mineral assemblages in immobile regions. This enabled us to reproduce the generally low pH environment while allowing for high pH local zones required for the localized precipitation of kaolinite, which has been observed in reality, but cannot be modeled with conventional reactive transport formulations. The resulting model is very rich, in that it can reproduce a broad range of pore scale processes in a Darcy scale model, and complex, in that the interaction between chemical kinetics and immobile zones physical parameters is non-trivial.
NASA Astrophysics Data System (ADS)
Kópházi, József; Lathouwers, Danny
2015-09-01
In this paper a new method for the discretization of the radiation transport equation is presented, based on a discontinuous Galerkin method in space and angle that allows for local refinement in angle where any spatial element can support its own angular discretization. To cope with the discontinuous spatial nature of the solution, a generalized Riemann procedure is required to distinguish between incoming and outgoing contributions of the numerical fluxes. A new consistent framework is introduced that is based on the solution of a generalized eigenvalue problem. The resulting numerical fluxes for the various possible cases where neighboring elements have an equal, higher or lower level of refinement in angle are derived based on tensor algebra and the resulting expressions have a very clear physical interpretation. The choice of discontinuous trial functions not only has the advantage of easing local refinement, it also facilitates the use of efficient sweep-based solvers due to decoupling of unknowns on a large scale thereby approaching the efficiency of discrete ordinates methods with local angular resolution. The approach is illustrated by a series of numerical experiments. Results show high orders of convergence for the scalar flux on angular refinement. The generalized Riemann upwinding procedure leads to stable and consistent solutions. Further the sweep-based solver performs well when used as a preconditioner for a Krylov method.
Tracking local anesthetic effects using a novel perceptual reference approach.
Ettlin, Dominik A; Lukic, Nenad; Abazi, Jetmir; Widmayer, Sonja; Meier, Michael L
2016-03-01
Drug effects of loco-regional anesthetics are commonly measured by unidimensional pain rating scales. These scales require subjects to transform their perceptual correlates of stimulus intensities onto a visual, verbal, or numerical construct that uses a unitless cognitive reference frame. The conceptual understanding and execution of this magnitude estimation task may vary among individuals and populations. To circumvent inherent shortcomings of conventional experimental pain scales, this study used a novel perceptual reference approach to track subjective sensory perceptions during onset of an analgesic nerve block. In 34 male subjects, nociceptive electric stimuli of 1-ms duration were repetitively applied to left (target) and right (reference) mandibular canines every 5 s for 600 s, with a side latency of 1 ms. Stimulus strength to the target canine was programmed to evoke a tolerable pain intensity perception and remained constant at this level throughout the experiment. A dose of 0.6 ml of articaine 4% was submucosally injected at the left mental foramen. Subjects then reported drug effects by adjusting the stimulus strength (in milliamperes) to the reference tooth, so that the perceived intensity in the reference tooth was equi-intense to the target tooth. Pain and stimulus perception offsets were indicated by subjects. Thus, the current approach for matching the sensory experience in one anatomic location after regional anesthesia allows detailed tracking of evolving perceptual changes in another location. This novel perceptual reference approach facilitates direct and accurate quantification of analgesic effects with high temporal resolution. We propose using this method for future experimental investigations of analgesic/anesthetic drug efficacy. PMID:26792885
Fault Location Methods for Ungrounded Distribution Systems Using Local Measurements
NASA Astrophysics Data System (ADS)
Xiu, Wanjing; Liao, Yuan
2013-08-01
This article presents novel fault location algorithms for ungrounded distribution systems. The proposed methods are capable of locating faults using voltage and current measurements obtained at the local substation. Two types of fault location algorithms, using line-to-neutral and line-to-line measurements, are presented. The network structure and parameters are assumed to be known, with the structure updated based on information obtained from the utility telemetry system. With the help of the bus impedance matrix, local voltage changes due to the fault can be expressed as a function of the fault currents. Since the bus impedance matrix contains information about the fault location, the superimposed voltages at the local substation can be expressed as a function of fault location, from which the fault location can be solved. Simulation studies carried out on a sample distribution power system show that both types of methods yield very accurate fault location estimates.
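The principle of solving for fault location from local measurements can be illustrated with the textbook one-terminal reactance method, a much simplified stand-in for the bus-impedance-matrix formulation above. It assumes a single radial line, a purely resistive fault, and fault current approximately equal to the measured current, so that the imaginary part of the apparent impedance is unaffected by the fault resistance.

```python
def locate_fault(v, i, z_per_km):
    """One-terminal impedance-based fault location on a radial line.
    With V = m*z*I + Rf*I (m = distance, z = line impedance per km,
    Rf = real fault resistance), the reactive part of V/I depends only
    on m, so the distance follows from the measured reactance.
    v, i are complex voltage/current phasors at the substation."""
    z_apparent = v / i              # apparent impedance seen from the bus
    return z_apparent.imag / z_per_km.imag  # distance in km
```

With a known line impedance, a simulated fault at 7 km with 5 Ω fault resistance is recovered exactly, since the fault resistance only shifts the real part of the apparent impedance.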
A special purpose knowledge-based face localization method
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad; Jassim, Sabah
2008-04-01
This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in recent years because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race, and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices under wide variations in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, followed by special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined from color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a level of accuracy that outperforms existing general-purpose face detection methods.
A hybrid approach to detect and localize texts in natural scene images.
Pan, Yi-Feng; Hou, Xinwen; Liu, Cheng-Lin
2011-03-01
Text detection and localization in natural scene images is important for content-based image analysis. The problem is challenging due to complex backgrounds, non-uniform illumination, and variations in text font, size, and line orientation. In this paper, we present a hybrid approach to robustly detect and localize texts in natural scene images. A text region detector is designed to estimate the text existence confidence and scale information in an image pyramid, which helps segment candidate text components by local binarization. To efficiently filter out non-text components, a conditional random field (CRF) model considering unary component properties and binary contextual component relationships with supervised parameter learning is proposed. Finally, text components are grouped into text lines/words with a learning-based energy minimization method. Since all three stages are learning-based, very few parameters require manual tuning. Experimental results on the ICDAR 2005 competition dataset show that our approach yields higher precision and recall than state-of-the-art methods. We also evaluated our approach on a multilingual image dataset with promising results. PMID:20813645
A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis
Xu, Songhua; Tourassi, Georgia
2012-01-01
The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. Our new local learning-based algorithm, which uses linear logistic regression as its base learner, is described. The algorithm performs a stochastic search, driven by a random walk process, until the total allowed computing time is used up, in order to identify the most suitable population subdivision scheme and the corresponding individual base learners. The proposed local learning-based approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnoses (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods. Predictive performance for all classifiers was evaluated using 10-fold cross validation and Receiver Operating Characteristic (ROC) analysis. Figure 1 reports the performance of 54 machine learning methods implemented in the machine learning toolkit Weka (version 3.0). We introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of other well-established machine learning techniques. Our conclusion complements the existing understanding in the machine learning field that local learning may capture the complicated, non-linear relationships exhibited by real-world datasets.
A Multi-species Chemistry Approach of Io's Local Interaction
NASA Astrophysics Data System (ADS)
Dols, V.; Delamere, P.; Bagenal, F.
2006-12-01
The interaction between Io's atmosphere and the plasma torus is a source of energy and momentum, a supply of fresh plasma to the Jovian magnetosphere, and a cause of auroral and radio emissions in Jupiter's atmosphere. Previously, this interaction has been studied extensively with a focus on the electrodynamics in a single-species description. Our approach focuses on the multi-species chemistry as the torus plasma impinges on Io's atomic and molecular neutral atmosphere. We have adapted our physical chemistry model of the Io plasma torus (Delamere and Bagenal, JGR, 2003) to investigate the time evolution of mass and energy for a homogeneous volume (i.e., flux tube) along an ensemble of plasma streamlines past Io. We seek to answer three basic questions: (1) Which neutral species play dominant roles in the multi-species chemistry? (2) Which are the dominant chemical reactions? (3) Where do plasma production and mass loading take place? The model simplifies the electrodynamics of the Io interaction to the analytical description of the plasma flow around a conducting sphere (Io) from Barnett, JGR 1986. Our goal is to explore the sensitivity of the plasma production and mass loading rates to the model parameters (inflowing plasma conditions, neutral cloud composition and distribution), which we compare with the Galileo flyby data. We emphasize the dominant role of SO2 in the multi-species chemistry and of the SO2+/SO2 charge exchange (Johnson, priv. comm.), and map the regions of mass loading and pick-up current. The model suggests that most of the plasma supplied to the torus comes from the extended neutral clouds and very little from the local interaction.
A Retrospective Approach to Testing the DNA Barcoding Method
Chapple, David G.; Ritchie, Peter A.
2013-01-01
A decade ago, DNA barcoding was proposed as a standardised method for identifying existing species and speeding the discovery of new species. Yet, despite its numerous successes across a range of taxa, its frequent failures have brought into question its accuracy as a short-cut taxonomic method. We use a retrospective approach, applying the method to the classification of New Zealand skinks as it stood in 1977 (primarily based upon morphological characters), and compare it to the current taxonomy reached using both morphological and molecular approaches. For the 1977 dataset, DNA barcoding had moderate-high success in identifying specimens (78-98%), and correctly flagging specimens that have since been confirmed as distinct taxa (77-100%). But most matching methods failed to detect the species complexes that were present in 1977. For the current dataset, there was moderate-high success in identifying specimens (53-99%). For both datasets, the capacity to discover new species was dependent on the methodological approach used. Species delimitation in New Zealand skinks was hindered by the absence of either a local or global barcoding gap, a result of recent speciation events and hybridisation. Whilst DNA barcoding is potentially useful for specimen identification and species discovery in New Zealand skinks, its error rate could hinder the progress of documenting biodiversity in this group. We suggest that integrated taxonomic approaches are more effective at discovering and describing biodiversity. PMID:24244283
A novel method for medical implant in-body localization.
Pourhomayoun, Mohammad; Fowler, Mark; Jin, Zhanpeng
2012-01-01
Wireless communication medical implants are gaining an important role in healthcare systems by controlling and transmitting the vital information of the patients. Recently, Wireless Capsule Endoscopy (WCE) has become a popular method to visualize and diagnose the human gastrointestinal (GI) tract. Estimating the exact location of the capsule when each image is taken is a very critical issue in capsule endoscopy. Most of the common capsule localization methods are based on estimating one or more location-dependent signal parameters like TOA or RSS. However, some unique challenges exist for in-body localization due to the complex nature within the human body. In this paper, we propose a novel one-stage localization method based on spatial sparsity in 3D space. In this method, we directly estimate the location of the capsule (as the emitter) without going through the intermediate stage of TOA or signal strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation with an RF signal following the allowable power and bandwidth ranges according to the standards. The results show that the proposed method is very effective and accurate even in massive multipath and shadowing conditions. PMID:23367237
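The one-stage idea — searching directly over candidate source positions instead of first estimating per-sensor TOA or signal strength — can be sketched with a toy grid search. Everything concrete here is an illustrative assumption (pulse shape, sensor layout, noiseless recordings, normalized propagation speed), and the grid search simply plays the role of the sparsest (single-support) solution; no actual sparse-recovery solver is used.

```python
import math

c = 1.0                      # assumed propagation speed (normalized units)
def pulse(t):                # assumed known transmitted pulse shape
    return math.exp(-((t - 0.5) ** 2) / 0.005)

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
src = (0.62, 0.31)           # hidden emitter (the "capsule")
ts = [k * 0.01 for k in range(200)]
recordings = [[pulse(t - math.dist(src, s) / c) for t in ts] for s in sensors]

def one_stage_locate(recordings, sensors, grid):
    # Direct estimate: for each candidate point, shift the template by the
    # delays that point implies at every sensor and sum the correlations;
    # no intermediate per-sensor TOA is ever estimated.
    def score(p):
        total = 0.0
        for rec, s in zip(recordings, sensors):
            d = math.dist(p, s) / c
            total += sum(ri * pulse(t - d) for ri, t in zip(rec, ts))
        return total
    return max(grid, key=score)

grid = [(0.05 * i, 0.05 * j) for i in range(21) for j in range(21)]
est = one_stage_locate(recordings, sensors, grid)
print(round(est[0], 2), round(est[1], 2))  # close to (0.62, 0.31)
```

In the paper the same principle is applied in 3D with realistic in-body channel models, where skipping the TOA stage is what gives robustness to multipath and shadowing.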
Lezama, José; Randall, Gregory; Morel, Jean-Michel; Grompone von Gioi, Rafael
2016-09-01
We propose a novel approach to the grouping of dot patterns by the good continuation law. Our model is based on local symmetries, and the non-accidentalness principle to determine perceptually relevant configurations. A quantitative measure of non-accidentalness is proposed, showing a good correlation with the visibility of a curve of dots. A robust, unsupervised and scale-invariant algorithm for the detection of good continuation of dots is derived. The results of the proposed method are illustrated on various datasets, including data from classic psychophysical studies. An online demonstration of the algorithm allows the reader to directly evaluate the method. PMID:26408332
Efficient integration method for fictitious domain approaches
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2015-10-01
In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape, which is discretized using a regular Cartesian grid of cells. A spacetree-based adaptive quadrature technique is therefore normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this step accounts for most of the computational effort. To reduce the cost of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem, the dimension of the integral is reduced by one, i.e. instead of evaluating the integral over the whole domain, only its contour needs to be considered. We present the general principles of the integration method and its implementation. Results for several two-dimensional benchmark problems highlight its properties, and the efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
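The dimension-reduction trick can be shown in its simplest form: integrating the constant function over a polygon using only its contour. With F = (x, 0) we have div F = 1, so the area integral equals a boundary flux — the familiar shoelace formula. This is only the elementary special case of the theorem; the paper applies it to general FCM integrands.

```python
def polygon_integral_const(vertices):
    """Integrate f = 1 over a polygon using only its contour
    (divergence/Green's theorem with F = (x, 0)): the domain integral
    collapses to the shoelace sum over boundary edges."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        acc += x0 * y1 - x1 * y0
    return abs(acc) / 2.0

# Unit square described only by its contour:
print(polygon_integral_const([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → 1.0
```

The point mirrored from the abstract: no quadrature points are ever placed inside the domain, only along its boundary.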
Studying geomagnetic pulsation characteristics with the local approximation method
NASA Astrophysics Data System (ADS)
Getmanov, V. G.; Dabagyan, R. A.; Sidorov, R. V.
2016-03-01
A local approximation method based on piecewise sinusoidal models has been proposed in order to study the frequency and amplitude characteristics of geomagnetic pulsations registered at a network of magnetic observatories. It has been established that synchronous variations in the geomagnetic pulsation frequency in the specified frequency band can be studied with the use of calculations performed according to this method. The method was used to analyze the spectral-time structure of Pc3 geomagnetic pulsations registered at the network of equatorial observatories. Local approximation variants have been formed for single-channel and multichannel cases of estimating the geomagnetic pulsation frequency and amplitude, which made it possible to decrease estimation errors via filtering with moving weighted averaging.
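A single-window toy version of piecewise sinusoidal local approximation can be sketched as follows. For each candidate frequency the amplitude and phase enter linearly (y ≈ a·cos + b·sin), so a closed-form 2×2 least-squares fit is solved per frequency and the frequency with the smallest residual wins. The window length, sampling, and candidate grid are illustrative assumptions, not the paper's settings.

```python
import math

def fit_window(t, y, f):
    # Linear least squares for y ≈ a*cos(2πft) + b*sin(2πft) at fixed f,
    # solving the 2x2 normal equations in closed form.
    Scc = Sss = Scs = Syc = Sys = 0.0
    for ti, yi in zip(t, y):
        c, s = math.cos(2*math.pi*f*ti), math.sin(2*math.pi*f*ti)
        Scc += c*c; Sss += s*s; Scs += c*s
        Syc += yi*c; Sys += yi*s
    det = Scc*Sss - Scs*Scs
    a = (Syc*Sss - Sys*Scs) / det
    b = (Sys*Scc - Syc*Scs) / det
    resid = sum((yi - a*math.cos(2*math.pi*f*ti) - b*math.sin(2*math.pi*f*ti))**2
                for ti, yi in zip(t, y))
    return math.hypot(a, b), resid    # amplitude estimate, fit residual

def estimate_freq_amp(t, y, f_grid):
    # Pick the candidate frequency with the smallest residual in the window.
    best = min(f_grid, key=lambda f: fit_window(t, y, f)[1])
    amp, _ = fit_window(t, y, best)
    return best, amp

# Synthetic Pc3-like window: 40 s sampled at 1 Hz, true f = 0.05 Hz, A = 2.
t = [i * 1.0 for i in range(40)]
y = [2.0 * math.sin(2*math.pi*0.05*ti + 0.4) for ti in t]
f_grid = [0.02 + 0.005*k for k in range(13)]   # 0.02 .. 0.08 Hz candidates
f_hat, a_hat = estimate_freq_amp(t, y, f_grid)
print(round(f_hat, 3), round(a_hat, 2))  # → 0.05 2.0
```

Sliding such a window along the record gives the frequency-amplitude time series; the paper's multichannel variant additionally averages estimates across observatories to reduce error.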
A stabilized, symmetric Nitsche method for spatially localized plasticity
NASA Astrophysics Data System (ADS)
Truster, Timothy J.
2016-01-01
A heterogeneous interface method is developed for combining primal displacement and mixed displacement-pressure formulations across nonconforming finite element meshes to treat volume-preserving plastic flow. When the zone of inelastic response is localized within a larger domain, significant computational savings can be achieved by confining the mixed formulation solely to the localized region. The method's distinguishing feature is that the coupling terms for joining dissimilar element types are derived from a time-discrete free energy functional, which is based on a Lagrange multiplier formulation of the interface constraints. Incorporating residual-based stabilizing terms at the interface enables the condensation of the multiplier field, leading to a symmetric Nitsche formulation in which the interface operators respect the differing character of the governing equations in each region. In a series of numerical problems, the heterogeneous interface method achieved comparable results on coarser meshes as those obtained from applying the mixed formulation throughout the domain.
Think Locally: A Prudent Approach to Electronic Resource Management Systems
ERIC Educational Resources Information Center
Gustafson-Sundell, Nat
2011-01-01
A few articles have drawn some amount of attention specifically to the local causes of the success or failure of electronic resource management system (ERMS) implementations. In fact, it seems clear that local conditions will largely determine whether any given ERMS implementation will succeed or fail. This statement might seem obvious, but the…
An Approach to Training for Human Resources in Local Government
ERIC Educational Resources Information Center
Brian, John D.
1975-01-01
The author cites 21 factors that have inhibited the development of effective human resources management strategies in local government over the years, and discusses two types of programs on the role and nature of personnel management and manpower development, especially as they relate to local government. (Author/BP)
A localized meshless method for diffusion on folded surfaces
NASA Astrophysics Data System (ADS)
Cheung, Ka Chun; Ling, Leevan; Ruuth, Steven J.
2015-09-01
Partial differential equations (PDEs) on surfaces arise in a variety of application areas including biological systems, medical imaging, fluid dynamics, mathematical physics, image processing and computer graphics. In this paper, we propose a radial basis function (RBF) discretization of the closest point method. The corresponding localized meshless method may be used to approximate diffusion on smooth or folded surfaces. Our method has the benefit of having an a priori error bound in terms of percentage of the norm of the solution. A stable solver is used to avoid the ill-conditioning that arises when the radial basis functions (RBFs) become flat.
Multiple Shooting-Local Linearization method for the identification of dynamical systems
NASA Astrophysics Data System (ADS)
Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.
2016-08-01
The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm yields a well-established method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives with no more computational cost than that involved in the integration of the ODE itself. Numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameter values under different scenarios of noisy data.
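The ingredient the abstract highlights — integrating the ODE together with the parameter sensitivities and feeding those into Gauss-Newton — can be sketched in a minimal scalar, single-shooting form. The ODE x' = -θx, the forward-Euler integrator (standing in for the LL scheme), and the noise-free data are all illustrative assumptions; the paper's method uses multiple shooting segments and the LL integrator instead.

```python
import math

def simulate(theta, t_obs, dt=1e-3):
    """Integrate x' = -theta*x (x0 = 1) together with the parameter
    sensitivity s = dx/dtheta, which obeys s' = -theta*s - x, s0 = 0
    (forward Euler); return model values and sensitivities at t_obs."""
    xs, ss = [], []
    x, s, t = 1.0, 0.0, 0.0
    for t_target in t_obs:
        while t < t_target - 1e-12:
            x, s = x + dt*(-theta*x), s + dt*(-theta*s - x)
            t += dt
        xs.append(x); ss.append(s)
    return xs, ss

def gauss_newton(t_obs, y_obs, theta0, iters=10):
    """Scalar Gauss-Newton: the integrated sensitivities form the Jacobian
    of the residuals r_i = y_i - x(t_i; theta)."""
    theta = theta0
    for _ in range(iters):
        xs, ss = simulate(theta, t_obs)
        r = [yi - xi for yi, xi in zip(y_obs, xs)]
        J = [-si for si in ss]                 # dr/dtheta
        num = sum(Ji*ri for Ji, ri in zip(J, r))
        den = sum(Ji*Ji for Ji in J)
        theta -= num / den                     # GN step for one parameter
    return theta

true_theta = 0.7
t_obs = [0.5*k for k in range(1, 9)]               # observations at 0.5..4.0
y_obs = [math.exp(-true_theta*t) for t in t_obs]   # noise-free for clarity
theta_hat = gauss_newton(t_obs, y_obs, theta0=0.2)
print(round(theta_hat, 3))  # → 0.7
```

The point of the LL approach in the paper is that the sensitivity system above comes essentially for free alongside the state integration, which is exactly what this sketch imitates with Euler.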
Speeding up local correlation methods: System-inherent domains
NASA Astrophysics Data System (ADS)
Kats, Daniel
2016-07-01
A new approach to determine local virtual space in correlated calculations is presented. It restricts the virtual space in a pair-specific manner on the basis of a preceding approximate calculation adapting automatically to the locality of the studied problem. The resulting pair system-inherent domains are considerably smaller than the starting domains, without significant loss in the accuracy. Utilization of such domains speeds up integral transformations and evaluations of the residual and reduces memory requirements. The system-inherent domains are especially suitable in cases which require high accuracy, e.g., in generation of pair-natural orbitals, or for which standard domains are problematic, e.g., excited-state calculations.
Methods of localization of Lamb wave sources on thin plates
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut
2015-04-01
Signal localization techniques are ubiquitous in both industry and the academic community. We propose a new localization method on plates based on energy amplitude attenuation and inverted source amplitude comparison. The inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset recorded with four Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers (1-26 kHz frequency range). We compare the performance of the technique with classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry, and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz-1 MHz. The experimental setup consists of a glass plate of 80 cm x 40 cm with a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the sensors placed at different locations on the plate. Numerical simulations are performed using a dispersive far-field approximation of plate waves, with signals generated by a Hertzian loading over the plate; the effect of reflections is included using image sources outside the plate boundaries. The proposed method can be modified for implementation in 3D environments, to monitor industrial activities (e.g. borehole drilling/production) or natural brittle systems (e.g. earthquakes, volcanoes, avalanches).
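The inverted source amplitude comparison admits a compact sketch: for a candidate source position, each sensor converts its measured amplitude into an implied source amplitude via the assumed attenuation law, and the true position is where those implied amplitudes agree. The attenuation exponent, plate geometry, and amplitudes below are illustrative assumptions, not the paper's calibrated model.

```python
import math

def implied_source_amps(pos, sensors, amps, alpha=1.0):
    # Each sensor "votes" a source amplitude S_i = A_i * r_i**alpha under
    # an assumed power-law attenuation A = S / r**alpha; at the true
    # source position the votes agree.
    return [a * math.dist(pos, s)**alpha for s, a in zip(sensors, amps)]

def locate(sensors, amps, grid, alpha=1.0):
    # Pick the grid point minimizing the spread (variance) of the votes.
    def spread(pos):
        v = implied_source_amps(pos, sensors, amps, alpha)
        m = sum(v) / len(v)
        return sum((x - m)**2 for x in v)
    return min(grid, key=spread)

# Synthetic 0.8 m x 0.4 m plate: 4 corner sensors, source at (0.55, 0.25).
sensors = [(0.0, 0.0), (0.8, 0.0), (0.0, 0.4), (0.8, 0.4)]
src, S = (0.55, 0.25), 3.0
amps = [S / math.dist(src, s) for s in sensors]       # noiseless amplitudes
grid = [(0.05*i, 0.05*j) for i in range(17) for j in range(9)]
est = locate(sensors, amps, grid)
print(round(est[0], 2), round(est[1], 2))  # → 0.55 0.25
```

With noisy amplitudes the variance minimum broadens rather than disappears, which is why the paper reports graceful degradation with dynamic range and SNR.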
Implementation of the locally renormalized CCSD(T) approaches for arbitrary reference function.
Kowalski, Karol
2005-07-01
Several new variants of the locally-renormalized coupled-cluster (CC) approaches that account for the effect of triples (LR-CCSD(T)) have been formulated and implemented for arbitrary reference states using the TENSOR CONTRACTION ENGINE functionality, enabling the automatic generation of an efficient parallel code. Deeply rooted in the recently derived numerator-denominator-connected (NDC) expansion for the ground-state energy [K. Kowalski and P. Piecuch, J. Chem. Phys. 122, 074107 (2005)], LR-CCSD(T) approximations use, in analogy to the completely renormalized CCSD(T) (CR-CCSD(T)) approach, the three-body moments in constructing the noniterative corrections to the energies obtained in CC calculations with singles and doubles (CCSD). In contrast to the CR-CCSD(T) method, the LR-CCSD(T) approaches discussed in this paper employ local denominators, which assure the additive separability of the energies in the noninteracting system limit when the localized occupied spin-orbitals are employed in the CCSD and LR-CCSD(T) calculations. As clearly demonstrated on several challenging examples, including breaking the bonds of the F2, N2, and CN molecules, the LR-CCSD(T) approaches are capable of providing a highly accurate description of the entire potential-energy surface (PES), while maintaining the characteristic N(7) scaling of the ubiquitous CCSD(T) approach. Moreover, as illustrated numerically for the ozone molecule, the LR-CCSD(T) approaches yield highly competitive values for a number of equilibrium properties including bond lengths, angles, and harmonic frequencies. PMID:16035828
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial positions and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform nearest neighbor classification in a number of synthetic aquifers whenever the available hard data are few and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings, showing that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
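The kernel-weighted classification underlying the method can be sketched in a stripped-down form. Each hard data point contributes a Gaussian weight to its facies, and the facies with the largest summed weight wins at the query location. The paper's contribution — locally adapting the bandwidth and steering the kernel's principal directions along the spatial correlation — is deliberately omitted here; this toy uses a fixed isotropic kernel, and all data values are invented.

```python
import math

def kernel_classify(x, data, h=1.0):
    """Kernel-weighted facies classification: each hard data point
    (position, facies label) contributes a Gaussian weight exp(-d^2/2h^2);
    the facies with the largest summed weight is returned."""
    score = {}
    for pos, facies in data:
        w = math.exp(-math.dist(x, pos)**2 / (2 * h * h))
        score[facies] = score.get(facies, 0.0) + w
    return max(score, key=score.get)

# Two synthetic facies roughly separated near y = 2: sand above, clay below.
data = [((0.5, 3.2), "sand"), ((1.8, 2.9), "sand"), ((3.1, 3.5), "sand"),
        ((0.7, 0.8), "clay"), ((2.2, 1.1), "clay"), ((3.4, 0.6), "clay")]
print(kernel_classify((2.0, 3.0), data))  # → sand
print(kernel_classify((1.5, 0.5), data))  # → clay
```

Replacing the scalar h with a locally estimated anisotropic covariance per data point is what turns this into the steering kernel scheme the abstract describes.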
Method of Deployment of a Space Tethered System Aligned to the Local Vertical
NASA Astrophysics Data System (ADS)
Zakrzhevskii, A. E.
2016-09-01
The object of this research is a space tether of two bodies connected by a flexible massless string. The research objective is the development and theoretical justification of a novel approach to the deployment of the space tether in a circular orbit with its alignment to the local vertical. The approach is based on the theorem on the change of angular momentum. It allows the development of an open-loop control of the tether length that drives the angular momentum of the tether, under the effect of the gravitational torque, to the value corresponding to a fully deployed tether aligned with the local vertical. A worked example of tether deployment demonstrates how simply the method can be applied in practice, as well as how the mathematical model can be validated.
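The angular-momentum balance invoked above can be written schematically for a planar dumbbell model. The notation below (end masses m_1 and m_2 with reduced mass m_r, tether length l(t), pitch angle θ from the local vertical, orbital rate ω_0) is an illustrative assumption, not taken from the paper:

```latex
% Absolute pitch-axis angular momentum and its rate of change under the
% gravity-gradient torque (dumbbell model, circular orbit):
H = m_r\, l^2(t)\,\bigl(\omega_0 + \dot{\theta}\bigr),
\qquad
\frac{\mathrm{d}H}{\mathrm{d}t} = M_g
  = -3\,\omega_0^2\, m_r\, l^2(t)\,\sin\theta\,\cos\theta .
```

The open-loop length program l(t) is then chosen so that M_g drives H to m_r l_f^2 ω_0, the angular momentum of the fully deployed tether of length l_f rotating with the orbital frame at θ = 0.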
NASA Astrophysics Data System (ADS)
Bakkali, M.; Davies, M.; Steadman, J. P.
2012-04-01
We currently have an incomplete understanding of how weather varies across London and how the city's microclimate will intensify levels of heat, cold and air pollution in the future. There is a need to target priority areas of the city and to promote design guidance on climate change mitigation strategies. As a result of improvements in the accuracy of local weather data in London, an opportunity is emerging for designers and planners of the built environment to measure the impact of their designs on local urban climate and to enhance the designer's role in creating more informed design choices at an urban micro-scale. However, modelling the different components of the urban environment separately and then collating and comparing the results invariably leads to discrepancies in the output of local urban climate modelling tools designed to work at different scales. Of particular interest is why marked differences appear between the data extracted from local urban climate models when we change the scale of modelling from city to building scale. An example of such differences is those that have been observed in relation to the London Unified Model and London Site Specific Air Temperature model. In order to avoid these discrepancies we need a method for understanding and assessing how the urban environment impacts on local urban climate as a whole. A step to achieving this is by developing inter-linkages between assessment tools. Accurate information on the net impact of the urban environment on the local urban climate will in turn facilitate more accurate predictions of future energy demand and realistic scenarios for comfort and health. This paper will present two key topographies of London's urban environment that influence local urban climate: land use and street canyons. It will look at the possibilities for developing an integrated approach to modelling London's local urban climate from the neighbourhood to the street scale.
An efficient linear-scaling CCSD(T) method based on local natural orbitals
NASA Astrophysics Data System (ADS)
Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály
2013-09-01
An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)], 10.1063/1.3632085 and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)], 10.1063/1.3218842 with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
A structural alphabet for local protein structures: improved prediction methods.
Etchebest, Catherine; Benros, Cristina; Hazout, Serge; de Brevern, Alexandre G
2005-06-01
Three-dimensional protein structures can be described with a library of 3D fragments that define a structural alphabet. We have previously proposed such an alphabet, composed of 16 patterns of five consecutive amino acids, called Protein Blocks (PBs). These PBs have been used to describe protein backbones and to predict local structures from protein sequences. The Q16 prediction rate reaches 40.7% with an optimization procedure. This article examines two aspects of PBs. First, we determine the effect of the enlargement of databanks on their definition. The results show that the geometrical features of the different PBs are preserved (local RMSD value equal to 0.41 Å on average) and the sequence-structure specificities reinforced when databanks are enlarged. Second, we improve the methods for optimizing PB predictions from sequences, revisiting the optimization procedure and exploring different local prediction strategies. Use of a statistical optimization procedure for the sequence-local structure relation improves prediction accuracy by 8% (Q16 = 48.7%). Better recognition of repetitive structures occurs without losing prediction efficiency for the other local folds. Adding secondary structure prediction improved the Q16 accuracy by only 1%. An entropy index (Neq), strongly related to the RMSD value of the difference between predicted PBs and true local structures, is proposed to estimate prediction quality. The Neq is linearly correlated with the Q16 prediction rate distributions computed for a large set of proteins. An "expected" prediction rate QE16 is deduced with a mean error of 5%. PMID:15822101
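The entropy index is concrete enough to compute: Neq is the exponential of the Shannon entropy of the predicted PB probabilities, i.e. the "equivalent number" of Protein Blocks the prediction hesitates between. The two example probability vectors below are invented for illustration.

```python
import math

def neq(probs):
    """Equivalent number of Protein Blocks: Neq = exp(-sum p ln p).
    Neq = 1 means one PB dominates the prediction; Neq = 16 means all
    16 PBs are equally likely (maximal uncertainty)."""
    return math.exp(-sum(p * math.log(p) for p in probs if p > 0))

confident = [0.9] + [0.1 / 15] * 15   # one dominant block
uniform = [1 / 16] * 16               # no information at all
print(round(neq(confident), 2))       # small: close to a single block
print(round(neq(uniform), 2))         # → 16.0
```

This makes the abstract's claim interpretable: positions with low Neq are the ones where the predicted PB can be trusted, which is why Neq tracks the local RMSD of prediction errors.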
Russian risk assessment methods and approaches
Dvorack, M.A.; Carlson, D.D.; Smith, R.E.
1996-07-01
One of the benefits resulting from the collapse of the Soviet Union is the increased dialogue currently taking place between American and Russian nuclear weapons scientists in various technical arenas. One of these arenas currently being investigated involves collaborative studies which illustrate how risk assessment is perceived and utilized in the Former Soviet Union (FSU). The collaborative studies indicate that, while similarities exist with respect to some methodologies, the assumptions and approaches in performing risk assessments were, and still are, somewhat different in the FSU as opposed to the US. The purpose of this paper is to highlight the present knowledge of risk assessment methodologies and philosophies within the two largest nuclear weapons laboratories of the Former Soviet Union, Arzamas-16 and Chelyabinsk-70. Furthermore, this paper will address the relative progress of new risk assessment methodologies, such as Fuzzy Logic, within the framework of current risk assessment methods at these two institutes.
Liu, Lili; Zhang, Zijun; Mei, Qian; Chen, Ming
2013-01-01
Predicting the subcellular localization of proteins conquers the major drawbacks of high-throughput localization experiments that are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets while poorly on others. Here, we present PSI, a novel high accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint-approach of group decision making strategy and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than the best individual predictor (CELLO) by ~10.7%. The precision of each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
System and method for bullet tracking and shooter localization
Roberts, Randy S.; Breitfeller, Eric F.
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
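The combinatorial least-squares step can be illustrated by its geometric core: given several estimated projectile trajectories (lines in 3-D), a candidate shooter position is the point minimizing the sum of squared distances to those lines. The sketch below is not the patent's algorithm, just that one sub-problem in Python with invented trajectory data; it solves the resulting 3x3 normal equations directly:

```python
def closest_point_to_lines(lines):
    """Least-squares source location: the 3-D point p minimizing the sum of
    squared distances to lines (a, d), where a is a point on the line and d
    a *unit* direction. Solves sum(I - d d^T) p = sum(I - d d^T) a."""
    M = [[0.0] * 3 for _ in range(3)]
    q = [0.0] * 3
    for a, d in lines:
        for i in range(3):
            for j in range(3):
                pij = (1.0 if i == j else 0.0) - d[i] * d[j]
                M[i][j] += pij
                q[i] += pij * a[j]
    # Tiny Gauss-Jordan elimination with partial pivoting for the 3x3 system.
    A = [row[:] + [q[i]] for i, row in enumerate(M)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(3):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][3] / A[i][i] for i in range(3)]

# Three synthetic trajectories that all pass through the point (1, 2, 3).
lines = [((1, 2, 3), (1, 0, 0)), ((1, 2, 3), (0, 1, 0)), ((1, 2, 3), (0, 0, 1))]
print(closest_point_to_lines(lines))
```

With noisy real trajectories the recovered point is a compromise among near-intersecting lines rather than an exact intersection.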
NASA Astrophysics Data System (ADS)
Tao, Wang; Dongying, Wang; Yu, Pei; Wei, Fan
2015-09-01
To address the difficulty of resolving the measured target position in current ultrasonic gas leak detection and localization systems, this paper presents an improved multi-array ultrasonic gas leak TDOA (time difference of arrival) localization and detection method. This method involves arranging ultrasonic transducers at equal intervals in a high-sensitivity detector array, using small differences in ultrasonic sound intensity to determine the scope of the leak and generate a rough localization, and then using an array TDOA localization algorithm to determine the precise leak location. This method is then implemented in an ultrasonic leak detection and localization system. Experimental results showed that the TDOA localization method, using auxiliary sound intensity factors to avoid dependence on a single sound intensity to determine the leak size and location, achieved a localization error of less than 2 mm. The validity and correctness of this approach were thus verified.
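The paper's array algorithm is not reproduced here, but the underlying TDOA idea - choose the position whose predicted arrival-time differences best match the measured ones - can be sketched with a brute-force 2-D grid search. The sensor layout, speed of sound, and grid are illustrative assumptions, not the authors' configuration:

```python
import math

def tdoa_locate(sensors, tdoas, c=343.0, step=0.1, extent=10.0):
    """Brute-force 2-D TDOA localization: pick the grid point whose
    predicted arrival-time differences (relative to sensor 0) best match
    the measured ones in a least-squares sense."""
    best, best_err = None, float("inf")
    n = int(round(extent / step))
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = i * step, j * step
            d0 = math.hypot(x - sensors[0][0], y - sensors[0][1])
            err = 0.0
            for (sx, sy), t in zip(sensors[1:], tdoas):
                d = math.hypot(x - sx, y - sy)
                err += ((d - d0) / c - t) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic check: four corner sensors, a known leak position, ideal TDOAs.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
src = (6.4, 3.2)
d0 = math.hypot(src[0], src[1])
tdoas = [(math.hypot(src[0] - sx, src[1] - sy) - d0) / 343.0
         for sx, sy in sensors[1:]]
print(tdoa_locate(sensors, tdoas))
```

In practice the coarse sound-intensity stage described in the abstract would shrink `extent` before this fine search runs.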
In vitro bioequivalence approach for a locally acting gastrointestinal drug: lanthanum carbonate.
Yang, Yongsheng; Shah, Rakhi B; Yu, Lawrence X; Khan, Mansoor A
2013-02-01
A conventional human pharmacokinetic (PK) in vivo study is often considered as the "gold standard" to determine bioequivalence (BE) of drug products. However, this BE approach is not always applicable to the products not intended to be delivered into the systemic circulation. For locally acting gastrointestinal (GI) products, well designed in vitro approaches might be more practical in that they are able not only to qualitatively predict the presence of the active substance at the site of action but also to specifically assess the performance of the active substance. For example, lanthanum carbonate chewable tablet, a locally acting GI phosphate binder when orally administrated, can release free lanthanum ions in the acid environment of the upper GI tract. The lanthanum ions directly reach the site of action to bind with dietary phosphate released from food to form highly insoluble lanthanum-phosphate complexes. This prevents the absorption of phosphate, consequently reducing the serum phosphate. Thus, using a conventional PK approach to demonstrate BE is meaningless, since plasma levels are not relevant for local efficacy in the GI tract. Additionally, the bioavailability of lanthanum carbonate is less than 0.002%, and therefore, the PK approach is not feasible. Therefore, an alternative assessment method is required. This paper presents an in vitro approach that can be used in lieu of PK or clinical studies to determine the BE of lanthanum carbonate chewable tablets. It is hoped that this information can be used to finalize an in vitro guidance for BE studies of lanthanum carbonate chewable tablets as well as to assist with "in vivo" biowaiver decision making. The scientific information might be useful to the pharmaceutical industry for the purpose of planning and designing future BE studies. PMID:23249191
A novel method to compare protein structures using local descriptors
2011-01-01
Background: Protein structure comparison is one of the most widely performed tasks in bioinformatics. However, currently used methods have problems with the so-called "difficult similarities", including considerable shifts and distortions of structure, sequential swaps and circular permutations. There is a demand for efficient and automated systems capable of overcoming these difficulties, which may lead to the discovery of previously unknown structural relationships. Results: We present a novel method for protein structure comparison based on the formalism of local descriptors of protein structure - DEscriptor Defined Alignment (DEDAL). Local similarities identified by pairs of similar descriptors are extended into global structural alignments. We demonstrate the method's capability by aligning structures in difficult benchmark sets: curated alignments in the SISYPHUS database, as well as SISY and RIPC sets, including non-sequential and non-rigid-body alignments. On the most difficult RIPC set of sequence alignment pairs the method achieves an accuracy of 77% (the second best method tested achieves 60% accuracy). Conclusions: DEDAL is fast enough to be used in whole proteome applications, and by lowering the threshold of detectable structure similarity it may shed additional light on molecular evolution processes. It is well suited to improving automatic classification of structure domains, helping analyze protein fold space, or to improving protein classification schemes. DEDAL is available online at http://bioexploratorium.pl/EP/DEDAL. PMID:21849047
Ball, Nicholas; Cagen, Stuart; Carrillo, Juan-Carlos; Certa, Hans; Eigler, Dorothea; Emter, Roger; Faulhammer, Frank; Garcia, Christine; Graham, Cynthia; Haux, Carl; Kolle, Susanne N; Kreiling, Reinhard; Natsch, Andreas; Mehling, Annette
2011-08-01
An integral part of hazard and safety assessments is the estimation of a chemical's potential to cause skin sensitization. Currently, only animal tests (OECD 406 and 429) are accepted in a regulatory context. Nonanimal test methods are being developed and formally validated. In order to gain more insight into the responses induced by eight exemplary surfactants, a battery of in vivo and in vitro tests was conducted using the same batch of chemicals. In general, the surfactants were negative in the GPMT, KeratinoSens and hCLAT assays and none formed covalent adducts with test peptides. In contrast, all but one were positive in the LLNA. Most were rated as being irritants by the EpiSkin assay with the additional endpoint, IL1-alpha. The weight of evidence based on this comprehensive testing indicates that, with one exception, they are non-sensitizing skin irritants, confirming that the LLNA tends to overestimate the sensitization potential of surfactants. As results obtained from LLNAs are considered as the gold standard for the development of new nonanimal alternative test methods, results such as these highlight the necessity to carefully evaluate the applicability domains of test methods in order to develop reliable nonanimal alternative testing strategies for sensitization testing. PMID:21645576
A modified Monte Carlo 'local importance function transform' method
Keady, K. P.; Larsen, E. W.
2013-07-01
The Local Importance Function Transform (LIFT) method uses an approximation of the contribution transport problem to bias a forward Monte Carlo (MC) source-detector simulation [1-3]. Local (cell-based) biasing parameters are calculated from an inexpensive deterministic adjoint solution and used to modify the physics of the forward transport simulation. In this research, we have developed a new expression for the LIFT biasing parameter, which depends on a cell-average adjoint current to scalar flux (J*/φ*) ratio. This biasing parameter differs significantly from the original expression, which uses adjoint cell-edge scalar fluxes to construct a finite difference estimate of the flux derivative; the resulting biasing parameters exhibit spikes in magnitude at material discontinuities, causing the original LIFT method to lose efficiency in problems with high spatial heterogeneity. The new J*/φ* expression, while more expensive to obtain, generates biasing parameters that vary smoothly across the spatial domain. The result is an improvement in simulation efficiency. A representative test problem has been developed and analyzed to demonstrate the advantage of the updated biasing parameter expression with regards to solution figure of merit (FOM). For reference, the two variants of the LIFT method are compared to a similar variance reduction method developed by Depinay [4, 5], as well as MC with deterministic adjoint weight windows (WW). (authors)
A global/local analysis method for treating details in structural design
NASA Technical Reports Server (NTRS)
Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.
1993-01-01
A method for analyzing global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced since tedious and complex transitioning need not be performed. Second, errors due to the mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as adaptive refinement.
Local Authority Approaches to the School Admissions Process. LG Group Research Report
ERIC Educational Resources Information Center
Rudd, Peter; Gardiner, Clare; Marson-Smith, Helen
2010-01-01
What are the challenges, barriers and facilitating factors connected to the various school admissions approaches used by local authorities? This report gathers the views of local authority admissions officers on the strengths and weaknesses of different approaches, as well as the issues and challenges they face in this important area. It covers:…
Application of advanced reliability methods to local strain fatigue analysis
NASA Technical Reports Server (NTRS)
Wu, T. T.; Wirsching, P. H.
1983-01-01
When design factors are considered as random variables and the failure condition cannot be expressed by a closed form algebraic inequality, computations of risk (or probability of failure) might become extremely difficult or very inefficient. This study suggests using a simple, and easily constructed, second degree polynomial to approximate the complicated limit state in the neighborhood of the design point; a computer analysis relates the design variables at selected points. Then a fast probability integration technique (i.e., the Rackwitz-Fiessler algorithm) can be used to estimate risk. The capability of the proposed method is demonstrated in an example of a low cycle fatigue problem for which a computer analysis is required to perform local strain analysis to relate the design variables. A comparison of the performance of this method is made with a far more costly Monte Carlo solution. Agreement of the proposed method with Monte Carlo is considered to be good.
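The fast probability integration step can be sketched with the classic Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space. Below, a simple linear limit state stands in for the second-degree polynomial surrogate the study fits around the design point; all numbers and the function name are illustrative, not from the paper:

```python
import math

def form_beta(g, n, iters=50, h=1e-6):
    """Hasofer-Lind / Rackwitz-Fiessler iteration: find the reliability
    index beta, the distance from the origin to the limit state g(u) = 0
    in standard normal space. g is the (surrogate) limit-state function."""
    u = [0.0] * n
    for _ in range(iters):
        gu = g(u)
        grad = []
        for i in range(n):  # central-difference gradient
            up, um = u[:], u[:]
            up[i] += h
            um[i] -= h
            grad.append((g(up) - g(um)) / (2 * h))
        norm2 = sum(c * c for c in grad)
        lam = (sum(c * x for c, x in zip(grad, u)) - gu) / norm2
        u = [lam * c for c in grad]  # projected design-point update
    return math.sqrt(sum(x * x for x in u))

# Stand-in linear limit state g(u) = 3 - u1 - u2: beta = 3 / sqrt(2) exactly.
beta = form_beta(lambda u: 3.0 - u[0] - u[1], 2)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # failure probability Phi(-beta)
print(beta, pf)
```

In the paper's setting, `g` would be the fitted quadratic surrogate of the computer-analysis limit state, and the resulting pf would be checked against Monte Carlo.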
Assessing a novel approach for predicting local 3D protein structures from sequence.
Benros, Cristina; de Brevern, Alexandre G; Etchebest, Catherine; Hazout, Serge
2006-03-01
We developed a novel approach for predicting local protein structure from sequence. It relies on the Hybrid Protein Model (HPM), an unsupervised clustering method we previously developed. This model learns three-dimensional protein fragments encoded into a structural alphabet of 16 protein blocks (PBs). Here, we focused on 11-residue fragments encoded as a series of seven PBs and used HPM to cluster them according to their local similarities. We thus built a library of 120 overlapping prototypes (mean fragments from each cluster), with good three-dimensional local approximation, i.e., a mean accuracy of 1.61 Å Cα root-mean-square distance. Our prediction method is intended to optimize the exploitation of the sequence-structure relations deduced from this library of long protein fragments. This was achieved by setting up a system of 120 experts, each defined by logistic regression to optimize the discrimination from sequence of a given prototype relative to the others. For a target sequence window, the experts computed probabilities of sequence-structure compatibility for the prototypes and ranked them, proposing the top scorers as structural candidates. Predictions were defined as successful when a prototype <2.5 Å from the true local structure was found among those proposed. Our strategy yielded a prediction rate of 51.2% for an average of 4.2 candidates per sequence window. We also proposed a confidence index to estimate prediction quality. Our approach predicts from sequence alone and will thus provide valuable information for proteins without structural homologs. Candidates will also contribute to global structure prediction by fragment assembly. PMID:16385557
NASA Astrophysics Data System (ADS)
Grech, Dariusz
We define and confront global and local methods to analyze the financial crash-like events on the financial markets from the critical phenomena point of view. These methods are based respectively on the analysis of log-periodicity and on the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The log-periodicity analysis is made in a daily time horizon, for the whole history (1991-2008) of Warsaw Stock Exchange Index (WIG) connected with the largest developing financial market in Europe. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the power-law-divergent price model usually discussed in log-periodic scenarios for developed markets. Predictions coming from the log-periodicity scenario are verified for all main crashes that took place in WIG history. It is argued that crash predictions within the log-periodicity model strongly depend on the amount of data taken to make a fit and therefore are likely to contain huge inaccuracies. Next, this global analysis is confronted with the local fractal description. To do so, we provide calculation of the so-called local (time dependent) Hurst exponent H_loc for the WIG time series and for main US stock market indices like DJIA and S&P 500. We point out a dependence between the local fractal properties of financial time series and the appearance of crashes on the financial markets. We conclude that the local fractal method seems to work better than the global approach - both for developing and developed markets. The very recent situation on the market, particularly related to the Fed intervention in September 2007 and the situation immediately afterwards is also analyzed within the fractal approach. It is shown in this context how the financial market evolves through different phases of fractional Brownian motion. Finally, the current situation on American market is
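A minimal version of the local (time-dependent) Hurst exponent H_loc can be sketched with a rolling-window rescaled-range (R/S) estimator. The window length, scales, and synthetic "returns" below are illustrative choices only, and serious analyses use more careful estimators (e.g., detrended fluctuation analysis):

```python
import math
import random

def rs(series):
    """Rescaled range R/S of one segment of increments."""
    n = len(series)
    mean = sum(series) / n
    dev, cum, lo, hi = 0.0, 0.0, 0.0, 0.0
    for x in series:
        dev += (x - mean) ** 2
        cum += x - mean          # cumulative deviation (random-walk profile)
        lo, hi = min(lo, cum), max(hi, cum)
    s = math.sqrt(dev / n)
    return (hi - lo) / s if s > 0 else 0.0

def hurst(increments, scales=(8, 16, 32, 64)):
    """Slope of log(mean R/S) vs log(n) over several segment lengths n."""
    xs, ys = [], []
    for n in scales:
        chunks = [increments[i:i + n]
                  for i in range(0, len(increments) - n + 1, n)]
        mean_rs = sum(rs(c) for c in chunks) / len(chunks)
        xs.append(math.log(n))
        ys.append(math.log(mean_rs))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def local_hurst(increments, window=256, stride=64):
    """Time-dependent H_loc: the Hurst exponent over a sliding window."""
    return [hurst(increments[i:i + window])
            for i in range(0, len(increments) - window + 1, stride)]

random.seed(7)
returns = [random.gauss(0.0, 1.0) for _ in range(2048)]  # synthetic returns
h = local_hurst(returns)
print(min(h), max(h))
```

For uncorrelated Gaussian returns the estimates cluster near 0.5 (small-sample R/S is biased somewhat high); a drop of H_loc below 0.5 is the kind of signal the fractal approach inspects near crashes.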
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
A graph-based approach for local and global panorama imaging in cystoscopy
NASA Astrophysics Data System (ADS)
Bergen, Tobias; Wittenberg, Thomas; Münzenmayer, Christian; Chen, Chi Chiung Grace; Hager, Gregory D.
2013-03-01
Inspection of the urinary bladder with an endoscope (cystoscope) is the usual procedure for early detection of bladder cancer. The very limited field of view provided by the endoscope makes it challenging to ensure that the interior bladder wall has been examined completely. Panorama imaging techniques can be used to assist the surgeon and provide a larger view field. Different approaches have been proposed, but generating a panorama image of the entire bladder from real patient data is still a challenging research topic. We propose a graph-based, hierarchical approach to address this problem: first, several local panorama images are generated, followed by a global textured three-dimensional reconstruction of the organ. In this contribution, we address details of the first level of the approach, including a graph-based algorithm to deal with the challenging condition of in-vivo data. This graph strategy gives rise to a robust relocalization strategy in case of tracking failure, an effective keyframe selection process, as well as the concept of building locally optimized sub-maps, which lay the ground for a global optimization process. Our results show the successful application of the method to four in-vivo data sets.
Darabant, András; Rai, Prem Bahadur; Staudhammer, Christina Lynn; Dorji, Tshewang
2016-08-01
Dendrocalamus hamiltonii, a large, clump-forming bamboo, has great potential to contribute towards poverty alleviation efforts across its distributional range. Harvesting methods that maximize yield while they fulfill local objectives and ensure sustainability are a research priority. Documenting local ecological knowledge on the species and identifying local users' goals for its production, we defined three harvesting treatments (selective cut, horseshoe cut, clear cut) and experimentally compared them with a no-intervention control treatment in an action research framework. We implemented harvesting over three seasons and monitored annually and two years post-treatment. Even though the total number of culms positively influenced the number of shoots regenerated, a much stronger relationship was detected between the number of culms harvested and the number of shoots regenerated, indicating compensatory growth mechanisms to guide shoot regeneration. Shoot recruitment declined over time in all treatments as well as the control; however, there was no difference among harvest treatments. Culm recruitment declined with an increase in harvesting intensity. When univariately assessing the number of harvested culms and shoots, there were no differences among treatments. However, multivariate analyses simultaneously considering both variables showed that harvested output of shoots and culms was higher with clear cut and horseshoe cut as compared to selective cut. Given the ease of implementation and issues of work safety, users preferred the horseshoe cut, but the lack of sustainability of shoot production calls for investigating longer cutting cycles. PMID:27113084
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
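The derivation of the conditional probability distribution from the retrieved images' characterization data can be sketched as a similarity-weighted vote over process-step labels. The weighting scheme, labels, and scores below are assumptions for illustration, not the patent's exact formulation:

```python
def errant_step_distribution(retrieved):
    """Conditional probability of each process step given a query defect,
    from (step_label, similarity) pairs attached to the retrieved similar
    defect images. Similarities act as weights; the result sums to 1."""
    totals = {}
    for step, sim in retrieved:
        totals[step] = totals.get(step, 0.0) + sim
    z = sum(totals.values())
    return {step: w / z for step, w in totals.items()}

# Hypothetical retrieval result: step labels with similarity scores.
hits = [("etch", 0.9), ("etch", 0.8), ("litho", 0.6), ("deposition", 0.2)]
dist = errant_step_distribution(hits)
likely = max(dist, key=dist.get)  # highest-probability errant step
print(likely, dist[likely])
```

The final identification step in the abstract corresponds to taking the argmax of this distribution.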
Validating a local Arterial Input Function method for improved perfusion quantification in stroke
Willats, Lisa; Christensen, Soren; K Ma, Henry; A Donnan, Geoffrey; Connelly, Alan; Calamante, Fernando
2011-01-01
In bolus-tracking perfusion magnetic resonance imaging (MRI), temporal dispersion of the contrast bolus due to stenosis or collateral supply presents a significant problem for accurate perfusion quantification in stroke. One means to reduce the associated perfusion errors is to deconvolve the bolus concentration time-course data with local Arterial Input Functions (AIFs) measured close to the capillary bed and downstream of the arterial abnormalities causing dispersion. Because the MRI voxel resolution precludes direct local AIF measurements, they must be extrapolated from the surrounding data. To date, there have been no published studies directly validating these local AIFs. We assess the effectiveness of local AIFs in reducing dispersion-induced perfusion error by measuring the residual dispersion remaining in the local AIF deconvolved perfusion maps. Two approaches to locating the local AIF voxels are assessed and compared with a global AIF deconvolution across 19 bolus-tracking data sets from patients with stroke. The local AIF methods reduced dispersion in the majority of data sets, suggesting more accurate perfusion quantification. Importantly, the validation inherently identifies potential areas for perfusion underestimation. This is valuable information for the identification of at-risk tissue and management of stroke patients. PMID:21629260
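The deconvolution at the heart of this approach can be sketched in its noise-free discrete form: the tissue concentration curve is the AIF convolved with the flow-scaled residue function, so the discrete convolution matrix is lower triangular and can be inverted by forward substitution. Clinical pipelines instead use regularized schemes (e.g., truncated SVD) because real data are noisy; all curves below are invented:

```python
def deconvolve(aif, tissue, dt):
    """Recover k(t) = CBF * residue(t) from tissue(t) = dt * conv(aif, k)
    by forward substitution on the lower-triangular convolution system.
    Requires aif[0] != 0 and noise-free data."""
    n = len(tissue)
    k = [0.0] * n
    for i in range(n):
        acc = sum(aif[i - j] * k[j] for j in range(i)) * dt
        k[i] = (tissue[i] - acc) / (aif[0] * dt)
    return k

dt = 1.0
aif = [1.0, 2.0, 1.5, 0.8, 0.3, 0.1]      # made-up arterial input function
true_k = [0.9, 0.7, 0.5, 0.3, 0.2, 0.1]   # made-up flow-scaled residue
tissue = [dt * sum(aif[i - j] * true_k[j] for j in range(i + 1))
          for i in range(len(aif))]
rec = deconvolve(aif, tissue, dt)
cbf = max(rec)  # perfusion estimate: peak of the deconvolved curve
print(rec, cbf)
```

Dispersion of the AIF (the problem the local-AIF method targets) broadens `aif` relative to the true input, which biases the recovered peak - hence the benefit of measuring the AIF downstream of the abnormality.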
Optimizing Local Memory Allocation and Assignment through a Decoupled Approach
NASA Astrophysics Data System (ADS)
Diouf, Boubacar; Ozturk, Ozcan; Cohen, Albert
Software-controlled local memories (LMs) are widely used to provide fast, scalable, power efficient and predictable access to critical data. While many studies have addressed LM management, keeping hot data in the LM remains a major headache. This paper revisits LM management of arrays in light of recent progress in register allocation, supporting multiple live-range splitting schemes through a generic integer linear program. These schemes differ in the grain of decision points. The model can also be extended to address fragmentation, assigning live ranges to precise offsets. We show that the links between LM management and register allocation have been underexploited, leaving many fundamental questions open and effective applications to be explored.
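As a greatly simplified stand-in for the paper's integer linear program: if whole arrays must be placed in a fixed-size LM to maximize the accesses served from fast memory, the decision is a 0/1 knapsack, solvable by dynamic programming. Array names, sizes, and access counts are invented, and the real model additionally handles live-range splitting and precise offset assignment:

```python
def allocate_lm(arrays, capacity):
    """0/1-knapsack stand-in for local-memory allocation: choose the
    subset of arrays (name, size, access_count) fitting in `capacity`
    that maximizes accesses served from the fast local memory."""
    best = [(0, [])] * (capacity + 1)  # best[c] = (accesses, chosen names)
    for name, size, accesses in arrays:
        for c in range(capacity, size - 1, -1):  # reverse: each array once
            cand = best[c - size][0] + accesses
            if cand > best[c][0]:
                best[c] = (cand, best[c - size][1] + [name])
    return best[capacity]

# Hypothetical arrays: (name, size in LM units, profiled access count).
arrays = [("A", 4, 90), ("B", 3, 60), ("C", 2, 50), ("D", 5, 120)]
print(allocate_lm(arrays, 8))
```

Splitting live ranges, as the paper's ILP allows, relaxes the all-or-nothing placement assumed here and can only improve on this bound.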
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Fernàndez-Garcia, Daniel
2013-09-01
Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore, a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and only needs one parameter (α), which depends only slightly on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
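The global-versus-adaptive distinction can be sketched directly: a global KDE uses one Silverman bandwidth for every particle, while an Abramson-style adaptive KDE widens each kernel by the inverse square root of a pilot density estimate, improving the sparsely sampled tail. The arrival times below are synthetic, and this sketch is not the authors' α-based combination of the two:

```python
import math

def gauss(x, mu, h):
    return math.exp(-0.5 * ((x - mu) / h) ** 2) / (h * math.sqrt(2 * math.pi))

def kde(times, grid, adaptive=False):
    """Gaussian KDE of particle arrival times evaluated on `grid`.
    Global: one Silverman bandwidth for all particles.
    Adaptive (Abramson-style): per-particle bandwidth scaled by the
    inverse square root of a pilot density, widening tail kernels."""
    n = len(times)
    mean = sum(times) / n
    sd = math.sqrt(sum((t - mean) ** 2 for t in times) / n)
    h0 = 1.06 * sd * n ** -0.2  # Silverman's rule of thumb
    if not adaptive:
        hs = [h0] * n
    else:
        pilot = [sum(gauss(t, u, h0) for u in times) / n for t in times]
        g = math.exp(sum(math.log(p) for p in pilot) / n)  # geometric mean
        hs = [h0 * math.sqrt(g / p) for p in pilot]
    return [sum(gauss(x, t, h) for t, h in zip(times, hs)) / n for x in grid]

# Heavy-tailed synthetic arrival times: a crude stand-in for a BTC.
times = [0.5 + 0.1 * i for i in range(20)] + [3.0, 4.5, 7.0, 11.0]
grid = [i * 0.05 for i in range(-100, 801)]  # covers -5 .. 40
area_global = sum(kde(times, grid, adaptive=False)) * 0.05
area_adaptive = sum(kde(times, grid, adaptive=True)) * 0.05
print(round(area_global, 3), round(area_adaptive, 3))
```

Both estimates integrate to about 1; the visible difference is in the tail, where the adaptive kernels are much wider, which is exactly the trade-off the paper's combined method targets.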
Planning and visualization methods for effective bronchoscopic target localization
NASA Astrophysics Data System (ADS)
Gibbs, Jason D.; Taeprasarsit, Pinyo; Higgins, William E.
2012-02-01
Bronchoscopic biopsy of lymph nodes is an important step in staging lung cancer. Lymph nodes, however, lie behind the airway walls and are near large vascular structures - all of these structures are hidden from the bronchoscope's field of view. Previously, we had presented a computer-based virtual bronchoscopic navigation system that provides reliable guidance for bronchoscopic sampling. While this system offers a major improvement over standard practice, bronchoscopists told us that target localization - lining up the bronchoscope before deploying a needle into the target - can still be challenging. We therefore address target localization in two distinct ways: (1) automatic computation of an optimal diagnostic sampling pose for safe, effective biopsies, and (2) a novel visualization of the target and surrounding major vasculature. The planning determines the final pose for the bronchoscope such that the needle, when extended from the tip, maximizes the tissue extracted. This automatically calculated local pose orientation is conveyed in endoluminal renderings by a 3D arrow. Additional visual cues convey obstacle locations and target depths-of-sample from arbitrary instantaneous viewing orientations. With the system, a physician can freely navigate in the virtual bronchoscopic world perceiving the depth-of-sample and possible obstacle locations at any endoluminal pose, not just one pre-determined optimal pose. We validated the system using mediastinal lymph nodes in eleven patients. The system successfully planned for 20 separate targets in human MDCT scans. In particular, given the patient and bronchoscope constraints, our method found that safe, effective biopsies were feasible in 16 of the 20 targets; the four remaining targets required more aggressive safety margins than a "typical" target. In all cases, planning computation took only a few seconds, while the visualizations updated in real time during bronchoscopic navigation.
2016-01-01
Objective: There is evidence of substantial subnational variation in the HIV epidemic. However, robust spatial HIV data are often only available at high levels of geographic aggregation and not at the finer resolution needed for decision making. Therefore, spatial analysis methods that leverage available data to provide local estimates of HIV prevalence may be useful. Such methods exist but have not been formally compared when applied to HIV. Design/methods: Six candidate methods – including those used by the Joint United Nations Programme on HIV/AIDS to generate maps and a Bayesian geostatistical approach applied to other diseases – were used to generate maps and subnational estimates of HIV prevalence across three countries using cluster level data from household surveys. Two approaches were used to assess the accuracy of predictions: internal validation, whereby a proportion of input data is held back (test dataset) to challenge predictions; and comparison with location-specific data from household surveys in earlier years. Results: Each of the methods can generate usefully accurate predictions of prevalence at unsampled locations, with the magnitude of the error in predictions similar across approaches. However, the Bayesian geostatistical approach consistently gave marginally the strongest statistical performance across countries and validation procedures. Conclusions: Available methods may be able to furnish estimates of HIV prevalence at finer spatial scales than the data currently allow. The subnational variation revealed can be integrated into planning to ensure responsiveness to the spatial features of the epidemic. The Bayesian geostatistical approach is a promising strategy for integrating HIV data to generate robust local estimates. PMID:26919737
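As a deliberately simple stand-in for the interpolators compared in the study, inverse-distance weighting illustrates both prediction at unsampled locations and the hold-out internal validation described above. The coordinates and prevalence values are invented, and the Bayesian geostatistical approach the paper favors is substantially more sophisticated:

```python
import math

def idw(known, x, y, power=2.0):
    """Inverse-distance-weighted prevalence estimate at (x, y) from
    (xi, yi, value) survey-cluster points. An exact hit on a sampled
    location returns the observed value."""
    num = den = 0.0
    for xi, yi, v in known:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Hypothetical survey clusters: (x, y, HIV prevalence).
clusters = [(0, 0, 0.10), (1, 0, 0.12), (0, 1, 0.08), (1, 1, 0.30)]
# Internal validation: hold one cluster out and predict at its location.
train, held = clusters[:3], clusters[3]
pred = idw(train, held[0], held[1])
print(pred, abs(pred - held[2]))
```

The large hold-out error at the high-prevalence cluster shows why the paper stresses validation: smooth interpolators understate local hot spots, which model-based geostatistics handles better by modeling spatial covariance explicitly.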
Cryo-Balloon Catheter Localization Based on a Support-Vector-Machine Approach.
Kurzendorfer, Tanja; Mewes, Philip W; Maier, Andreas; Strobel, Norbert; Brost, Alexander
2016-08-01
Cryo-balloon catheters have attracted an increasing amount of interest in the medical community as they can reduce patient risk during left atrial pulmonary vein ablation procedures. As cryo-balloon catheters are not equipped with electrodes, they cannot be localized automatically by electro-anatomical mapping systems. As a consequence, X-ray fluoroscopy has remained an important means for guidance during the procedure. Most recently, image guidance methods for fluoroscopy-based procedures have been proposed, but they provide only limited support for cryo-balloon catheters and require significant user interaction. To improve this situation, we propose a novel method for automatic cryo-balloon catheter detection in fluoroscopic images by detecting the cryo-balloon catheter's built-in X-ray marker. Our approach is based on a blob detection algorithm to find possible X-ray marker candidates. Several of these candidates are then excluded using prior knowledge. For the remaining candidates, several catheter-specific features are introduced. They are processed using a machine learning approach to arrive at the final X-ray marker position. Our method was evaluated on 75 biplane fluoroscopy images from 40 patients at two sites, acquired with a biplane angiography system. The method yielded a success rate of 99.0% in plane A and 90.6% in plane B. The detection achieved an accuracy of 1.00 mm±0.82 mm in plane A and 1.13 mm±0.24 mm in plane B. The localization in 3-D was associated with an average error of 0.36 mm±0.86 mm. PMID:26978663
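The candidate-then-classify structure of such a pipeline can be sketched in a few lines. The difference-of-means blob filter, the border-margin exclusion, and the final scoring below are illustrative stand-ins; the paper's actual blob detector, prior-knowledge rules, and trained SVM are not reproduced here:

```python
import numpy as np

def detect_marker(image, radius=2, margin=4):
    """Toy candidate-then-classify pipeline:
    1) blob response via a difference-of-means filter (stand-in for the blob detector),
    2) prior-knowledge exclusion (candidates too close to the image border),
    3) a score picks the winner (stand-in for the trained SVM; here, blob strength)."""
    h, w = image.shape
    candidates = []
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            inner = image[y - radius:y + radius + 1, x - radius:x + radius + 1].mean()
            outer = image[y - 2 * radius:y + 2 * radius + 1,
                          x - 2 * radius:x + 2 * radius + 1].mean()
            response = outer - inner  # dark blob on a brighter background
            if response > 0.05:
                candidates.append((response, y, x))
    if not candidates:
        return None
    _, y, x = max(candidates)
    return (y, x)

# Synthetic fluoroscopy frame: bright background, one dark disc-shaped marker.
img = np.ones((32, 32))
yy, xx = np.mgrid[0:32, 0:32]
img[(yy - 20) ** 2 + (xx - 12) ** 2 <= 4] = 0.0
print(detect_marker(img))  # -> (20, 12)
```

The response is maximal where the inner window is centered on the dark disc, so the strongest candidate coincides with the marker center in this toy frame.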
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
Spatial and Spectral Methods for Weed Detection and Localization
NASA Astrophysics Data System (ADS)
Vioix, Jean-Baptiste; Douzals, Jean-Paul; Truchetet, Frédéric; Assémat, Louis; Guillemin, Jean-Philippe
2002-12-01
This study concerns the detection and localization of weed patches in order to improve knowledge of weed-crop competition. A remote-controlled aircraft equipped with a camera allowed low-cost and repeatable observations to be obtained. Several processing steps were involved in detecting weed patches, using spatial and then spectral methods. First, a shift of colorimetric basis allowed soil and plant pixels to be separated. Then, a specific algorithm based on a Gabor filter was applied to detect crop rows in the vegetation image. Weed patches were then deduced from the comparison of the vegetation and crop images. Finally, the development of a multispectral acquisition device is introduced. First results on the discrimination of weeds and crops from their spectral properties are shown for laboratory tests. The application of neural networks was also studied.
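The first two processing stages lend themselves to a compact sketch. The Excess Green index below is a common choice for the colorimetric separation step, and the Gabor kernel is parameterized by row spacing and orientation; both are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def excess_green_mask(rgb, thresh=0.1):
    """Colorimetric separation of plant from soil pixels. The Excess Green index
    ExG = 2g - r - b on chromaticity-normalized channels is a common choice;
    the exact colorimetric shift used in the study is not reproduced here."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1) + 1e-9
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return (2 * g - r - b) > thresh

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real-valued Gabor kernel tuned to crop-row spacing (freq, cycles/pixel)
    and row orientation (theta, radians), for filtering the vegetation mask."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

# Tiny image: brownish soil everywhere, one green row of vegetation.
img = np.zeros((4, 4, 3))
img[..., :] = [120, 80, 40]   # soil
img[1, :, :] = [60, 160, 50]  # vegetation row
mask = excess_green_mask(img)
print(mask.sum())  # -> 4 vegetation pixels
```

Convolving the binary vegetation mask with `gabor_kernel` tuned to the sowing geometry highlights crop rows; vegetation pixels with a weak row response are then weed candidates.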
Evaluation of EEG localization methods using realistic simulations of interictal spikes.
Grova, C; Daunizeau, J; Lina, J-M; Bénar, C G; Benali, H; Gotman, J
2006-02-01
Performing an accurate localization of sources of interictal spikes from EEG scalp measurements is of particular interest during the presurgical investigation of epilepsy. The purpose of this paper is to study the ability of six distributed source localization methods to recover extended sources of activated cortex. Due to the frequent lack of a gold standard to evaluate source localization methods, our evaluation was performed in a controlled environment using realistic simulations of EEG interictal spikes, involving several anatomical locations with several spatial extents. Simulated data were corrupted by physiological EEG noise. Simulations involving pairs of sources with the same amplitude were also studied. In addition to standard validation criteria (e.g., geodesic distance or mean square error), we proposed an original criterion dedicated to assess detection accuracy, based on receiver operating characteristic (ROC) analysis. Six source localization methods were evaluated: the minimum norm, the minimum norm weighted by multivariate source prelocalization (MSP), cortical LORETA with or without additional minimum norm regularization, and two derivations of the maximum entropy on the mean (MEM) approach. Results showed that LORETA-based and MEM-based methods were able to accurately recover sources of different spatial extents, with the exception of sources in temporo-mesial and fronto-mesial regions. Those methods, however, also generated several spurious sources, whereas methods using the MSP always located the maximum of activity very accurately but not its spatial extent. These findings suggest that one should always take into account the results from different localization methods when analyzing real interictal spikes. PMID:16271483
Local Bathymetry Estimation Using Variational Inverse Modeling: A Nested Approach
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2014-12-01
Estimation of subreach river bathymetry from remotely-sensed surface velocity data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs). A nested approach is adopted to focus on obtaining an accurate estimate of bathymetry over a small region of interest within a larger complex hydrodynamic system. This approach reduces computational cost significantly. We begin by constructing a minimization problem with a cost function defined by the error between observed and estimated surface velocities, and then apply the SWEs as a constraint on the velocity field. An adjoint SWE model is developed through the use of Lagrange multipliers, converting the unconstrained minimization problem into a constrained one. The adjoint model solution is used to calculate the gradient of the cost function with respect to bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In this application of the algorithm, the 2D depth-averaged flow is computed within a nested framework using Delft3D-FLOW as the forward computational model. First, an outer simulation is generated using discharge rate and other measurements from USGS and NOAA, assuming a uniform bottom-friction coefficient. Then a nested, higher resolution inner model is constructed using open boundary condition data interpolated from the outer model (see figure). Riemann boundary conditions with specified tangential velocities are utilized to ensure a near seamless transition between outer and inner model results. The initial guess bathymetry matches the outer model bathymetry, and the iterative assimilation procedure is used to adjust the bathymetry only for the inner model. The observation data was collected during the ONR Rivet II field exercise for the mouth of the Columbia River near Hammond, OR. A dual beam squinted along-track-interferometric, synthetic
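The assimilation loop has a simple generic skeleton: a forward model maps bathymetry to surface velocity, a cost function measures the misfit to observations, and its gradient drives a descent update. The sketch below replaces the 2D shallow-water solver and its adjoint with a 1D continuity relation u = q/h, for which the gradient is analytic; the forward model, step size, and values are all illustrative:

```python
import numpy as np

def forward(h, q=1.0):
    """Stand-in forward model for the SWE solver: at fixed discharge q per unit
    width, depth-averaged velocity is u = q / h (1D continuity only)."""
    return q / h

def invert_bathymetry(u_obs, h0, iters=2000, lr=0.4):
    """Descend on J(h) = 0.5 * ||u(h) - u_obs||^2 using the analytic gradient
    dJ/dh = (u - u_obs) * du/dh = -(u - u_obs) / h^2, which plays the role the
    adjoint model plays for the real 2D shallow-water system."""
    h = h0.copy()
    for _ in range(iters):
        u = forward(h)
        h -= lr * (-(u - u_obs) / h ** 2)  # gradient-descent update
    return h

h_true = np.array([0.8, 1.0, 1.25])   # "true" depths at three grid points
u_obs = forward(h_true)               # observed surface velocities
h_est = invert_bathymetry(u_obs, h0=np.full(3, 1.5))
print(np.round(h_est, 3))
```

In the paper's setting the gradient is supplied by the adjoint SWE model rather than a closed-form derivative, and the iteration adjusts only the inner (nested) model's bathymetry.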
A Challenging Surgical Approach to Locally Advanced Primary Urethral Carcinoma
Lucarelli, Giuseppe; Spilotros, Marco; Vavallo, Antonio; Palazzo, Silvano; Miacola, Carlos; Forte, Saverio; Matera, Matteo; Campagna, Marcello; Colamonico, Ottavio; Schiralli, Francesco; Sebastiani, Francesco; Di Cosmo, Federica; Bettocchi, Carlo; Di Lorenzo, Giuseppe; Buonerba, Carlo; Vincenti, Leonardo; Ludovico, Giuseppe; Ditonno, Pasquale; Battaglia, Michele
2016-01-01
Abstract Primary urethral carcinoma (PUC) is a rare and aggressive cancer, often underdetected and consequently unsatisfactorily treated. We report a case of advanced PUC, surgically treated with combined approaches. A 47-year-old man underwent transurethral resection of a urethral lesion with histological evidence of a poorly differentiated squamous cancer of the bulbomembranous urethra. Computed tomography (CT) and bone scans excluded metastatic spread of the disease but showed involvement of both corpora cavernosa (cT3N0M0). A radical surgical approach was advised, but the patient refused this and opted for chemotherapy. After 17 months the patient was referred to our department due to the evidence of a fistula in the scrotal area. CT scan showed bilateral metastatic disease in the inguinal, external iliac, and obturator lymph nodes as well as the involvement of both corpora cavernosa. Additionally, a fistula originating from the right corpus cavernosum extended to the scrotal skin. At this stage, the patient accepted the surgical treatment, consisting of different phases. Phase I: Radical extraperitoneal cystoprostatectomy with iliac-obturator lymph nodes dissection. Phase II: Creation of a urinary diversion through a Bricker ileal conduit. Phase III: Repositioning of the patient in lithotomic position for an overturned Y skin incision, total penectomy, fistula excision, and “en bloc” removal of surgical specimens including the bladder, through the perineal breach. Phase IV: Right inguinal lymphadenectomy. The procedure lasted 9-and-a-half hours, was complication-free, and intraoperative blood loss was 600 mL. The patient was discharged 8 days after surgery. Pathological examination documented a T4N2M0 tumor. The clinical situation was stable during the first 3 months postoperatively but then metastatic spread occurred, not responsive to adjuvant chemotherapy, which led to the patient's death 6 months after surgery. Patients with advanced stage tumors of
Qualitative Approaches to Mixed Methods Practice
ERIC Educational Resources Information Center
Hesse-Biber, Sharlene
2010-01-01
This article discusses how methodological practices can shape and limit how mixed methods is practiced and makes visible the current methodological assumptions embedded in mixed methods practice that can shut down a range of social inquiry. The article argues that there is a "methodological orthodoxy" in how mixed methods is practiced that…
Bayesian multiresolution method for local tomography in dental x-ray imaging.
Niinimäki, K; Siltanen, S; Kolehmainen, V
2007-11-21
Dental tomographic cone-beam x-ray imaging devices record truncated projections and reconstruct a region of interest (ROI) inside the head. Image reconstruction from the resulting local tomography data is an ill-posed inverse problem. A new Bayesian multiresolution method is proposed for local tomography reconstruction. The inverse problem is formulated in a well-posed statistical form where a prior model of the target tissues compensates for the incomplete x-ray projection data. Tissues are represented in a wavelet basis, and prior information is modeled in terms of a Besov norm penalty. The number of unknowns in the reconstruction problem is reduced by abandoning fine-scale wavelets outside the ROI. Compared to traditional voxel-based models, this multiresolution approach allows significant reduction of degrees of freedom without loss of accuracy inside the ROI, as shown by 2D examples using simulated and in vitro local tomography data. PMID:17975290
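The dimension-reduction idea (fine-scale wavelets kept only inside the ROI) can be illustrated in 1D with an orthonormal Haar transform. The basis, the single decomposition level, and the signal are illustrative stand-ins; the paper's wavelet choice and Besov-norm prior are not reproduced:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: coarse averages + details."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inverse_step(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def roi_multires(signal, roi):
    """Drop (zero out) fine-scale detail coefficients outside the ROI, mirroring
    the paper's reduction of unknowns: full resolution inside the ROI, coarse
    resolution elsewhere. One decomposition level, 1D, for illustration."""
    a, d = haar_step(signal)
    keep = np.zeros(d.size, dtype=bool)
    keep[roi[0] // 2:(roi[1] + 1) // 2] = True      # details overlapping the ROI
    return haar_inverse_step(a, np.where(keep, d, 0.0)), int(keep.sum())

x = np.array([1., 1., 5., 1., 2., 2., 3., 7.])
recon, n_fine = roi_multires(x, roi=(2, 4))  # ROI covers samples 2..3
print(recon)  # exact inside the ROI, pairwise-averaged outside
```

Only one fine-scale coefficient survives, yet the reconstruction is exact inside the ROI; this is the sense in which degrees of freedom are reduced without loss of accuracy in the region of interest.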
Architecture-Centric Methods and Agile Approaches
NASA Astrophysics Data System (ADS)
Babar, Muhammad Ali; Abrahamsson, Pekka
Agile software development approaches have had significant impact on industrial software development practices. Despite becoming widely popular, there is an increasing perplexity about the role and importance of a system's software architecture in agile approaches [1, 2]. Advocates of the vital role of architecture in achieving the quality goals of large-scale software-intensive systems are skeptical of the scalability of any development approach that does not pay sufficient attention to architectural issues. However, the proponents of agile approaches usually perceive the upfront design and evaluation of architecture as being of less value to the customers of a system. According to them, for example, re-factoring can help fix most of the problems. Many experiences show that large-scale re-factoring often results in significant defects, which are very costly to address later in the development cycle. It is considered that re-factoring is worthwhile as long as the high-level design is good enough to limit the need for large-scale re-factoring [1, 3, 4].
HYPLOSP: a knowledge-based approach to protein local structure prediction.
Chen, Ching-Tai; Lin, Hsin-Nan; Sung, Ting-Yi; Hsu, Wen-Lian
2006-12-01
Local structure prediction can facilitate ab initio structure prediction, protein threading, and remote homology detection. However, the accuracy of existing methods is limited. In this paper, we propose a knowledge-based prediction method that assigns a measure called the local match rate to each position of an amino acid sequence to estimate the confidence of our method. Empirically, the accuracy of the method correlates positively with the local match rate; therefore, we employ it to predict the local structures of positions with a high local match rate. For positions with a low local match rate, we propose a neural network prediction method. To better utilize the knowledge-based and neural network methods, we design a hybrid prediction method, HYPLOSP (HYbrid method to Protein LOcal Structure Prediction) that combines both methods. To evaluate the performance of the proposed methods, we first perform cross-validation experiments by applying our knowledge-based method, a neural network method, and HYPLOSP to a large dataset of 3,925 protein chains. We test our methods extensively on three different structural alphabets and evaluate their performance by two widely used criteria, Maximum Deviation of backbone torsion Angle (MDA) and Q(N), which is similar to Q(3) in secondary structure prediction. We then compare HYPLOSP with three previous studies using a dataset of 56 new protein chains. HYPLOSP shows promising results in terms of MDA and Q(N) accuracy and demonstrates its alphabet-independent capability. PMID:17245815
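The hybrid decision rule is easy to sketch. Here the local match rate is taken as the consensus frequency among database hits for a query fragment; the fragment keys, structure labels, and the 0.7 threshold are illustrative assumptions, not values from the paper:

```python
from collections import Counter

def knowledge_predict(fragment, database):
    """Consensus label and local match rate (consensus frequency among hits)."""
    hits = database.get(fragment, [])
    if not hits:
        return None, 0.0
    label, count = Counter(hits).most_common(1)[0]
    return label, count / len(hits)

def hybrid_predict(fragment, database, nn_fallback, threshold=0.7):
    """HYPLOSP-style decision rule: trust the knowledge-based lookup when the
    local match rate is high, otherwise fall back to the neural predictor."""
    label, rate = knowledge_predict(fragment, database)
    if rate >= threshold:
        return label                  # high-confidence knowledge-based prediction
    return nn_fallback(fragment)      # low match rate: use the neural network

db = {"AKV": ["H", "H", "H", "E"], "GLY": ["C", "E"]}
nn = lambda fragment: "C"  # stand-in for the trained neural network
print(hybrid_predict("AKV", db, nn))  # rate 0.75 -> knowledge-based "H"
print(hybrid_predict("GLY", db, nn))  # rate 0.50 -> falls back to the network
```

The empirical observation that accuracy correlates with the match rate is what justifies switching predictors on this single scalar.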
Periodic local MP2 method employing orbital specific virtuals
NASA Astrophysics Data System (ADS)
Usvyat, Denis; Maschio, Lorenzo; Schütz, Martin
2015-09-01
We introduce orbital specific virtuals (OSVs) to represent the truncated pair-specific virtual space in periodic local Møller-Plesset perturbation theory of second order (LMP2). The OSVs are constructed by diagonalization of the LMP2 amplitude matrices which correspond to diagonal Wannier-function (WF) pairs. Only a subset of these OSVs is adopted for the subsequent OSV-LMP2 calculation, namely, those with largest contribution to the diagonal pair correlation energy and with the accumulated value of these contributions reaching a certain accuracy. The virtual space for a general (non diagonal) pair is spanned by the union of the two OSV sets related to the individual WFs of the pair. In the periodic LMP2 method, the diagonal LMP2 amplitude matrices needed for the construction of the OSVs are calculated in the basis of projected atomic orbitals (PAOs), employing very large PAO domains. It turns out that the OSVs are excellent to describe short range correlation, yet less appropriate for long range van der Waals correlation. In order to compensate for this bias towards short range correlation, we augment the virtual space spanned by the OSVs by the most diffuse PAOs of the corresponding minimal PAO domain. The Fock and overlap matrices in OSV basis are constructed in the reciprocal space. The 4-index electron repulsion integrals are calculated by local density fitting and, for distant pairs, via multipole approximation. New procedures for determining the fit-domains and the distant-pair lists, leading to higher efficiency in the 4-index integral evaluation, have been implemented. Generally, and in contrast to our previous PAO based periodic LMP2 method, the OSV-LMP2 method does not require anymore great care in the specification of the individual domains (to get a balanced description when calculating energy differences) and is in that sense a black box procedure. Discontinuities in potential energy surfaces, which may occur for PAO-based calculations if one is not
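The OSV selection step (diagonalize the diagonal-pair amplitude matrix, then keep eigenvectors until their accumulated contribution reaches a target accuracy) can be sketched with a symmetric stand-in matrix; real LMP2 amplitudes in a PAO basis are not reproduced here:

```python
import numpy as np

def build_osvs(T_ii, accuracy=0.9):
    """Diagonalize a (toy, symmetric) diagonal-pair amplitude matrix and keep
    eigenvectors, ordered by |eigenvalue|, until their accumulated share of the
    total weight reaches `accuracy` - the OSV truncation criterion in spirit.
    The weight measure here is a simplification of the pair-energy criterion."""
    w, v = np.linalg.eigh(T_ii)
    order = np.argsort(-np.abs(w))        # strongest contributions first
    w, v = w[order], v[:, order]
    share = np.cumsum(np.abs(w)) / np.abs(w).sum()
    n_keep = int(np.searchsorted(share, accuracy) + 1)
    return v[:, :n_keep], n_keep

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
osvs, n = build_osvs(A + A.T)
print(n, osvs.shape)  # n retained OSVs, each a vector of length 8
```

The retained columns form an orthonormal set spanning the truncated pair-specific virtual space; the union of two such sets would then span the space for a non-diagonal pair.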
A generalized inversion method: Simultaneous source localization and environmental inversion
NASA Astrophysics Data System (ADS)
Neilsen, Tracianne B.; Knobles, David P.
2002-05-01
The problem of localizing and tracking a source in the shallow ocean is often complicated by uncertainty in the environmental parameters. Likewise, the estimates of environmental parameters in the shallow ocean obtained by inversion methods can be degraded by incorrect information about the source location. To overcome both these common obstacles - environmental mismatch in matched field processing and incorrect source location in geoacoustic inversions - a generalized inversion scheme is developed that includes both source and environmental parameters as unknowns in the inversion. The new technique, called systematic decoupling using rotated coordinates (SDRC), expands the original idea of rotated coordinates [M. D. Collins and L. Fishman, J. Acoust. Soc. Am. 98, 1637-1644 (1995)] by using multiple sets of coherent broadband rotated coordinates, each corresponding to a different set of bounds, to systematically decouple the unknowns in a series of simulated annealing inversions. The results of applying the SDRC inversion method to data from the Area Characterization Test II experiment performed on the New Jersey continental shelf are presented. [Work supported by ONR.]
The morphing method as a flexible tool for adaptive local/non-local simulation of static fracture
NASA Astrophysics Data System (ADS)
Azdoud, Yan; Han, Fei; Lubineau, Gilles
2014-09-01
We introduce a framework that adapts local and non-local continuum models to simulate static fracture problems. Non-local models based on the peridynamic theory are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they remain computationally expensive. As an alternative, we develop an adaptive coupling technique based on the morphing method to restrict the non-local model adaptively during the evolution of the fracture. The rest of the structure is described by local continuum mechanics. We conduct all simulations in three dimensions, using the relevant discretization scheme in each domain, i.e., the discontinuous Galerkin finite element method in the peridynamic domain and the continuous finite element method in the local continuum mechanics domain.
Gong, Zheng; Tran, Duong D; Ratilal, Purnima
2013-11-01
Approaches for instantaneous passive source localization using a towed horizontal receiver array in a random range-dependent ocean waveguide are examined. They include: (1) Moving array triangulation, (2) array invariant, (3) bearings-only target motion analysis in modified polar coordinates via the extended Kalman filter, and (4) bearings-migration minimum mean-square error. These methods are applied to localize and track a vertical source array deployed in the far-field of a towed horizontal receiver array during the Gulf of Maine 2006 Experiment. The source transmitted intermittent broadband pulses in the 300 to 1200 Hz frequency range. A nonlinear matched-filter kernel designed to replicate the acoustic signal measured by the receiver array is applied to enhance the signal-to-noise ratio. The source localization accuracy is found to be highly dependent on source-receiver geometry and the localization approach. For a relatively stationary source drifting at speeds much slower than the receiver array tow-speed, the mean source position can be estimated by moving array triangulation with less than 3% error near broadside direction. For a moving source, the Kalman filter method gives the best performance with 5.5% error. The array invariant is the best approach for localizing sources within the endfire beam of the receiver array with 7% error. PMID:24180781
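Of the four approaches, moving array triangulation is the simplest to sketch: two bearing lines, measured from two successive positions of the towed receiver array, are intersected. Bearings here are angles from the +x axis in a flat 2D geometry, a simplification of at-sea conventions relative to array heading:

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing rays: one from array position p1 at bearing b1,
    one from array position p2 at bearing b2 (bearings in radians)."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters (t1, t2).
    t = np.linalg.solve(np.column_stack([d1, -d2]), np.subtract(p2, p1))
    return np.asarray(p1, float) + t[0] * d1

# Array at (0,0) then at (6,0); a source at (3,4) produces matching bearings.
src = triangulate((0, 0), np.arctan2(4, 3), (6, 0), np.arctan2(4, -3))
print(np.round(src, 6))  # -> [3. 4.]
```

The geometry dependence reported in the paper is visible here: as the two bearings approach parallel (source far off, or short baseline), the 2x2 system becomes ill-conditioned and the position estimate degrades.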
Nonlinear optical methods for cellular imaging and localization.
McVey, A; Crain, J
2014-07-01
Of all the ways in which complex materials (including many biological systems) can be explored, imaging is perhaps the most powerful because it delivers high information content directly. This is particularly relevant in aspects of cellular localization, where the physical proximity of molecules is crucial in biochemical processes. A great deal of effort in imaging has been spent on enabling chemically selective imaging so that only specific features are revealed. This is almost always achieved by adding fluorescent chemical labels to specific molecules. Under appropriate illumination conditions, only the labelled molecules (via their labels) will be visible. The technique is simple and elegant but does suffer from fundamental limitations: (1) the fluorescent labels may fade when illuminated (a phenomenon called photobleaching), thereby constantly decreasing signal contrast over the course of image acquisition; to combat photobleaching one must reduce observation times or apply unfavourably low excitation levels, all of which reduce the information content of images; (2) the fluorescent species may be deactivated by various environmental factors (the general term is fluorescence quenching); (3) the presence of fluorescent labels may introduce unexpected complications or may interfere with the processes of interest; (4) some molecules of interest cannot be labelled. In these circumstances we require a fundamentally different strategy. One of the most promising alternatives is based on a technique called Coherent Anti-Stokes Raman Scattering (CARS). CARS is a fundamentally more complex process than fluorescence, and the experimental procedures and optical systems required to deliver high-quality CARS images are intricate. However, the rewards are correspondingly high: CARS probes the chemically distinct vibrations of the constituent molecules in a complex system and is therefore also chemically selective, as are fluorescence-based methods. Moreover, the potentially severe problems of
Li, Shu; Cao, Yan; Le, Jian; Chen, Gui-Liang; Chai, Yi-Feng; Lu, Feng
2009-02-01
The present paper constructs a new approach, named local straight-line screening (LSLS), to detect Chinese proprietary medicines (CPM) containing undeclared prescription drugs (UPD). Unlike traditional methods used in the analysis of multi-component spectra, LSLS is based on the characteristics of the original infrared spectra of the UPD and the suspected CPM, without any pattern recognition or concentration-model establishment. Spectrum subtraction leads to a variance in the local straight line, which serves as the key to discriminating whether a suspected CPM is adulterated or not. Sibutramine hydrochloride, fenfluramine hydrochloride, sildenafil citrate and lovastatin were used as reference substances of UPD to analyze 16 suspected CPM samples. The results show that LSLS can provide accurate quantitative and qualitative analysis of suspected CPM. The method could potentially be used in the preliminary screening of CPM for possible UPD. PMID:19445196
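A minimal sketch of the straight-line idea follows: subtract a scaled reference spectrum of the suspected drug over its characteristic window and test whether the residual collapses to a near straight line. The synthetic spectra, window, scale factor, and tolerance are illustrative assumptions, not the authors' protocol:

```python
import numpy as np

def line_residual(segment):
    """RMS deviation of a spectral segment from its best-fit straight line."""
    x = np.arange(segment.size)
    fit = np.polyval(np.polyfit(x, segment, 1), x)
    return np.sqrt(np.mean((segment - fit) ** 2))

def lsls_screen(sample, reference, window, scale, tol=0.05):
    """Straight-line test: subtract the scaled reference (drug) spectrum over
    its characteristic window; if the residual there collapses to a near
    straight line, the sample likely contains the undeclared drug."""
    return line_residual(sample[window] - scale * reference[window]) < tol

x = np.linspace(0, 1, 200)
drug = np.exp(-((x - 0.5) / 0.03) ** 2)   # characteristic absorption band
matrix = 0.2 + 0.1 * x                    # smooth herbal-matrix baseline
adulterated, clean = matrix + 0.8 * drug, matrix
win = slice(80, 120)                      # window around the drug band
print(bool(lsls_screen(adulterated, drug, win, scale=0.8)))  # -> True
print(bool(lsls_screen(clean, drug, win, scale=0.8)))        # -> False
```

Subtracting the correct drug at the correct scale leaves only the locally linear matrix baseline, while subtracting it from a clean sample injects the band's negative image, inflating the straight-line residual.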
Strategy for the Development of a DNB Local Predictive Approach Based on Neptune CFD Software
Haynes, Pierre-Antoine; Peturaud, Pierre; Montout, Michael; Hervieu, Eric
2006-07-01
The NEPTUNE project constitutes the thermal-hydraulics part of a long-term joint development program for the next generation of nuclear reactor simulation tools. This project is being carried through by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique), with the co-sponsorship of IRSN (Institut de Radioprotection et de Surete Nucleaire) and AREVA NP. NEPTUNE is a multi-phase flow software platform that includes advanced physical models and numerical methods for each simulation scale (CFD, component, system). NEPTUNE also provides new multi-scale and multi-disciplinary coupling functionalities. This new generation of two-phase flow simulation tools aims at meeting major industrial needs. DNB (Departure from Nucleate Boiling) prediction in PWRs is one of the high-priority needs, and this paper focuses on its anticipated improvement by means of a so-called 'Local Predictive Approach' using the NEPTUNE CFD code. We first present the ambitious 'Local Predictive Approach' anticipated for a better prediction of DNB, i.e. an approach that intends to result in CHF correlations based on relevant local parameters as provided by the CFD modeling. The associated requirements for the two-phase flow modeling are underlined, as well as those for a good level of performance of the NEPTUNE CFD code; hence, the code validation strategy, based on different experimental database types (including separated-effect and integral-type test data), is described. Second, we present comparisons between low-pressure adiabatic bubbly flow experimental data obtained from the DEDALE experiment and the associated numerical simulation results. This study again shows the high potential of the NEPTUNE CFD code, even if, with respect to the aforementioned DNB-related aim, there is still a need for some modeling improvements involving new validation data obtained in thermal-hydraulic conditions representative of PWR ones. Finally, we deal with one of these new experimental data needs
Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction
NASA Astrophysics Data System (ADS)
Terlecki, Pawel; Walczak, Krzysztof
This paper extends the rough set approach to JEP induction based on the notion of a condensed decision table. The original transaction database is transformed into a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons, we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and the local approach have been tested and discussed on dense and synthetically generated sparse datasets.
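The coloring step can be sketched with a greedy first-fit algorithm; the paper does not specify the heuristic here, so first-fit is one common, illustrative choice:

```python
def greedy_coloring(n_items, conflicts):
    """First-fit greedy coloring of the item-conflict graph. Items that share a
    color have no conflict edge between them, so each color class can be
    aggregated into a single relational attribute, shrinking the condensed
    decision table."""
    adj = {i: set() for i in range(n_items)}
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for item in range(n_items):
        used = {color[nb] for nb in adj[item] if nb in color}
        c = 0
        while c in used:        # smallest color not used by a neighbor
            c += 1
        color[item] = c
    return color

# Items 0-1 and 2-3 conflict (co-occur in some transaction); cross pairs do not.
coloring = greedy_coloring(4, [(0, 1), (2, 3)])
print(len(set(coloring.values())))  # -> 2 aggregated attributes suffice
```

Four binary item attributes collapse to two aggregated attributes here; performing the same coloring per transaction, as the paper proposes, can exploit even sparser local conflict graphs.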
Training NOAA Staff on Effective Communication Methods with Local Climate Users
NASA Astrophysics Data System (ADS)
Timofeyeva, M. M.; Mayes, B.
2011-12-01
Since 2002, the NOAA National Weather Service (NWS) Climate Services Division (CSD) has offered training opportunities to NWS staff. As a result of the eight-year development of the training program, NWS offers three training courses and about 25 online distance-learning modules covering various climate topics: climate data and observations, climate variability and change, and NWS national and local climate products, their tools, skill, and interpretation. Leveraging climate information and expertise available at all NOAA line offices and partners allows delivery of the most advanced knowledge and is a critical aspect of the training program. Among the challenges NWS faces in providing local climate services is communicating highly technical scientific information effectively to local users. Addressing this challenge requires a well-trained, climate-literate workforce at the local level, capable of communicating NOAA climate products and services as well as providing climate-sensitive decision support. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically sound messages and amiable communication techniques, such as a storytelling approach, are important in developing an engaged dialog between climate service providers and users. Several pilot projects conducted by NWS CSD in the past year applied the NWS climate services training program to training events for NOAA technical user groups. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training professional user groups required tailoring the instruction to the potential applications of each group. Training technical users identified the following critical issues: (1) Knowledge of target audience expectations, initial knowledge status, and potential use of climate
Virtual local target method for avoiding local minimum in potential field based robot navigation.
Zou, Xi-Yong; Zhu, Jing
2003-01-01
A novel robot navigation algorithm with global path generation capability is presented. Local minima are among the most intractable yet most frequently encountered problems in potential field based robot navigation. By appropriately appointing virtual local targets along the journey, the problem can be solved effectively. The key concept employed in this algorithm is the set of rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results showed that it is very effective in complex obstacle environments. PMID:12765277
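The appoint-and-release logic described above can be sketched in a few lines. The potential functions, the perpendicular placement of the virtual target, and all thresholds below are illustrative assumptions for a minimal sketch, not the authors' published rules.

```python
import numpy as np

def attractive(pos, target, k=1.0):
    # Attractive force pulls the robot toward the (possibly virtual) target.
    return k * (target - pos)

def repulsive(pos, obstacles, k=1.0, d0=2.0):
    # Repulsive force pushes the robot away from obstacles within range d0.
    f = np.zeros(2)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < d0:
            f += k * (1.0 / d - 1.0 / d0) / d**2 * (pos - obs) / d
    return f

def navigate(start, goal, obstacles, step=0.1, max_iter=2000):
    pos, goal = np.asarray(start, float), np.asarray(goal, float)
    target = goal
    path = [pos.copy()]
    for _ in range(max_iter):
        f = attractive(pos, target) + repulsive(pos, obstacles)
        if np.linalg.norm(f) < 1e-3 and np.linalg.norm(pos - goal) > step:
            # Near-zero net force far from the goal: danger of a local minimum.
            # Appoint a virtual local target perpendicular to the goal direction.
            to_goal = (goal - pos) / np.linalg.norm(goal - pos)
            target = pos + 3.0 * np.array([-to_goal[1], to_goal[0]])
            continue
        if np.linalg.norm(pos - target) < step and not np.allclose(target, goal):
            target = goal  # virtual target reached: resume heading for the goal
        pos = pos + step * f / max(np.linalg.norm(f), 1e-9)
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < step:
            break
    return np.array(path)
```

With no obstacle in range, the robot simply walks down the attractive gradient to the goal; the virtual-target branch only activates when the combined field nearly vanishes away from the goal.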
CoRILISA: a local similarity based receptor dependent QSAR method.
Khedkar, Vijay M; Coutinho, Evans C
2015-01-26
Molecular similarity methods have played a crucial role in the success of structure-based and computer-assisted drug design. However, with the exception of CoMSIA, the current approaches for estimating molecular similarity yield a global picture, thereby providing limited information about the local spatial molecular features responsible for the variation of activity with the 3D structure. Application of molecular similarity measures, each related to the functional "pieces" of a ligand-receptor complex, is advantageous over a composite molecular similarity alone and will provide more insights to rationally interpret the activity based on the receptor and ligand structural features. Building on the ideas of our previously published methodologies, CoRIA and LISA, we present here a local molecular similarity based receptor dependent QSAR method termed CoRILISA, which is a hybrid of the two approaches. The method improves on previous techniques by inclusion of receptor attributes for the calculation and comparison of similarity between molecules. For validation studies, the CoRILISA methodology was applied on three large and diverse data sets: glycogen phosphorylase b (GPb), human immunodeficiency virus-1 protease (HIV PR), and cyclin dependent kinase 2 (CDK2) inhibitors. The statistics of the CoRILISA models were benchmarked against the standard CoRIA approach and other published approaches. The CoRILISA models were found to be significantly better, especially in terms of the predictivity for the test set. CoRILISA is able to identify the thermodynamic properties associated with residues that define the active site and modulate the variation in the activity of the molecules. It is a useful tool in the fragment-based drug discovery approach for ligand activity prediction. PMID:25535645
Strategies and Methods: A Variety of Approaches.
ERIC Educational Resources Information Center
Lea, H. Daniel; And Others
This document consists of the second section of a book of readings on issues related to adult career development. The five chapters in this second section focus on strategies and methods for providing adults with career services. "The Adult Career Program Developer as Learner" (H. Daniel Lea and Zandy Leibowitz) presents a five-stage model of…
OCT-based approach to local relaxations discrimination from translational relaxation motions
NASA Astrophysics Data System (ADS)
Matveev, Lev A.; Matveyev, Alexandr L.; Gubarkova, Ekaterina V.; Gelikonov, Grigory V.; Sirotkina, Marina A.; Kiseleva, Elena B.; Gelikonov, Valentin M.; Gladkova, Natalia D.; Vitkin, Alex; Zaitsev, Vladimir Y.
2016-04-01
Multimodal optical coherence tomography (OCT) is an emerging tool for tissue state characterization. Optical coherence elastography (OCE) is an approach to mapping mechanical properties of tissue based on OCT. One of the challenging problems in OCE is eliminating the influence of residual local tissue relaxation, which complicates obtaining information on the elastic properties of the tissue. Alternatively, parameters of the local relaxation itself can be used as an additional informative characteristic for distinguishing normal and pathological tissue states over the OCT image area. Here we briefly present an OCT-based approach to evaluating local relaxation processes in the tissue bulk after sudden unloading of its initial pre-compression. For extracting the local relaxation rate, we evaluate the temporal dependence of local strains that are mapped using our recently developed hybrid phase-resolved/displacement-tracking (HPRDT) approach. This approach allows one to subtract the contribution of global displacements of scatterers in OCT scans and separate the temporal evolution of local strains. Using a sample excised from a coronary artery, we demonstrate that the observed relaxation of local strains can be reasonably fitted by an exponential law, which opens the possibility of characterizing the tissue by a single relaxation time. The estimated local relaxation times are assumed to be related to local biologically relevant processes inside the tissue, such as diffusion, leaking/draining of fluids, local folding/unfolding of fibers, etc. In general, studies of the evolution of such features can provide new metrics for biologically relevant changes in tissue, e.g., in treatment monitoring.
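The exponential fit mentioned above reduces to a short calculation: given a measured decay of local strain after unloading, the single relaxation time follows from linear least squares on the log of the strain. The synthetic curve and tau = 0.5 s below are illustrative values, not data from the paper.

```python
import numpy as np

def relaxation_time(t, strain):
    """Fit strain(t) = s0 * exp(-t / tau) by least squares on log(strain)
    and return the relaxation time tau."""
    slope, _intercept = np.polyfit(t, np.log(strain), 1)
    return -1.0 / slope

# Synthetic relaxation curve with tau = 0.5 s (illustrative).
t = np.linspace(0.0, 2.0, 50)
strain = 0.01 * np.exp(-t / 0.5)
tau = relaxation_time(t, strain)  # recovers 0.5
```

For noisy data a direct nonlinear fit (e.g. scipy.optimize.curve_fit) is more robust, since the log transform amplifies noise at small strains.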
DNA methods: critical review of innovative approaches.
Kok, Esther J; Aarts, Henk J M; Van Hoef, A M Angeline; Kuiper, Harry A
2002-01-01
The presence of ingredients derived from genetically modified organisms (GMOs) in food products in the market place is subject to a number of European regulations that stipulate which products consisting of or containing GMO-derived ingredients should be labeled as such. In order to maintain these labeling requirements, a variety of different GMO detection methods have been developed to screen for either the presence of DNA or protein derived from (approved) GM varieties. Recent incidents in which unapproved GM varieties entered the European market show that more powerful GMO detection and identification methods will be needed to maintain European labeling requirements in an adequate, efficient, and cost-effective way. This report discusses the current state of the art as well as future developments in GMO detection. PMID:12083278
An MRI denoising method using image data redundancy and local SNR estimation.
Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C
2013-09-01
This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-free signal values using the observed MR data samples within local neighborhoods. This is not an efficient procedure, since 3D MR data intrinsically include many similar samples that can be used to improve the estimation results. To overcome this problem, we model MR data as random fields and establish a principled way of choosing samples not only from a local neighborhood but also from a large portion of the given data. To find the similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are automatically chosen from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also addressed. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing noise and preserving the anatomical structures of MR images. PMID:23668996
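The conventional local-neighborhood baseline that the paper improves upon can be sketched as follows: the closed-form LMMSE estimator for Rician-corrupted magnitude data, with the required moments taken from a uniform local window. This is a generic textbook form shown for orientation, not the authors' redundancy-based sample selection; the window size and clipping are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_rician(img, sigma, size=5):
    """Closed-form LMMSE estimate of the noise-free signal A under Rician
    noise, using local moments of the magnitude image M from a uniform window:
        A^2 ~= <M^2> - 2*sigma^2 + K * (M^2 - <M^2>),
        K = 1 - 4*sigma^2*(<M^2> - sigma^2) / Var(M^2)."""
    m2 = uniform_filter(img**2, size)          # local <M^2>
    m4 = uniform_filter(img**4, size)          # local <M^4>
    var2 = np.maximum(m4 - m2**2, 1e-12)       # local Var(M^2)
    k = np.clip(1.0 - 4.0 * sigma**2 * (m2 - sigma**2) / var2, 0.0, 1.0)
    a2 = m2 - 2.0 * sigma**2 + k * (img**2 - m2)
    return np.sqrt(np.maximum(a2, 0.0))        # clamp to keep A real
```

Because every estimate here draws only on a fixed 5x5x(slice) window, similar voxels elsewhere in the volume contribute nothing, which is precisely the inefficiency the paper's nonlocal sample selection addresses.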
Microscopic approach to the generator coordinate method
Haider, Q.; Gogny, D.; Weiss, M.S.
1989-08-22
In this paper, we solve different theoretical problems associated with the calculation of the kernel occurring in the Hill-Wheeler integral equations within the framework of the generator coordinate method. In particular, we extend Wick's theorem to nonorthogonal Bogoliubov states. Expressions for the overlap between Bogoliubov states and for the generalized density matrix are also derived. These expressions are valid even when using an incomplete basis, as is the case in actual calculations. Finally, the Hill-Wheeler formalism is developed for a finite range interaction and the Skyrme force, and evaluated for the latter. 20 refs., 1 fig., 4 tabs.
[Spiritual themes in mental pathology. Methodical approach].
Marchais, P; Randrup, A
1994-10-01
The meaning of themes with spiritual connotations poses complex problems for psychiatry, because these themes induce the observer to project his own convictions and frames of reference onto his investigations. A double detachment (objectivation), concerning both the object of study and the observer, is implied. This makes it possible to study these phenomena by a more rigorous method, to investigate the conditions of their formation, and to demonstrate objectifiable correlates (experienced space and time, the various levels of psychic experience, factors in the environment...). In consequence, the appropriate medical behaviour can be more precisely delineated. PMID:7818230
Zimmermann, Olav; Hansmann, Ulrich H E
2008-09-01
Constraint generation for 3d structure prediction and structure-based database searches benefit from fine-grained prediction of local structure. In this work, we present LOCUSTRA, a novel scheme for the multiclass prediction of local structure that uses two layers of support vector machines (SVM). Using a 16-letter structural alphabet from de Brevern et al. (Proteins: Struct., Funct., Bioinf. 2000, 41, 271-287), we assess its prediction ability for an independent test set of 222 proteins and compare our method to three-class secondary structure prediction and direct prediction of dihedral angles. The prediction accuracy is Q16=61.0% for the 16 classes of the structural alphabet and Q3=79.2% for a simple mapping to the three secondary classes helix, sheet, and coil. We achieve a mean phi(psi) error of 24.74 degrees (38.35 degrees) and a median RMSDA (root-mean-square deviation of the (dihedral) angles) per protein chain of 52.1 degrees. These results compare favorably with related approaches. The LOCUSTRA web server is freely available to researchers at http://www.fz-juelich.de/nic/cbb/service/service.php. PMID:18763837
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. First, the hardened spectra at various depths of the phantom (the LSDs) are estimated with an Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients corresponding to the mean energy of the LSDs are then obtained. Second, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction should be reduced in comparison with the polychromatic reconstruction. The proposed approach was assessed in phantoms composed of no more than two materials, but the correction function was extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimations based on noise-free transmission data was less than 1.5%, and remained acceptable when random Gaussian noise was applied to the transmission data. The cupping artifact in the proposed reconstruction method is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
The active titration method for measuring local hydroxyl radical concentration
NASA Technical Reports Server (NTRS)
Sprengnether, Michele; Prinn, Ronald G.
1994-01-01
We are developing a method for measuring ambient OH by monitoring its rate of reaction with a chemical species. Our technique involves the local, instantaneous release of a mixture of saturated cyclic hydrocarbons (titrants) and perfluorocarbons (dispersants). These species must not normally be present in ambient air above the part-per-trillion concentration. We then track the mixture downwind using a real-time portable ECD tracer instrument. We collect air samples in canisters every few minutes for roughly one hour. We then return to the laboratory and analyze our air samples to determine the ratios of the titrant to dispersant concentrations. The trends in these ratios give us the ambient OH concentration from the relation d ln R/dt = -k[OH]. A successful measurement of OH requires that the trends in these ratios be measurable. We must not perturb ambient OH concentrations. The titrant to dispersant ratio must be spatially invariant. Finally, heterogeneous reactions of our titrant and dispersant species must be negligible relative to the titrant reaction with OH. We have conducted laboratory studies of our ability to measure the titrant to dispersant ratios as a function of concentration down to the few-part-per-trillion level. We have subsequently used these results in a Gaussian puff model to estimate our expected uncertainty in a field measurement of OH. Our results indicate that under a range of atmospheric conditions we expect to be able to measure OH with a sensitivity of 3x10^5 cm^-3. In our most optimistic scenarios, we obtain a sensitivity of 1x10^5 cm^-3. These sensitivity values reflect our anticipated ability to measure the ratio trends. However, because we are also using a rate constant to obtain [OH] from this ratio trend, our accuracy cannot be better than that of the rate constant, which we expect to be about 20 percent.
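The relation d ln R/dt = -k[OH] translates directly into a log-linear regression on the measured ratio samples. The rate constant, initial ratio, and OH level below are illustrative placeholders, not measured values from the study.

```python
import numpy as np

# Hypothetical titrant/dispersant ratio samples over one hour.
k = 1.0e-11        # assumed titrant + OH rate constant, cm^3 molecule^-1 s^-1
oh_true = 3.0e6    # molecule cm^-3, used only to generate synthetic data
t = np.linspace(0.0, 3600.0, 13)        # s, one canister every 5 minutes
R = 2.0 * np.exp(-k * oh_true * t)      # ratio decays as R0 * exp(-k[OH]t)

# Regress ln R against t; the slope is d ln R/dt = -k[OH].
slope, _intercept = np.polyfit(t, np.log(R), 1)
oh_est = -slope / k                     # recovers 3.0e6 on this exact data
```

In a real measurement the scatter of the ratio samples, not the regression itself, limits the sensitivity, and the 20 percent rate-constant uncertainty propagates directly into oh_est.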
A method for localized computation of Pulse Wave Velocity in carotid structure.
Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram
2015-08-01
Pulse Wave Velocity (PWV) promises to be a useful clinical marker for noninvasive diagnosis of atherosclerosis. This work demonstrates the ability to perform localized carotid PWV measurements from the distension waveform derived from the Radio Frequency (RF) ultrasound signal using a carotid phantom setup. The proposed system consists of a low-cost custom-built ultrasound probe and algorithms for envelope detection, arterial wall identification, echo tracking, distension waveform computation and PWV estimation. The method is demonstrated on phantom data acquired using a custom-built prototype non-imaging probe. The proposed approach is non-image based and can be seamlessly integrated into existing clinical ultrasound scanners. PMID:26736653
Fuzzy stochastic elements method. Spectral approach
NASA Astrophysics Data System (ADS)
Sniady, Pawel; Mazur-Sniady, Krystyna; Sieniawska, Roza; Zukowski, Stanislaw
2013-05-01
We study a complex dynamic problem concerning a structure with uncertain parameters subjected to a stochastic excitation. Formulation of such a problem introduces fuzzy random variables for the parameters of the structure and fuzzy stochastic processes for the load process. The uncertainty has two sources: the randomness of structural parameters such as geometry characteristics, material and damping properties, and the load process; and the imprecision of the theoretical model together with incomplete information or uncertain data. All of these have a great influence on the response of the structure. In analyzing such problems we describe the random variability using probability theory and the imprecision using fuzzy sets. Because it is difficult to find an analytic expression for the inversion of the stochastic operator in the stochastic differential equation, a number of approximate methods have been proposed in the literature which can be connected to the finite element method. To evaluate the effects of excitation in the frequency domain we use the spectral density function. Spectral analysis is widely used in the stochastic dynamics of linear systems under stationary random excitation; the concept of the evolutionary spectral density is used in the case of non-stationary random excitation. We solve the considered problem using the fuzzy stochastic finite element method. The solution is based on the idea of a fuzzy random frequency response vector for stationary input excitation and a transient fuzzy random frequency response vector for the fuzzy non-stationary one. We use the fuzzy random frequency response vector and the transient fuzzy random frequency response vector in the context of spectral analysis in order to determine the influence of structural uncertainty on the fuzzy random response of the structure. We study a linear system with random parameters subjected to two particular cases of stochastic excitation in the frequency domain. The first one
Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.
2015-01-01
Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814
Green technology approach towards herbal extraction method
NASA Astrophysics Data System (ADS)
Mutalib, Tengku Nur Atiqah Tengku Ab; Hamzah, Zainab; Hashim, Othman; Mat, Hishamudin Che
2015-05-01
The aim of the present study was to compare the maceration method for selected herbs using green and non-green solvents. Water and d-limonene are green solvents, while chloroform and ethanol are non-green solvents. The selected herbs were Clinacanthus nutans leaf and stem, Orthosiphon stamineus leaf and stem, Sesbania grandiflora leaf, Pluchea indica leaf, Morinda citrifolia leaf and Citrus hystrix leaf. The extracts were compared by determination of total phenolic content. Total phenols were analyzed using a spectrophotometric technique based on the Folin-Ciocalteu reagent. Gallic acid was used as the standard compound and the total phenols were expressed as mg/g gallic acid equivalent (GAE). The most suitable and effective solvent was water, which produced the highest total phenol contents compared to the other solvents. Among the selected herbs, Orthosiphon stamineus leaves contained the highest total phenols at 9.087 mg/g.
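Expressing total phenols as mg/g GAE means reading sample absorbances off a gallic acid standard curve. The calibration points, extract volume, and sample mass below are hypothetical, shown only to make the unit conversion explicit.

```python
import numpy as np

# Hypothetical gallic acid calibration: absorbance vs concentration (mg/L).
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])       # mg/L standards
absorbance = np.array([0.11, 0.27, 0.53, 1.05, 2.08])   # illustrative readings

# Linear standard curve: absorbance = slope * conc + intercept.
slope, intercept = np.polyfit(conc, absorbance, 1)

def gae_mg_per_g(sample_abs, extract_volume_l, sample_mass_g, dilution=1.0):
    """Convert a sample's Folin-Ciocalteu absorbance to mg GAE per g of herb."""
    c_mg_per_l = (sample_abs - intercept) / slope * dilution
    return c_mg_per_l * extract_volume_l / sample_mass_g
```

For example, a reading equivalent to 100 mg/L gallic acid from 50 mL of extract made from 0.5 g of dried leaf corresponds to 10 mg/g GAE.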
NASA Astrophysics Data System (ADS)
Shen, Yanfeng; Cesnik, Carlos E. S.
2016-04-01
This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphic cards. Both the explicit contact formulation and the parallel implementation contribute to LISA's superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
Local Correlation Calculations Using Standard and Renormalized Coupled-Cluster Methods
NASA Astrophysics Data System (ADS)
Piecuch, Piotr; Li, Wei; Gour, Jeffrey
2009-03-01
Local correlation variants of the coupled-cluster (CC) theory with singles and doubles (CCSD) and CC methods with singles, doubles, and non-iterative triples, including CCSD(T) and the completely renormalized CR-CC(2,3) approach, are developed. The main idea of the resulting CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) methods is the realization of the fact that the total correlation energy of a large system can be obtained as a sum of contributions from the occupied orthonormal localized molecular orbitals and their respective occupied and unoccupied orbital domains. The CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) algorithms are characterized by the linear scaling of the total CPU time with the system size and embarrassing parallelism. By comparing the results of the canonical and CIM-CC calculations for normal alkanes and water clusters, it is demonstrated that the CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) approaches recover the corresponding canonical CC correlation energies to within 0.1 % or so, while offering savings in the computer effort by orders of magnitude. By examining the dissociation of dodecane into C11H23 and CH3 and several lowest-energy structures of the (H2O)n clusters, it is shown that the CIM-CC methods accurately reproduce the relative energetics of the corresponding canonical CC calculations.
Slant-hole collimator, dual mode stereotactic localization method
Weisenberger, Andrew G.
2002-01-01
The use of a slant-hole collimator in the gamma camera of dual mode stereotactic localization apparatus allows the acquisition of a stereo pair of scintimammographic images without repositioning of the gamma camera between image acquisitions.
Groundwater abstraction management in Sana'a Basin, Yemen: a local community approach
NASA Astrophysics Data System (ADS)
Taher, Taha M.
2016-07-01
Overexploitation of groundwater resources in the Sana'a Basin, Yemen, is causing severe water shortages and associated water quality degradation. Groundwater abstraction is five times higher than natural recharge and the water-level decline is about 4-8 m/year. About 90 % of the groundwater resource is used for agricultural activities. The situation is further aggravated by the absence of a proper water-management approach for the Basin. Water scarcity in the Wadi As-Ssirr catchment, the study area, is the most severe, and this area has the highest well density (average 6.8 wells/km2) compared with other wadi catchments. A local scheme of groundwater abstraction redistribution is proposed, involving the retirement of a substantial number of wells. The scheme encourages participation of the local community via collective actions to reduce groundwater overexploitation, and ultimately leads to a locally acceptable, manageable groundwater abstraction pattern. The proposed method suggests using 587 wells rather than 1,359, thus reducing the well density to 2.9 wells/km2. Three scenarios are suggested, involving different reductions to the well yields and/or the number of pumping hours for both dry and wet seasons. The third scenario is selected as a first trial for the communities to action; the resulting predicted reduction of 2,371,999 m3 is about 6 % of the estimated annual demand. Initially, the groundwater abstraction volume should not be changed significantly until protective measures are in place, such as improved irrigation efficiency, with the aim of increasing the income of farmers and reducing water use.
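The quoted figures can be checked for internal consistency: 1,359 wells at 6.8 wells/km2 imply a catchment area of about 200 km2, over which 587 wells indeed give roughly 2.9 wells/km2. A two-line sketch of the check, using only values stated in the abstract:

```python
# Figures quoted in the abstract.
wells_now, wells_proposed = 1359, 587
density_now = 6.8                              # wells/km^2

area = wells_now / density_now                 # implied catchment area, ~200 km^2
implied_density = wells_proposed / area        # ~2.9 wells/km^2, as quoted
```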
A Non-Orthogonal Block-Localized Effective Hamiltonian Approach for Chemical and Enzymatic Reactions
Cembran, Alessandro; Payaka, Apirak; Lin, Yen-lin; Xie, Wangshen; Mo, Yirong; Song, Lingchun; Gao, Jiali
2010-01-01
The effective Hamiltonian-molecular orbital and valence bond (EH-MOVB) method based on non-orthogonal block-localized fragment orbitals has been implemented into the program CHARMM for molecular dynamics simulations of chemical and enzymatic reactions, making use of semiempirical quantum mechanical models. Building upon ab initio MOVB theory, we make use of two parameters in the EH-MOVB method to fit the barrier height and the relative energy between the reactant and product state for a given chemical reaction to be in agreement with experiment or high-level ab initio or density functional results. Consequently, the EH-MOVB method provides a highly accurate and computationally efficient QM/MM model for dynamics simulation of chemical reactions in solution. The EH-MOVB method is illustrated by examination of the potential energy surface of the hydride transfer reaction from trimethylamine to a flavin cofactor model in the gas phase. In the present study, we employed the semiempirical AM1 model, which yields a reaction barrier that is more than 5 kcal/mol too high. We use a parameter calibration procedure for the EH-MOVB method similar to that employed to adjust the results of semiempirical and empirical models. Thus, the relative energy of these two diabatic states can be shifted to reproduce the experimental energy of reaction, and the barrier height is optimized to reproduce the desired (accurate) value by adding a constant to the off-diagonal matrix element. The present EH-MOVB method offers a viable approach to characterizing solvent and protein-reorganization effects in the realm of combined QM/MM simulations. PMID:20694172
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust to degradation from noisy features, even those of vast dimension. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
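The classical RELIEF weighting that LHR builds on can be sketched as follows. This is the standard binary-class algorithm with L1 nearest hit/miss, shown as a baseline for orientation; it is not the LHR local-hyperlinear extension itself.

```python
import numpy as np

def relief(X, y, n_iter=100, rng=None):
    """Classic binary RELIEF: for random instances, reward features that
    separate the nearest miss (other class) and penalize features that
    separate the nearest hit (same class) -- a margin-maximization update."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # L1 distance to instance i
        dist[i] = np.inf                      # exclude the instance itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest hit
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest miss
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```

On data where one feature perfectly separates the classes and another is pure noise, the informative feature receives a weight near its full margin while the noise feature's weight stays near zero, which is the instability-free behavior LHR then hardens against noisy, high-dimensional outliers.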
Establishment of local searching methods for orbitrap-based high throughput metabolomics analysis.
Tang, Haiping; Wang, Xueying; Xu, Lina; Ran, Xiaorong; Li, Xiangjun; Chen, Ligong; Zhao, Xinbin; Deng, Haiteng; Liu, Xiaohui
2016-08-15
Our method aims to establish local endogenous metabolite databases economically, without purchasing chemical standards, giving a strong basis for subsequent orbitrap-based high throughput untargeted metabolomics analysis. A new approach is introduced here to construct metabolite databases on the basis of biological sample analysis and mathematical extrapolation. Building local metabolite databases traditionally requires expensive chemical standards, which is barely affordable for most research labs. As a result, most labs working on metabolomics analysis have to refer to public libraries, which is time consuming and limiting for high throughput analysis. Using this strategy, a high throughput orbitrap-based metabolomics platform can be established at almost no cost within a couple of months. It facilitates the application of high throughput metabolomics analysis to identify disease-related biomarkers or investigate biological functions using orbitrap. PMID:27260449
A simple method for GFP- and RFP-based dual color single-molecule localization microscopy.
Platonova, Evgenia; Winterflood, Christian M; Ewers, Helge
2015-06-19
The recent development of single-molecule localization-based super-resolution techniques has afforded a resolution in the nanometer range in light microscopy. The ability to resolve biological structures on this scale by multicolor techniques faces significant challenges which have prevented their widespread use. Here, we provide a generic approach for high-quality simultaneous two-color single-molecule localization microscopy imaging of any combination of GFP- and RFP-tagged proteins with the use of nanobodies. Our method addresses a number of common issues related to two-color experiments, including accuracy and density of labeling as well as chromatic aberration and color-crosstalk with only minimal technical requirements. We demonstrate two-color imaging of various nanoscopic structures and show a compound resolution down to the limit routinely achieved only in a single color. PMID:25806422
NASA Astrophysics Data System (ADS)
Kunc, K.
1983-02-01
It is shown how the variation of lattice dynamical force constants caused by substitutional isoelectronic impurities can be evaluated ab initio. The approach, illustrated with the example of Al in GaAs, is based on local density functional theory and uses ionic pseudopotentials of Al, Ga, and As as the only input; the Hellmann-Feynman theorem is applied in order to extract, from self-consistent electronic charge densities, the forces acting on atoms in periodic patterns in which entire planes of impurities are displaced. The defect-induced variations of interplanar force constants are converted into interatomic ones, which can be compared with those determined by phenomenological models from the measured local mode frequencies. A method is presented which allows one to account for the effect of relaxation without requiring an explicit determination of the latter. Particular problems resulting from dealing with entire planes of defects are discussed and an estimate for relaxation is given.
NASA Astrophysics Data System (ADS)
Penny, Robert D.; Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan; Labov, Simon; Nelson, Karl; Seilhan, Brandon; Valentine, John D.
2015-06-01
A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed, as there are typically fewer measurements than unknowns. In addition, the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
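The MLEM iteration for a Poisson-noise inverse problem can be sketched in a few lines. This is a generic toy illustration: the random forward matrix `A` and data below are stand-ins, not the paper's airborne-detector response model or its terrain segmentation.

```python
import numpy as np

# Minimal MLEM sketch for a Poisson inverse problem y ~ Poisson(A x).
# The forward matrix and data are illustrative stand-ins.
def mlem(A, y, n_iter=200):
    x = np.ones(A.shape[1])                # non-negative initial estimate
    sens = A.sum(axis=0)                   # sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 10))
x_true = rng.uniform(0.0, 5.0, size=10)
y = rng.poisson(A @ x_true).astype(float)
x_hat = mlem(A, y)
```

Because the update is multiplicative, non-negativity of the activity estimate is preserved automatically, which is one reason MLEM suits under-determined Poisson problems.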
Anemone, Robert; Emerson, Charles; Conroy, Glenn
2011-01-01
Chance and serendipity have long played a role in the location of productive fossil localities by vertebrate paleontologists and paleoanthropologists. We offer an alternative approach, informed by methods borrowed from the geographic information sciences and using recent advances in computer science, to more efficiently predict where fossil localities might be found. Our model uses an artificial neural network (ANN) that is trained to recognize the spectral characteristics of known productive localities and other land cover classes, such as forest, wetlands, and scrubland, within a study area based on the analysis of remotely sensed (RS) imagery. Using these spectral signatures, the model then classifies other pixels throughout the study area. The results of the neural network classification can be examined and further manipulated within a geographic information systems (GIS) software package. While we have developed and tested this model on fossil mammal localities in deposits of Paleocene and Eocene age in the Great Divide Basin of southwestern Wyoming, a similar analytical approach can be easily applied to fossil-bearing sedimentary deposits of any age in any part of the world. We suggest that new analytical tools and methods of the geographic sciences, including remote sensing and geographic information systems, are poised to greatly enrich paleoanthropological investigations, and that these new methods should be embraced by field workers in the search for, and geospatial analysis of, fossil primates and hominins. PMID:22034235
Towards Multi-Method Research Approach in Empirical Software Engineering
NASA Astrophysics Data System (ADS)
Mandić, Vladimir; Markkula, Jouni; Oivo, Markku
This paper presents the results of a literature analysis of Empirical Research Approaches in Software Engineering (SE). The analysis explores the reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that should be considered by the researcher when selecting a research approach. Given the reasons for the weak utilization of traditional methods, we propose stronger use of a Multi-Method approach with Pragmatism as the philosophical standpoint.
Rey, Sébastien; Gardy, Jennifer L; Brinkman, Fiona SL
2005-01-01
Background Identification of a bacterial protein's subcellular localization (SCL) is important for genome annotation, function prediction and drug or vaccine target identification. Subcellular fractionation techniques combined with recent proteomics technology permit the identification of large numbers of proteins from distinct bacterial compartments. However, the fractionation of a complex structure like the cell into several subcellular compartments is not a trivial task. Contamination from other compartments may occur, and some proteins may reside in multiple localizations. New computational methods have been reported over the past few years that now permit much more accurate, genome-wide analysis of the SCL of protein sequences deduced from genomes. There is a need to compare such computational methods with laboratory proteomics approaches to identify the most effective current approach for genome-wide localization characterization and annotation. Results In this study, ten subcellular proteome analyses of bacterial compartments were reviewed. PSORTb version 2.0 was used to computationally predict the localization of proteins reported in these publications, and these computational predictions were then compared to the localizations determined by the proteomics study. By using a combined approach, we were able to identify a number of contaminants and proteins with dual localizations, and were able to more accurately identify membrane subproteomes. Our results allowed us to estimate the precision level of laboratory subproteome studies and we show here that, on average, recent high-precision computational methods such as PSORTb now have a lower error rate than laboratory methods. Conclusion We have performed the first focused comparison of genome-wide proteomic and computational methods for subcellular localization identification, and show that computational methods have now attained a level of precision that exceeds that of high-throughput laboratory
A local pseudo arc-length method for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Wang, Xing; Ma, Tian-Bao; Ren, Hui-Lan; Ning, Jian-Guo
2014-12-01
A local pseudo arc-length method (LPALM) for solving hyperbolic conservation laws is presented in this paper. The key idea of this method comes from the original arc-length method, through which the critical points are bypassed by transforming the computational space. The method is based on local changes of physical variables to choose the discontinuous stencil and introduce the pseudo arc-length parameter, and then transforms the governing equations from physical space to arc-length space. In order to solve these equations in the arc-length coordinate, it is necessary to incorporate the velocity of mesh points from the moving mesh method, and then convert the physical variables in arc-length space back to physical space. Numerical examples have proved the effectiveness and generality of the new approach for linear equations, nonlinear equations, and systems of equations with discontinuous initial values. Non-oscillatory solutions can be obtained by adjusting the parameter and the mesh refinement number for problems containing both shock and rarefaction waves.
ERIC Educational Resources Information Center
Smith, Peter K.; Howard, Sharon; Thompson, Fran
2007-01-01
The Support Group Method (SGM), formerly the No Blame Approach, is widely used as an anti-bullying intervention in schools, but has aroused some controversy. There is little evidence from users regarding its effectiveness. We aimed to ascertain the use of and support for the SGM in Local Authorities (LAs) and schools; and obtain ratings of…
Multiple-aperture speckle method applied to local displacement measurements
NASA Astrophysics Data System (ADS)
Ángel, Luciano; Tebaldi, Myrian; Bolognini, Néstor
2007-06-01
The goal of this work is to analyze the measurement capability of a modified speckle photography technique that uses different multiple-aperture pupils in a multiple-exposure scheme. In particular, the rotation case is considered. A point-wise analysis procedure is utilized to obtain the fringes required to access the local displacement measurements. The proposed arrangement allows the simultaneous display in the Fourier plane of several fringe systems, each associated with a different rotation. We experimentally verified that the local displacement measurements can be determined with high precision and accuracy.
[Method of local treatment of trophic ulcers of venous etiology].
Kukol'nikova, E L; Zhukov, B N
2011-01-01
The study is based on the results of local treatment of trophic ulcers in 150 patients with chronic venous insufficiency of the lower extremities. Local treatment consisted of a laser treatment-and-diagnostic unit with a wavelength of λ=0.65 μm and an output power of 30 mW, operated in pulsed mode for 10 minutes once per day for 7-10 days. Computer thermography was applied as an objective criterion for determining the speed and intensity of trophic ulcer healing and for non-contact measurement of ulcer area. True healing of ulcers was achieved in all patients within a period of 14 to 28 days. PMID:21983538
Communication: Improved pair approximations in local coupled-cluster methods
Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis
2015-03-28
In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.
Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems
NASA Astrophysics Data System (ADS)
Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric
We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.
A novel local-phase method of automatic atlas construction in fetal ultrasound
NASA Astrophysics Data System (ADS)
Fathima, Sana; Rueda, Sylvia; Papageorghiou, Aris; Noble, J. Alison
2011-03-01
In recent years, fetal diagnostics have relied heavily on clinical assessment and biometric analysis of manually acquired ultrasound images. There is a profound need for automated and standardized evaluation tools to characterize fetal growth and development. This work addresses this need through the novel use of feature-based techniques to develop evaluators of fetal brain gestation. The methodology comprises an automated database-driven 2D/3D image atlas construction method, which includes several iterative processes. A unique database was designed to store fetal image data acquired as part of the Intergrowth-21st study. This database drives the proposed automated atlas construction methodology using local phase information to perform affine registration with normalized mutual information as the similarity parameter, followed by wavelet-based image fusion and averaging. The unique feature-based application of local phase and wavelet fusion towards creating the atlas reduces the intensity dependence and difficulties in registering ultrasound images. The method is evaluated on fetal transthalamic head ultrasound images at 20 weeks' gestation. The results show that the proposed method is more robust to intensity variations than standard intensity-based methods. Results also suggest that the feature-based approach improves the registration accuracy needed in creating a clinically valid ultrasound image atlas.
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online. PMID:25494507
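The per-pixel feedback idea can be illustrated with a deliberately simplified sketch: each pixel keeps a running background estimate and its own detection threshold, and the threshold is raised where the pixel keeps flagging change and relaxed where it stays quiet. This is only loosely in the spirit of SuBSENSE; the actual method also uses spatiotemporal binary features and multi-sample background models, and all constants below are illustrative.

```python
import numpy as np

# Minimal pixel-level change detection with per-pixel threshold feedback.
def detect_sequence(frames, alpha=0.05, lr=0.1):
    bg = frames[0].astype(float)            # running background estimate
    thresh = np.full(bg.shape, 20.0)        # per-pixel sensitivity
    masks = []
    for f in frames[1:]:
        diff = np.abs(f - bg)
        fg = diff > thresh                  # foreground where change exceeds threshold
        # feedback: raise thresholds where pixels flicker, relax elsewhere
        thresh = np.where(fg, thresh * (1 + lr),
                          np.maximum(thresh * (1 - lr), 5.0))
        # update the background only where the scene looks static
        bg = np.where(fg, bg, (1 - alpha) * bg + alpha * f)
        masks.append(fg)
    return masks

rng = np.random.default_rng(0)
frames = [np.full((8, 8), 100.0) + rng.normal(0, 1, (8, 8)) for _ in range(10)]
frames[5][2:5, 2:5] += 80.0                 # inject a bright "object" in frame 5
masks = detect_sequence(frames)
```

The injected object is flagged while small sensor noise is absorbed by the adapting thresholds and background.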
Damage localization in a residential-sized wind turbine blade by use of the SDDLV method
NASA Astrophysics Data System (ADS)
Johansen, R. J.; Hansen, L. M.; Ulriksen, M. D.; Tcherniak, D.; Damkilde, L.
2015-07-01
The stochastic dynamic damage location vector (SDDLV) method has previously proved to facilitate effective damage localization in truss- and plate-like structures. The method is based on interrogating damage-induced changes in transfer function matrices in cases where these matrices cannot be derived explicitly due to unknown input. Instead, vectors from the kernel of the transfer function matrix change are utilized; vectors which are derived on the basis of the system and state-to-output mapping matrices from output-only state-space realizations. The idea is then to convert the kernel vectors associated with the lowest singular values into static pseudo-loads and apply these alternately to an undamaged reference model with known stiffness matrix. By doing so, the stresses in the potentially damaged elements will, theoretically, approach zero. The present paper demonstrates an application of the SDDLV method for localization of structural damages in a cantilevered residential-sized wind turbine blade. The blade was excited by an unmeasured multi-impulse load and the resulting dynamic response was captured through accelerometers mounted along the blade. The static pseudo-loads were applied to a finite element (FE) blade model, which was tuned against the modal parameters of the actual blade. In the experiments, an undamaged blade configuration was analysed along with different damage scenarios, thereby testing the applicability of the SDDLV method.
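The core pseudo-load idea can be shown in its static form on a toy fixed-free spring chain: a null vector of the flexibility change, applied as a load to the undamaged model, produces (near-)zero force in the damaged element. The SDDLV method builds this on output-only transfer-function estimates rather than known stiffness matrices; the chain and stiffness values here are illustrative.

```python
import numpy as np

# Static damage-locating-vector sketch on a fixed-free spring chain.
def chain_stiffness(k):
    # spring k[0] ties dof 0 to ground; spring k[i] ties dof i-1 to dof i
    n = len(k)
    K = np.zeros((n, n))
    K[0, 0] += k[0]
    for i in range(1, n):
        K[i - 1, i - 1] += k[i]
        K[i, i] += k[i]
        K[i - 1, i] -= k[i]
        K[i, i - 1] -= k[i]
    return K

k_ref = np.array([1.0, 1.0, 1.0, 1.0])
k_dam = k_ref.copy()
k_dam[2] *= 0.5                                  # 50% stiffness loss in spring 2
K_u, K_d = chain_stiffness(k_ref), chain_stiffness(k_dam)

dF = np.linalg.inv(K_d) - np.linalg.inv(K_u)     # change in static flexibility
_, sv, Vt = np.linalg.svd(dF)
load = Vt[-1]                                    # pseudo-load from the kernel of dF
u = np.linalg.solve(K_u, load)                   # static response of reference model
stretch = np.concatenate(([u[0]], np.diff(u)))   # elongation of each spring
forces = k_ref * stretch                         # element forces under the pseudo-load
```

For a single damaged element the flexibility change has rank one, so its kernel is easy to sample; the force in the damaged spring vanishes while other elements generally remain loaded, which localizes the damage.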
NASA Astrophysics Data System (ADS)
Quinn, Paul; O'Donnell, Greg; Owen, Gareth
2014-05-01
This poster presents a case study that highlights two crucial aspects of a catchment-based flood management project that were used to encourage uptake of an effective flood management strategy. Specifically, (1) the role of detailed local scale observations and (2) a modelling method informed by these observations. Within a 6km2 study catchment, Belford UK, a number of Runoff Attenuation Features (RAFs) have been constructed (including ponds, wetlands and woody debris structures) to address flooding issues in the downstream village. The storage capacity of the RAFs is typically small (200 to 500m3), hence there was skepticism as to whether they would work during large flood events. Monitoring was performed using a dense network of water level recorders installed both within the RAFs and within the stream network. Using adjacent upstream and downstream water levels in the stream network and observations within the actual ponds, a detailed understanding of the local performance of the RAFs was gained. However, despite understanding the local impacts of the features, the impact on the downstream hydrograph at the catchment scale could still not be ascertained with any certainty. The local observations revealed that the RAFs typically filled on the rising limb of the hydrograph; hence there was no available storage at the time of arrival of a large flow peak. However, it was also clear that an impact on the rising limb of the hydrograph was being observed. This knowledge of the functioning of individual features was used to create a catchment model, in which a network of RAFs could then be configured to examine the aggregated impacts. This Pond Network Model (PNM) was based on the observed local physical relationships and allowed a user specified sequence of ponds to be configured into a cascade structure. It was found that there was a minimum number of RAFs needed before an impact on peak flow was achieved for a large flood event. The number of RAFs required in the
New approach to dynamical Monte Carlo methods: application to an epidemic model
NASA Astrophysics Data System (ADS)
Aiello, O. E.; da Silva, M. A. A.
2003-09-01
In this work we introduce a new approach to dynamical Monte Carlo methods to simulate Markovian processes. We apply this approach to formulate and study a generalized epidemic SIRS model. The results are in excellent agreement with the fourth-order Runge-Kutta method in a region of deterministic solution. We also show that purely local interactions reproduce a Poissonian-like process at the mesoscopic level. The simulations for this case are checked self-consistently using a stochastic version of the Euler method.
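A standard kinetic (Gillespie-type) dynamical Monte Carlo for a well-mixed SIRS model can be sketched as follows: exponential waiting times between events, with the next event chosen in proportion to its rate. This is the textbook scheme rather than the paper's specific formulation, and the rates `beta`, `gamma`, `xi` are illustrative values.

```python
import random

# Gillespie-type dynamical Monte Carlo for a well-mixed SIRS model.
def sirs_gillespie(S, I, R, beta=0.3, gamma=0.1, xi=0.05, t_max=200.0, seed=1):
    rng = random.Random(seed)
    N, t = S + I + R, 0.0
    while t < t_max and I > 0:
        r_inf = beta * S * I / N        # infection:     S -> I
        r_rec = gamma * I               # recovery:      I -> R
        r_los = xi * R                  # immunity loss: R -> S
        total = r_inf + r_rec + r_los
        t += rng.expovariate(total)     # exponential time to next event
        u = rng.random() * total        # pick an event proportional to its rate
        if u < r_inf:
            S, I = S - 1, I + 1
        elif u < r_inf + r_rec:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
    return S, I, R

S_end, I_end, R_end = sirs_gillespie(990, 10, 0)
```

Averaging many such stochastic trajectories is what one would compare against a deterministic Runge-Kutta integration of the SIRS rate equations.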
Lin, Tzu-Hsuan; Lu, Yung-Chi; Hung, Shih-Lin
2014-01-01
This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building. PMID:24672359
NASA Astrophysics Data System (ADS)
Guthiga, Paul M.; Mburu, John; Holm-Mueller, Karin
2008-05-01
Satisfaction of communities living close to forests with forest management authorities is essential for ensuring continued support for conservation efforts. However, more often than not, community satisfaction is not systematically elicited, analyzed, and incorporated into conservation decisions. This study attempts to elicit levels of community satisfaction with three management approaches to Kakamega forest in Kenya and to analyze the factors influencing them. Three distinct management approaches are applied by three different authorities: an incentive-based approach of the Forest Department (FD), a protectionist approach of the Kenya Wildlife Service (KWS), and a quasi-private incentive-based approach of the Quakers Church Mission (QCM). Data were obtained from a random sample of about 360 households living within a 10-km radius of the forest margin. The protectionist approach was ranked highest overall for its performance in forest management. Results indicate that households are influenced by different factors in their ranking of the management approaches. Educated households and those located far from market centers are likely to be dissatisfied with all three management approaches. The distance of households from the forest margin negatively influences satisfaction with the protectionist approach, whereas land size, a proxy for durable assets, has a similar effect on the private incentive-based approach of the QCM. In conclusion, this article indicates a number of policy implications that can enable the different authorities and their management approaches to gain the approval of local communities.
Simplified approaches to some nonoverlapping domain decomposition methods
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former constructs a subspace preconditioner based on a preconditioner on the whole space, whereas the latter constructs a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
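A toy additive Schwarz ("parallel subspace correction") step is easy to demonstrate: two overlapping local Dirichlet solves are summed to form a preconditioner, here driving a damped Richardson iteration on a 1D Poisson problem. The problem size, overlap, damping factor, and iteration count are illustrative choices, not taken from the talk.

```python
import numpy as np

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian (Dirichlet ends)
b = np.ones(n)

dom1 = np.arange(0, 24)            # overlapping index sets (overlap = 8 points)
dom2 = np.arange(16, n)

def schwarz_precond(r):
    # additive Schwarz: solve a local Dirichlet problem on each subdomain
    # and sum the corrections
    z = np.zeros_like(r)
    for idx in (dom1, dom2):
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

x = np.zeros(n)
for _ in range(300):               # damped preconditioned Richardson iteration
    x += 0.5 * schwarz_precond(b - A @ x)

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

In practice the same preconditioner would be used inside a Krylov method such as conjugate gradients; the plain Richardson loop keeps the sketch self-contained.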
NASA Astrophysics Data System (ADS)
Song, Kechen; Yan, Yunhui
2013-11-01
Automatic recognition of hot-rolled steel strip surface defects is important to steel surface inspection systems. In order to improve the recognition rate, a new, simple, yet noise-robust feature descriptor named the adjacent evaluation completed local binary pattern (AECLBP) is proposed for defect recognition. In the proposed approach, an adjacent evaluation window, constructed around each neighbor, is used to modify the threshold scheme of the completed local binary pattern (CLBP). Experimental results demonstrate that the proposed approach maintains its recognition performance under intra-class feature variations and under illumination and grayscale changes. Even in the toughest situation, with additive Gaussian noise, the AECLBP still achieves moderate recognition accuracy. In addition, the strategy of using an adjacent evaluation window can also be applied to other local binary pattern (LBP) variants.
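The building block that CLBP and the proposed AECLBP refine is the basic 3x3 local binary pattern: each neighbor is thresholded against the center pixel and contributes one bit to an 8-bit code. The sketch below implements only this plain LBP, not the adjacent-evaluation thresholding of the paper.

```python
import numpy as np

# Basic 3x3 local binary pattern: one 8-bit code per interior pixel.
def lbp_image(img):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # 8 neighbors in a fixed circular order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh >= center).astype(np.uint8) << bit)
    return out

img = np.arange(25).reshape(5, 5)   # constant-gradient test image
codes = lbp_image(img)
```

On the constant-gradient test image every interior pixel sees the same sign pattern (neighbors below and to the right are larger), so all codes are identical; histograms of such codes over a texture patch form the feature vector used for classification.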
Grid-Search Location Methods for Ground-Truth Collection From Local and Regional Seismic Networks
William Rodi; Craig A. Schultz; Gardar Johannesson; Stephen C. Myers
2005-05-13
This project investigated new techniques for improving seismic event locations derived from regional and local networks. The techniques include a new approach to empirical travel-time calibration that simultaneously fits data from multiple stations and events, using a generalization of the kriging method, and predicts travel-time corrections for arbitrary event-station paths. We combined this calibration approach with grid-search event location to produce a prototype new multiple-event location method that allows the use of spatially well-distributed events and takes into account correlations between the travel-time corrections from proximate event-station paths. Preliminary tests with a high quality data set from Nevada Test Site explosions indicated that our new calibration/location method offers improvement over the conventional multiple-event location methods now in common use, and is applicable to more general event-station geometries than the conventional methods. The tests were limited, however, and further research is needed to fully evaluate, and improve, the approach. Our project also demonstrated the importance of using a realistic model for observational errors in an event location procedure. We took the initial steps in developing a new error model based on mixture-of-Gaussians probability distributions, which possess the properties necessary to characterize the complex arrival time error processes that can occur when picking low signal-to-noise arrivals. We investigated various inference methods for fitting these distributions to observed travel-time residuals, including a Markov Chain Monte Carlo technique for computing Bayesian estimates of the distribution parameters.
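The grid-search half of the scheme reduces to scanning candidate epicenters and keeping the one that minimizes demeaned travel-time residuals (demeaning absorbs the unknown origin time). In this sketch a uniform velocity model stands in for the project's kriging-based travel-time corrections, and the station geometry and grid are illustrative.

```python
import numpy as np

# Grid-search epicenter location with a uniform-velocity travel-time model.
def locate(stations, arrivals, v=6.0, grid=np.linspace(0.0, 100.0, 101)):
    best, best_misfit = None, np.inf
    for x in grid:
        for y in grid:
            tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
            res = arrivals - tt
            res = res - res.mean()      # origin time is a free parameter
            misfit = float(res @ res)   # least-squares residual misfit
            if misfit < best_misfit:
                best, best_misfit = (x, y), misfit
    return best

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_xy = np.array([40.0, 60.0])
arrivals = np.hypot(stations[:, 0] - true_xy[0], stations[:, 1] - true_xy[1]) / 6.0
est = locate(stations, arrivals)
```

In a calibrated system the `tt` line would add path-specific corrections predicted by the kriging model, and the squared-residual misfit would be replaced by one derived from a realistic (e.g., mixture-of-Gaussians) error model.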
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and a voting scheme optimized with the Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in a video-based system for automatic vehicle classification and (ii) tree growth ring analysis on a tree cross-cut image.
New methods for large scale local and global optimization
NASA Astrophysics Data System (ADS)
Byrd, Richard; Schnabel, Robert
1994-07-01
We have pursued all three topics described in the proposal during this research period. A large amount of effort has gone into the development of large scale global optimization methods for molecular configuration problems. We have developed new general purpose methods that combine efficient stochastic global optimization techniques with several new, more deterministic techniques that account for most of the computational effort, and the success, of the methods. We have applied our methods to Lennard-Jones problems with up to 75 atoms, to water clusters with up to 31 molecules, and to polymers with up to 58 amino acids. The results appear to be the best so far by general purpose optimization methods, and appear to be leading to some interesting chemistry issues. Our research on the second topic, tensor methods, has addressed several areas. We have designed and implemented tensor methods for large sparse systems of nonlinear equations and nonlinear least squares, and have obtained excellent test results on a wide range of problems. We have also developed new tensor methods for nonlinearly constrained optimization problems, and have obtained promising theoretical and preliminary computational results. Finally, on the third topic, limited memory methods for large scale optimization, we have developed and implemented new, extremely efficient limited memory methods for bound constrained problems, and new limited memory trust region methods, both using our recently developed compact representations for quasi-Newton matrices. Computational test results for both methods are promising.
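The core of any limited memory quasi-Newton method is the two-loop recursion, which applies an approximate inverse Hessian built from only the last few curvature pairs. The sketch below shows this standard L-BFGS recursion with a simple Armijo backtracking line search on a convex quadratic; the report's methods additionally handle bound constraints and trust regions via compact matrix representations, which are not reproduced here.

```python
import numpy as np

# Limited-memory BFGS with the standard two-loop recursion.
def lbfgs(f, grad, x0, m=5, n_iter=200, tol=1e-10):
    x = x0.astype(float).copy()
    S, Y = [], []                                   # last m pairs (s_k, y_k)
    g = grad(x)
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        # two-loop recursion: q becomes (approx. inverse Hessian) @ g
        q, alphas = g.copy(), []
        for s, y in zip(reversed(S), reversed(Y)):
            a = (s @ q) / (y @ s)
            q -= a * y
            alphas.append(a)
        if S:
            q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])  # scaled initial Hessian
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):
            b_ = (y @ q) / (y @ s)
            q += (a - b_) * s
        d = -q                                      # search direction
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                                # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                           # keep the curvature condition
            S.append(s); Y.append(y)
            if len(S) > m:
                S.pop(0); Y.pop(0)
        x, g = x_new, g_new
    return x

# convex quadratic test: the minimizer of 0.5 x^T A x - b^T x solves A x = b
A = np.diag(np.arange(1.0, 11.0))
b = np.ones(10)
f = lambda x: 0.5 * x @ A @ x - b @ x
x_min = lbfgs(f, lambda x: A @ x - b, np.zeros(10))
```

Storing only `m` vector pairs keeps memory linear in the problem dimension, which is what makes the approach viable for the large scale problems the report targets.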
ERIC Educational Resources Information Center
MacDougall, A. F.; And Others
1990-01-01
Discussion of operational effectiveness in libraries focuses on a modeling approach that was used to compare the effectiveness of a local interlibrary loan system with using a national system, the British Library Document Supply Centre (BLDSC). Cost figures and surveys of five academic libraries are described. (six references) (LRW)
A Discourse Based Approach to the Language Documentation of Local Ecological Knowledge
ERIC Educational Resources Information Center
Odango, Emerson Lopez
2016-01-01
This paper proposes a discourse-based approach to the language documentation of local ecological knowledge (LEK). The knowledge, skills, beliefs, cultural worldviews, and ideologies that shape the way a community interacts with its environment can be examined through the discourse in which LEK emerges. 'Discourse-based' refers to two components:…
NASA Astrophysics Data System (ADS)
Xiao, C. W.; Ozpineci, A.; Oset, E.
2015-10-01
Using a coupled channel unitary approach, combining heavy quark spin symmetry and the dynamics of the local hidden gauge, we investigate the meson-meson interaction with hidden beauty. We obtain several new states of isospin I = 0: six bound states, and six more possible weakly bound states whose existence depends on the influence of coupled channel effects.
Green Function Approach to the Calculation of the Local Density of States in the Graphitic Nanocone
NASA Astrophysics Data System (ADS)
Smotlacha, Jan; Pinčák, Richard
2016-02-01
Graphene and other nanostructures are at the center of today's physics research. The local density of states of a graphitic nanocone influenced by the spin-orbit interaction was calculated. Numerical calculations and the Green function approach were used to solve this problem. In the latter case it was proven that the second-order approximation is not sufficient for this purpose.
International Students' Motivation and Learning Approach: A Comparison with Local Students
ERIC Educational Resources Information Center
Chue, Kah Loong; Nie, Youyan
2016-01-01
Psychological factors contribute to motivation and learning for international students as much as teaching strategies. 254 international students and 144 local students enrolled in a private education institute were surveyed regarding their perception of psychological needs support, their motivation and learning approach. The results from this…
Camilli, R; Bingham, B; Reddy, C M; Nelson, R K; Duryea, A N
2009-10-01
Locating areas of seafloor contamination caused by heavy oil spills is challenging, in large part because of observational limitations in aquatic subsurface environments. Accepted methods for surveying and locating sunken oil are generally slow, labor intensive and spatially imprecise. This paper describes a method to locate seafloor contamination caused by heavy oil fractions using in situ mass spectrometry and concurrent acoustic navigation. We present results of laboratory sensitivity tests and proof-of-concept evaluations conducted at the US Coast Guard OHMSETT national oil spill response test facility. Preliminary results from a robotic seafloor contamination survey conducted in deep water using the mass spectrometer and a geo-referenced acoustic navigation system are also described. Results indicate that this technological approach can accurately localize seafloor oil contamination in real-time at spatial resolutions better than a decimeter. PMID:19540535
A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.
Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi
2016-03-01
Diabetic Retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease that, by damaging the retina, ultimately results in blindness. Since microaneurysms (MAs) appear as the first sign of DR in the retina, early detection of this lesion is an essential step in automatic detection of DR. In this paper, a new MA detection method is presented. The proposed approach consists of two main steps. In the first step, MA candidates are detected by locally applying a Markov random field (MRF) model. In the second step, these candidate regions are classified to identify the correct MAs using 23 features based on the shape, intensity, and Gaussian distribution of MA intensity. The proposed method is evaluated on DIARETDB1, a standard and publicly available database in this field. Evaluation on this database yielded an average sensitivity of 0.82 for a confidence level of 75 as ground truth. The results show that our method is able to detect low-contrast MAs against the background while its performance remains comparable to other state-of-the-art approaches. PMID:26779642
Water-sanitation-hygiene mapping: an improved approach for data collection at local level.
Giné-Garriga, Ricard; de Palencia, Alejandro Jiménez-Fernández; Pérez-Foguet, Agustí
2013-10-01
Strategic planning and appropriate development and management of water and sanitation services are strongly supported by accurate and accessible data. If adequately exploited, these data might assist water managers with performance monitoring, benchmarking comparisons, policy progress evaluation, resource allocation, and decision making. A variety of tools and techniques are in place to collect such information. However, some methodological weaknesses arise when developing an instrument for routine data collection, particularly at the local level: i) comparability problems due to heterogeneity of indicators, ii) poor reliability of collected data, iii) inadequate combination of different information sources, and iv) statistical validity of produced estimates when disaggregated into small geographic subareas. This study proposes an improved approach for water, sanitation and hygiene (WASH) data collection at the decentralised level in low-income settings, as an attempt to overcome previous shortcomings. The ultimate aim is to provide local policymakers with strong evidence to inform their planning decisions. The survey design takes Water Point Mapping (WPM) as a starting point to record all available water sources at a particular location. This information is then linked to data produced by a household survey. Different survey instruments are implemented to collect reliable data by employing a variety of techniques, such as structured questionnaires, direct observation and water quality testing. The collected data are finally validated through simple statistical analysis, which in turn produces valuable outputs that might feed into the decision-making process. In order to demonstrate the applicability of the method, outcomes produced from three case studies (Homa Bay District, Kenya; Kibondo District, Tanzania; and the Municipality of Manhiça, Mozambique) are presented. PMID:23850660
IMMUNOTOXICOLOGICAL INVESTIGATIONS IN THE MOUSE: GENERAL APPROACH AND METHODS
The adverse effects of chemicals on the lymphoreticular system have generated considerable toxicological interest. In this series of papers, the effects of selected environmentally relevant compounds are reported. The first paper describes the methods and general approach used in ...
NASA Astrophysics Data System (ADS)
Wang, Huaguo; Chen, Xiaosong; Jawitz, James W.
2008-11-01
Five locally-calibrated light transmission visualization (LTV) methods were tested to quantify nonaqueous phase liquid (NAPL) mass and mass reduction in porous media. Tetrachloroethylene (PCE) was released into a two-dimensional laboratory flow chamber packed with water-saturated sand, which was then flushed with a surfactant solution (2% Tween 80) until all of the PCE had been dissolved. In all the LTV methods employed here, the water phase was dyed, rather than the more common approach of dyeing the NAPL phase, such that the light absorption characteristics of the NAPL did not change as dissolution progressed. Also, none of the methods used here required external calibration chambers. The five visualization approaches evaluated included three methods developed from previously published models, a binary method, and a novel multiple-wavelength method that has the advantage of not requiring any assumptions about the intra-pore interface structure between the various phases (sand/water/NAPL). The new multiple-wavelength method is also expected to be applicable to any translucent porous media containing two immiscible fluids (e.g., water-air, water-NAPL). Results from the sand-water-PCE system evaluated here showed that the model that assumes wetting media of uniform pore size (Model C of Niemet and Selker, 2001) and the multiple-wavelength model with no interface-structure assumptions were able to accurately quantify PCE mass reduction during surfactant flushing. The average mass recoveries from these two imaging methods were greater than 95% for domain-average NAPL saturations of approximately 2.6 × 10^-2, and were approximately 90% during seven cycles of surfactant flushing that sequentially reduced the average NAPL saturation to 7.5 × 10^-4.
NASA Astrophysics Data System (ADS)
Brockt, C.; Dorfner, F.; Vidmar, L.; Heidrich-Meisner, F.; Jeckelmann, E.
2015-12-01
We present a method for simulating the time evolution of one-dimensional correlated electron-phonon systems which combines the time-evolving block decimation algorithm with a dynamical optimization of the local basis. This approach can reduce the computational cost by orders of magnitude when boson fluctuations are large. The method is demonstrated on the nonequilibrium Holstein polaron by comparison with exact simulations in a limited functional space and on the scattering of an electronic wave packet by local phonon modes. Our study of the scattering problem reveals a rich physics including transient self-trapping and dissipation.
An integrated lean-methods approach to hospital facilities redesign.
Nicholas, John
2012-01-01
Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach. PMID:22671435
Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration.
Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu
2016-01-01
One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20-200 µg/mL were measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
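The local-calibration idea, selecting the library spectra most similar to the unknown sample before fitting a model, can be sketched as follows. Plain Pearson correlation stands in for the synthetic degree of grey relation coefficient used by the authors, and the function name is hypothetical:

```python
def select_local_set(target, library, k):
    """Rank calibration spectra by similarity to the target spectrum and
    return the indices of the k most similar ones. Pearson correlation is
    used as the similarity score; spectra are assumed to have non-zero
    variance."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb)
    ranked = sorted(range(len(library)),
                    key=lambda i: corr(target, library[i]), reverse=True)
    return ranked[:k]
```

The selected subset would then feed a locally fitted regression model (partial least squares in the paper) instead of a single global calibration.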
Real Space DFT by Locally Optimal Block Preconditioned Conjugate Gradient Method
NASA Astrophysics Data System (ADS)
Michaud, Vincent; Guo, Hong
2012-02-01
Real space approaches solve the Kohn-Sham (KS) DFT problem as a system of partial differential equations (PDE) on real-space numerical grids. In such techniques, the Hamiltonian matrix is typically much larger but sparser than the matrix arising in state-of-the-art DFT codes, which are often based on directly minimizing the total energy functional. Evidence of good performance of real space methods - by Chebyshev filtered subspace iteration (CFSI) - was reported by Zhou, Saad, Tiago and Chelikowsky [1]. We found that the performance of the locally optimal block preconditioned conjugate gradient method (LOBPCG) introduced by Knyazev [2], when used in conjunction with CFSI, generally exceeds that of CFSI for solving the KS equations. We will present our implementation of the LOBPCG-based real space electronic structure calculator. [4pt] [1] Y. Zhou, Y. Saad, M. L. Tiago, and J. R. Chelikowsky, ``Self-consistent-field calculations using Chebyshev-filtered subspace iteration,'' J. Comput. Phys., vol. 219, pp. 172-184, November 2006. [0pt] [2] A. V. Knyazev, ``Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method,'' SIAM J. Sci. Comput., vol. 23, pp. 517-541, 2001.
NASA Astrophysics Data System (ADS)
Schnitzer, Ory
2015-12-01
Metallic nanostructures characterized by multiple geometric length scales support low-frequency surface-plasmon modes, which enable strong light localization and field enhancement. We suggest studying such configurations using singular perturbation methods and demonstrate the efficacy of this approach by considering, in the quasistatic limit, a pair of nearly touching metallic nanospheres subjected to an incident electromagnetic wave polarized with the electric field along the line of sphere centers. Rather than attempting an exact analytical solution, we construct the pertinent (longitudinal) eigenmodes by matching relatively simple asymptotic expansions valid in overlapping spatial domains. We thereby arrive at an effective boundary eigenvalue problem in a half space representing the metal region in the vicinity of the gap. Coupling with the gap field gives rise to a mixed-type boundary condition with varying coefficients, whereas coupling with the particle-scale field enters through an integral eigenvalue selection rule involving the electrostatic capacitance of the configuration. By solving the reduced problem we obtain accurate closed-form expressions for the resonance values of the metal dielectric function. Furthermore, together with an energy-like integral relation, the latter eigensolutions also yield closed-form approximations for the induced dipole moment and gap-field enhancement at resonance. We demonstrate agreement between the asymptotic formulas and a seminumerical computation. The analysis, underpinned by asymptotic scaling arguments, elucidates how metal polarization together with geometrical confinement enables a strong plasmon-frequency redshift and an amplified near field at resonance.
Method of preliminary localization of the iris in biometric access control systems
NASA Astrophysics Data System (ADS)
Minacova, N.; Petrov, I.
2015-10-01
This paper presents a method for preliminary localization of the iris based on stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.
NASA Astrophysics Data System (ADS)
Eichstädt, S.; Schmähling, F.; Wübbeler, G.; Anhalt, K.; Bünger, L.; Krüger, U.; Elster, C.
2013-04-01
Bandpass correction in spectrometer measurements using monochromators is often necessary in order to obtain accurate measurement results. The classical approach of spectrometer bandpass correction is based on local polynomial approximations and the use of finite differences. Here we compare this approach with an extension of the Richardson-Lucy method, which is well known in image processing, but has not been applied to spectrum bandpass correction yet. Using an extensive simulation study and a practical example, we demonstrate the potential of the Richardson-Lucy method. In contrast to the classical approach, it is robust with respect to wavelength step size and measurement noise. In almost all cases the Richardson-Lucy method turns out to be superior to the classical approach both in terms of spectrum estimate and its associated uncertainties.
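For a one-dimensional spectrum, the Richardson-Lucy scheme amounts to repeated forward convolution of the estimate with the bandpass function and a multiplicative correction by the mirrored kernel. A minimal sketch under the assumption of a normalized bandpass kernel sampled on the same wavelength grid as the measurement (function names are ours, not the paper's implementation):

```python
def richardson_lucy(measured, kernel, iterations=50):
    """Richardson-Lucy estimate of the underlying spectrum from a
    bandpass-smeared measurement. `kernel` is the normalized bandpass
    function; boundaries are handled by simple truncation."""
    def convolve(x, h):
        n, m = len(x), len(h)
        half = m // 2
        out = []
        for i in range(n):
            s = 0.0
            for j in range(m):
                k = i + j - half
                if 0 <= k < n:
                    s += x[k] * h[j]
            out.append(s)
        return out

    estimate = [max(v, 1e-12) for v in measured]  # positive initial guess
    mirrored = kernel[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, kernel)
        ratio = [m_ / max(b, 1e-12) for m_, b in zip(measured, blurred)]
        correction = convolve(ratio, mirrored)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

The multiplicative update keeps the estimate non-negative, which is one reason the method is robust to noise compared with finite-difference bandpass correction.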
ERIC Educational Resources Information Center
Penhoat, Loick; Sakow, Kostia
1978-01-01
A description of the development and implementation of a method introduced in the Sudan that attempts to relate to Sudanese culture and to motivate students. The relationship between language teaching methods and the total educational system is discussed. (AMH)
Practical approaches for assessing local land use change and conservation priorities in the tropics
NASA Astrophysics Data System (ADS)
Rivas, Cassandra J.
Tropical areas typically support high biological diversity; however, many are experiencing rapid land-use change. The resulting loss, fragmentation, and degradation of habitats place biodiversity at risk. For these reasons, the tropics are frequently identified as global conservation hotspots. Safeguarding tropical biodiversity necessitates successful and efficient conservation planning and implementation at local scales, where land use decisions are made and enforced. Yet, despite considerable agreement on the need for improved practices, planning may be difficult due to limited resources, such as funding, data, and expertise, especially for small conservation organizations in tropical developing countries. My thesis aims to assist small non-governmental organizations (NGOs) operating in tropical developing countries in overcoming resource limitations by providing recommendations for improved conservation planning. Following a brief introduction in Chapter 1, I present a literature review of systematic conservation planning (SCP) projects in the developing tropics. Although SCP is considered an efficient, effective approach, it requires substantial data and expertise to conduct the analysis and may present challenges for implementation. I reviewed and synthesized the methods and results of 14 case studies to identify practical ways to implement SCP and to overcome its limitations. I found that SCP studies in the peer-reviewed literature were primarily implemented by researchers in large organizations or institutions, as opposed to on-the-ground conservation planners. A variety of data types were used in the SCP analyses, many of which are freely available. Few case studies involved stakeholders or intended to implement the assessment; instead, the case studies were carried out in the context of research and development, limiting local involvement and implementation. Nonetheless, the studies provided valuable strategies for employing each step of
Total System Performance Assessment - License Application Methods and Approach
J. McNeish
2003-12-08
''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.
Total System Performance Assessment-License Application Methods and Approach
J. McNeish
2002-09-13
''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.
Local Correlation Calculations Using Standard and Renormalized Coupled-Cluster Methods
NASA Astrophysics Data System (ADS)
Li, Wei; Piecuch, Piotr; Gour, Jeffrey R.
2009-03-01
This article discusses our recent effort toward the extension of the linear scaling local correlation approach, termed 'cluster-in-molecule' and abbreviated as CIM [S. Li, J. Ma, and Y. Jiang, J. Comput. Chem. 23, 237 (2002); S. Li, J. Shen, W. Li, and Y. Jiang, J. Chem. Phys. 125, 074109 (2006)], to the coupled-cluster (CC) theory with singles and doubles (CCSD) and CC methods with singles, doubles, and non-iterative triples, including the standard CCSD(T) approach and the completely renormalized CR-CC(2,3) scheme [P. Piecuch and M. Włoch, J. Chem. Phys. 123, 224105 (2005); P. Piecuch, M. Włoch, J. R. Gour, and A. Kinal, Chem. Phys. Lett. 418, 467 (2006)]. As in the earlier CIM work that dealt with the second-order many-body perturbation theory and CC doubles approach, the main idea of the CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) methods is the realization of the fact that the total correlation energy of a large system can be obtained as a sum of contributions from the occupied orthonormal localized molecular orbitals and their respective occupied and unoccupied orbital domains. The CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) methods pursued in this work are characterized by high computational efficiency in both the CIM and CC parts, enabling calculations for much larger systems than previously possible. This is achieved by combining the natural linear scaling and embarrassing parallelism of the CIM ansatz with the vectorized CC codes that rely on recursively generated intermediates and fast matrix multiplication routines. By comparing the results of the canonical and CIM-CC calculations for normal alkanes and water clusters, it is demonstrated that the CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) approaches recover the corresponding canonical CC correlation energies to within 0.1% or so, while offering linear scaling of the computer costs with the system size and savings in the computer effort by orders of magnitude. By examining the dissociation of dodecane into C
Grid-Search Location Methods for Ground-Truth Collection from Local and Regional Seismic Networks
Schultz, C A; Rodi, W; Myers, S C
2003-07-24
The objective of this project is to develop improved seismic event location techniques that can be used to generate more and better-quality reference events using data from local and regional seismic networks. Their approach is to extend existing methods of multiple-event location with more general models of the errors affecting seismic arrival time data, including picking errors and errors in model-based travel times (path corrections). Toward this end, they are integrating a grid-search-based algorithm for multiple-event location (GMEL) with a new parameterization of travel-time corrections and a new kriging method for estimating the correction parameters from observed travel-time residuals. Like several other multiple-event location algorithms, GMEL currently assumes event-independent path corrections and is thus restricted to small event clusters. The new parameterization assumes that travel-time corrections are a function of both the event and station location, and builds in source-receiver reciprocity and correlation between the corrections from proximate paths as constraints. The new kriging method simultaneously interpolates travel-time residuals from multiple stations and events to estimate the correction parameters as functions of position. They are currently developing the algorithmic extensions to GMEL needed to combine the new parameterization and kriging method with the simultaneous location of events. The result will be a multiple-event location method applicable to non-clustered, spatially well-distributed events. They are applying the existing components of the new multiple-event location method to a data set of regional and local arrival times from Nevada Test Site (NTS) explosions with known origin parameters. Preliminary results show the feasibility and potential benefits of combining the location and kriging techniques. They also show some preliminary work on generalizing the error model used in GMEL with the use of mixture
Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F.
2015-02-28
The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)methyl)anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH2-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using the configuration interaction singles method. The implemented approach shows a wide variety of different behaviors amongst the excited electronic states.
Shekhar, S; Cambi, A; Figdor, C G; Subramaniam, V; Kanger, J S
2012-08-01
Because both the chemical and mechanical properties of living cells play crucial functional roles, there is a strong need for biophysical methods to address these properties simultaneously. Here we present a novel (to our knowledge) approach to measure local intracellular micromechanical and chemical properties using a hybrid magnetic chemical biosensor. We coupled a fluorescent dye, which serves as a chemical sensor, to a magnetic particle that is used for measurement of the viscoelastic environment by studying the response of the particle to magnetic force pulses. As a demonstration of the potential of this approach, we applied the method to study the process of phagocytosis, wherein cytoskeletal reorganization occurs in parallel with acidification of the phagosome. During this process, we measured the shear modulus and viscosity of the phagosomal environment concurrently with the phagosomal pH. We found that it is possible to manipulate phagocytosis by stalling the centripetal movement of the phagosome using magnetic force. Our results suggest that preventing centripetal phagosomal transport delays the onset of acidification. To our knowledge, this is the first report of manipulation of intracellular phagosomal transport without interfering with the underlying motor proteins or cytoskeletal network through biochemical methods. PMID:22947855
Localized surface plasmon resonance mercury detection system and methods
James, Jay; Lucas, Donald; Crosby, Jeffrey Scott; Koshland, Catherine P.
2016-03-22
A mercury detection system that includes a flow cell having a mercury sensor, a light source and a light detector is provided. The mercury sensor includes a transparent substrate and a submonolayer of mercury absorbing nanoparticles, e.g., gold nanoparticles, on a surface of the substrate. Methods of determining whether mercury is present in a sample using the mercury sensors are also provided. The subject mercury detection systems and methods find use in a variety of different applications, including mercury detecting applications.
Joint motion model for local stereo video-matching method
NASA Astrophysics Data System (ADS)
Zhang, Jinglin; Bai, Cong; Nezan, Jean-Francois; Cousin, Jean-Gabriel
2015-12-01
As one branch of stereo matching, video stereo matching is becoming more and more significant in computer vision applications. Conventional stereo matching methods for static images cause flickering frames and poor matching results when applied to video. We propose a joint motion-based square step (JMSS) method for stereo video matching. The motion vector is introduced as one component in building the support region for raw cost aggregation. We then aggregate the raw cost along two directions in the support region. Finally, a winner-take-all strategy determines the best disparity under our hypothesis. Experimental results show that the JMSS method not only outperforms other state-of-the-art stereo matching methods on test sequences with abundant movement, but also performs well in real-world scenes with fixed and moving stereo cameras, respectively, in particular under some extreme conditions of real stereo vision. Additionally, the proposed JMSS method can be implemented in real time, which is superior to other state-of-the-art methods. Time efficiency was also a very important consideration in our algorithm design.
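For a single frame pair, the cost-aggregation and winner-take-all stages reduce to the classic local scheme sketched below. The motion-vector component that distinguishes JMSS is omitted here, and the function name is hypothetical:

```python
def wta_disparity(left, right, max_disp, half_win=1):
    """Local stereo matching on grayscale images (lists of rows):
    absolute-difference raw cost aggregated over a square support
    region, then winner-take-all disparity selection per pixel."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best_cost, best_d = float('inf'), 0
            for d in range(min(max_disp, x) + 1):
                cost = 0.0
                for dy in range(-half_win, half_win + 1):
                    for dx in range(-half_win, half_win + 1):
                        yy, xl, xr = y + dy, x + dx, x + dx - d
                        # truncate the window at image borders
                        if 0 <= yy < h and 0 <= xl < w and 0 <= xr < w:
                            cost += abs(left[yy][xl] - right[yy][xr])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

In the JMSS method the support region would additionally be shaped by the per-pixel motion vector, which is what suppresses flicker between consecutive frames.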
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
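The baseline that the improved TSM builds on is the classical Otsu method, which picks the gray level maximizing the between-class variance of the image histogram. A plain-Python sketch of that traditional step (not the authors' improved variant; the function name is ours):

```python
def otsu_threshold(pixels, levels=256):
    """Classical Otsu thresholding: return the gray level t that
    maximizes the between-class variance of background (<= t) versus
    foreground (> t)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0       # background pixel count so far
    sum_b = 0.0   # background intensity sum so far
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                     # background mean
        m_f = (total_sum - sum_b) / w_f       # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal histogram, such as a bright sonar return over a dark background, the selected threshold falls between the two intensity clusters.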
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with other four TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions, than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
Keasar, Chen; Levitt, Michael
2009-01-01
We suggest a new approach to the generation of candidate structures (decoys) for ab initio prediction of protein structures. Our method is based on random sampling of conformation space and subsequent local energy minimization. At the core of this approach lies the design of a novel type of energy function. This energy function has local minima with native structure characteristics and wide basins of attraction. The current work presents our motivation for deriving such an energy function and also tests the derived energy function. Our approach is novel in that it takes advantage of the inherently rough energy landscape of proteins, which is generally considered a major obstacle for protein structure prediction. When local minima have wide basins of attraction, the protein’s conformation space can be greatly reduced by the convergence of large regions of the space into single points, namely the local minima corresponding to these funnels. We have implemented this concept by an iterative process. The potential is first used to generate decoy sets and then we study these sets of decoys to guide further development of the potential. A key feature of our potential is the use of cooperative multi-body interactions that mimic the role of the entropic and solvent contributions to the free energy. The validity and value of our approach is demonstrated by applying it to 14 diverse, small proteins. We show that, for these proteins, the size of conformation space is considerably reduced by the new energy function. In fact, the reduction is so substantial as to allow efficient conformational sampling. As a result we are able to find a significant number of near-native conformations in random searches performed with limited computational resources. PMID:12742025
Lotterhos, Katie E; Whitlock, Michael C
2015-03-01
Although genome scans have become a popular approach towards understanding the genetic basis of local adaptation, the field still does not have a firm grasp on how sampling design and demographic history affect the performance of genome scans on complex landscapes. To explore these issues, we compared 20 different sampling designs in equilibrium (i.e. island model and isolation by distance) and nonequilibrium (i.e. range expansion from one or two refugia) demographic histories in spatially heterogeneous environments. We simulated spatially complex landscapes, which allowed us to exploit local maxima and minima in the environment in 'pair' and 'transect' sampling strategies. We compared F(ST) outlier and genetic-environment association (GEA) methods for each of two approaches that control for population structure: with a covariance matrix or with latent factors. We show that while the relative power of two methods in the same category (F(ST) or GEA) depended largely on the number of individuals sampled, overall GEA tests had higher power in the island model and F(ST) had higher power under isolation by distance. In the refugia models, however, these methods varied in their power to detect local adaptation at weakly selected loci. At weakly selected loci, paired sampling designs had equal or higher power than transect or random designs to detect local adaptation. Our results can inform sampling designs for studies of local adaptation and have important implications for the interpretation of genome scans based on landscape data. PMID:25648189
NASA Astrophysics Data System (ADS)
Obuchowski, Jakub; Wyłomańska, Agnieszka; Zimroz, Radosław
2014-06-01
In this paper a new method of fault detection in rotating machinery is presented. It is based on vibration time series analysis in the time-frequency domain. A raw vibration signal is decomposed via the short-time Fourier transform (STFT). The time-frequency map is treated as an M×N matrix, i.e., N sub-signals of length M. Each sub-signal is considered as a time series and might be interpreted as the energy variation in a narrow frequency bin. Each sub-signal is processed using a novel approach called the local maxima method. Basically, we search for local maxima because they should appear in the signal if local damage in bearings or a gearbox exists. Finally, information from all sub-signals is combined in order to validate the impulsive behavior of the energy. Due to the random character of the obtained time series, each maximum occurrence has to be checked for its significance. If there are time points for which the average number of local maxima over all sub-signals is significantly higher than for the other time instances, then the location of these maxima is “weighted” as more important (at such a time instance the local maxima create a pattern on the time-frequency map across a set of frequency bins Δf). This information, called the vector of weights, is used for enhancement of the spectrogram. When the vector of weights is applied to the spectrogram, non-informative energy is suppressed while informative features are enhanced. If the distribution of local maxima on the spectrogram creates a pattern of wide-band cyclic energy growth, the machine is suspected of being damaged. For the healthy condition, the vector of the average number of maxima for each time point should not have outliers; aggregation of information from all sub-signals is rather random and does not create any pattern. The method is illustrated by the analysis of very noisy real and simulated signals.
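The core of the local maxima method, counting per-bin local maxima along time and turning the counts into a weight vector that enhances impulsive instants, can be sketched as follows. This is a minimal NumPy illustration (window length, hop, and the significance test are simplified assumptions, not the paper's exact settings):

```python
import numpy as np

def local_maxima_weights(x, win=64, hop=32):
    """Build a magnitude time-frequency map, count local maxima along
    time in each narrow frequency bin, and form a weight vector that
    emphasizes instants where many bins peak simultaneously."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win, hop)]
    S = np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)
    # a point is a local maximum if it exceeds both time neighbours
    is_max = (S[:, 1:-1] > S[:, :-2]) & (S[:, 1:-1] > S[:, 2:])
    counts = is_max.sum(axis=0).astype(float)            # maxima per instant
    weights = np.zeros(S.shape[1])
    weights[1:-1] = counts / counts.max()
    return S, weights

# Noisy signal with a periodic impulsive "fault" component
rng = np.random.default_rng(1)
sig = rng.normal(0, 1, 4096)
sig[::512] += 10.0                 # simulated impacts
S, w = local_maxima_weights(sig)
enhanced = S * w                   # weighted (enhanced) spectrogram
```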
Localization of buildings in airborne forward-looking infrared image using template matching method
NASA Astrophysics Data System (ADS)
Qin, Yueming; Cao, Zhiguo; Li, Hansong; Wang, Xiaojing
2013-03-01
This paper proposes a new approach to localizing buildings in forward-looking infrared (FLIR) images. The proposed approach can localize not only large buildings but also small ones, and it is robust to FLIR images degraded by clouds. This performance is due to the following improvements: (1) the Histogram of Oriented Gradients approach is improved to match FLIR images with our templates; (2) a new kind of feature image is presented to reduce the difference between template and target; (3) we project 3D building models into images, with different colors on different sides, to tell those sides apart; (4) we generate templates which contain all buildings in the visual field. As a result, the FLIR images can be matched with the large templates at a high success rate, and the target buildings can then be localized. The experimental results show the superior performance of the proposed approach.
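The abstract's HOG-based matcher is not specified in detail; as a simpler stand-in for template-based localization, plain normalized cross-correlation (NCC) illustrates the sliding-window matching step. All names and values below are illustrative, not from the paper:

```python
import numpy as np

def ncc_locate(image, template):
    """Locate a template in an image by exhaustive normalized
    cross-correlation (a simple stand-in for HOG-based matching)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tn
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(2)
img = rng.normal(0, 1, (40, 40))
tmpl = rng.normal(0, 1, (8, 8))
img[12:20, 17:25] = tmpl          # plant the "building" template
pos, score = ncc_locate(img, tmpl)
```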
Development of a spatial method for weed detection and localization
NASA Astrophysics Data System (ADS)
Vioix, Jean-Baptiste; Douzals, Jean-Paul; Truchetet, Frédéric
2004-02-01
This paper presents an algorithm specifically developed for filtering low frequency signals. The application is related to weed detection into aerial images where crop lines are detected as repetitive structures. Theoretical bases of this work are presented first. Then, two methods are compared to select low frequency signals and their limitations are described. A decomposition based on wavelet packet is used to combine advantages of both methods. This algorithm allows a high selectivity of low frequency signals with an interesting computation time. At last, a complete algorithm for weed/crop classification is explained and a few results are shown.
A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps
Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun
2014-01-01
In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
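CLDP is described as a layered descriptor in the Local Binary Pattern family; the abstract gives no formulas, but the basic 8-neighbour LBP code that this family builds on can be sketched as below. This is a generic illustration, not the authors' CLDP (which adds derivative layers and Gabor preprocessing):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel gets
    an 8-bit code, one bit per neighbour that is >= the center value."""
    c = image[1:-1, 1:-1]                      # interior (center) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offsets):
        neigh = image[1 + di:image.shape[0] - 1 + di,
                      1 + dj:image.shape[1] - 1 + dj]
        code |= ((neigh >= c).astype(np.uint8) << bit)
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
codes = lbp_8(img)   # single interior pixel -> one 8-bit code
```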
Approaches to local connectivity in autism using resting state functional connectivity MRI.
Maximo, Jose O; Keown, Christopher L; Nair, Aarti; Müller, Ralph-Axel
2013-01-01
While the literature on aberrant long-distance connectivity in autism spectrum disorder (ASD) has grown fast over the past decade, little is known about local connectivity. We used regional homogeneity and local density approaches at different spatial scales to examine local connectivity in 29 children and adolescents with ASD and 29 matched typically developing participants, using resting state functional magnetic resonance imaging data. Across a total of 12 analysis pipelines, the gross pattern of between-group findings was overall stable, with local overconnectivity in the ASD group in occipital and posterior temporal regions and underconnectivity in middle/posterior cingulate, and medial prefrontal regions. This general pattern was confirmed in secondary analyses for low-motion subsamples (n = 20 per group), in which time series segments with >0.25 mm head motion were censored, as well as in an analysis including global signal regression. Local overconnectivity in visual regions appears consistent with preference for local over global visual processing previously reported in ASD, whereas cingulate and medial frontal underconnectivity may relate to aberrant function within the default mode network. PMID:24155702
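Regional homogeneity is conventionally computed as Kendall's coefficient of concordance (KCC) over the time series of a voxel and its neighbours. A minimal sketch of that statistic follows (generic formula on synthetic series, not the study's exact pipeline or neighbourhood definition):

```python
import numpy as np

def kendalls_w(ts):
    """Kendall's coefficient of concordance across a set of time series
    (rows = voxels, cols = time points); 1 = perfect agreement."""
    k, n = ts.shape
    ranks = ts.argsort(axis=1).argsort(axis=1) + 1.0   # rank each series
    R = ranks.sum(axis=0)                              # summed ranks per time
    S = ((R - R.mean()) ** 2).sum()
    return 12.0 * S / (k ** 2 * (n ** 3 - n))

rng = np.random.default_rng(6)
base = rng.normal(0, 1, 100)
# seven "neighbouring voxels" sharing a signal vs. pure noise
coherent = np.stack([base + rng.normal(0, 0.1, 100) for _ in range(7)])
incoherent = rng.normal(0, 1, (7, 100))
w_hi = kendalls_w(coherent)    # high local connectivity
w_lo = kendalls_w(incoherent)  # low local connectivity
```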
Scheffer, Hester J; Melenhorst, Marleen C A M; Vogel, Jantien A; van Tilborg, Aukje A J M; Nielsen, Karin; Kazemier, Geert; Meijerink, Martijn R
2015-06-01
Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to vicinity of the tumor to collateral vessels and duodenum, the dorsal approach was chosen. Under CT-guidance, six electrodes were advanced in the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case describes that when ventral electrode placement for pancreatic IRE is impaired, the dorsal approach could be considered alternatively. PMID:25288173
Development of acoustic sniper localization methods and models
NASA Astrophysics Data System (ADS)
Grasing, David; Ellwood, Benjamin
2010-04-01
A novel method capable of providing situational awareness of small-arms sniper fire is presented. Situational Awareness (SA) information is extracted by exploiting two distinct sounds created by a small-arms discharge: the muzzle blast (created when the bullet leaves the barrel of the gun) and the shockwave (the sound created by a supersonic bullet). The direction of arrival associated with the muzzle blast always points toward the shooter. Range can be estimated from the muzzle blast alone; however, at greater distances geometric dilution of precision makes obtaining accurate range estimates difficult. To address this issue, additional information obtained from the shockwave is utilized in order to estimate the range to the shooter. The focus of the paper is the development of a shockwave propagation model, the development of ballistics models (based on empirical measurements), and their subsequent application to methods of determining shooter position. Knowledge of the round's ballistics is required to estimate the range to the shooter. Many existing methods rely on extracting information from the shockwave in an attempt to identify the round type and thus the ballistic model to use ([1]). It has been our experience that this information becomes unreliable at greater distances or in high-noise environments. Our method differs from existing solutions in that classification of the round type is not required, making the proposed solution more robust. Additionally, we demonstrate that sufficient accuracy can be achieved without the need to classify the round.
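A heavily simplified version of the two-sound range idea: if the shot is fired nearly at the sensor, the shockwave arrives roughly after R/v and the muzzle blast after R/c, so their time difference gives the range. This collinear, constant-bullet-speed model is a crude stand-in for the paper's full shockwave and ballistics models, and the speeds below are assumed values:

```python
def range_from_time_difference(dt, c=343.0, v=800.0):
    """Estimate range R (m) from dt, the muzzle-blast arrival time minus
    the shockwave arrival time (s), assuming the shot is aimed nearly at
    the sensor. c: speed of sound (m/s); v: assumed average bullet speed
    (m/s). Since dt = R/c - R/v, R = dt / (1/c - 1/v)."""
    return dt / (1.0 / c - 1.0 / v)

R = range_from_time_difference(0.5)   # half-second gap between sounds
```

In practice the Mach-cone geometry and the bullet's deceleration make the true relationship more complicated, which is exactly why the paper develops empirical ballistics models.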
Gaussian Process Regression Plus Method for Localization Reliability Improvement.
Liu, Kehan; Meng, Zhaopeng; Own, Chung-Ming
2016-01-01
Location data are among the most widely used context data in context-aware and ubiquitous computing applications. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The most common methods rely on the received signal strength from a set of signal-transmitting access points. However, manually compiling a measured Received Signal Strength (RSS) fingerprint database involves high costs and thus is impractical in an online prediction environment. The system used in this study relied on the Gaussian process method, which is a nonparametric model that can be characterized completely by its mean function and covariance matrix. In addition, the Naive Bayes method was used to verify and simplify the computation of precise predictions. The authors conducted several experiments in simulated and real environments at Tianjin University. The experiments examined distinct data sizes, different kernels, and accuracy. The results showed that the proposed method not only retains positioning accuracy but also saves computation time in location predictions. PMID:27483276
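The Gaussian process regression at the heart of such a system can be sketched generically: an RBF kernel, a noise term, and closed-form predictive mean and variance. The toy RSS-to-position data below are illustrative assumptions, not the authors' model or measurements:

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=0.1):
    """Gaussian process regression with an RBF kernel: predictive mean
    and variance at test inputs Xs given training pairs (X, y)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))   # noisy training covariance
    Ks = k(Xs, X)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Toy fingerprinting: map a scaled 1-D RSS reading to a 1-D position
X = np.array([[-6.0], [-7.0], [-8.0]])   # RSS / 10 at known spots
y = np.array([1.0, 2.0, 3.0])            # positions (m)
mean, var = gp_predict(X, y, np.array([[-6.5]]))
```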
Methods for quantitative determination of drug localized in the skin.
Touitou, E; Meidan, V M; Horwitz, E
1998-12-01
The quantification of drugs within the skin is essential for topical and transdermal delivery research. Over the last two decades, horizontal sectioning, consisting of both tape stripping and parallel slicing through the deeper tissues has constituted the traditional investigative technique. In recent years, this methodology has been augmented by such procedures as heat separation, qualitative autoradiography, isolation of the pilosebaceous units and the use of induced follicle-free skin. The development of skin quantitative autoradiography represents an entirely novel approach which permits quantification and visualization of the penetrant throughout a vertical cross-section of skin. Noninvasive strategies involve the application of optical measuring systems such as attenuated total reflectance Fourier transform infrared, fluorescence, remittance or photothermal spectroscopies. PMID:9801425
Multidimensional Programming Methods for Energy Facility Siting: Alternative Approaches
NASA Technical Reports Server (NTRS)
Solomon, B. D.; Haynes, K. E.
1982-01-01
The use of multidimensional optimization methods in solving power plant siting problems, which are characterized by several conflicting, noncommensurable objectives is addressed. After a discussion of data requirements and exclusionary site screening methods for bounding the decision space, classes of multiobjective and goal programming models are discussed in the context of finite site selection. Advantages and limitations of these approaches are highlighted and the linkage of multidimensional methods with the subjective, behavioral components of the power plant siting process is emphasized.
A noninvasive method to estimate pulse wave velocity in arteries locally by means of ultrasound.
Brands, P J; Willigers, J M; Ledoux, L A; Reneman, R S; Hoeks, A P
1998-11-01
Noninvasive evaluation of vessel wall properties in humans is hampered by the absence of methods to assess directly local distensibility, compliance, and Young's modulus. Contemporary ultrasound methods are capable of assessing end-diastolic artery diameter, the local change in artery diameter as a function of time, and local wall thickness. However, to assess vessel wall properties of the carotid artery, for example, the pulse pressure in the brachial artery still must be used as a substitute for local pulse pressure. The assessment of local pulse wave velocity as described in the present article provides a direct estimate of local vessel wall properties (distensibility, compliance, and Young's modulus) and, in combination with the relative change in artery cross-sectional area, an estimate of the local pulse pressure. The local pulse wave velocity is obtained by processing radio frequency ultrasound signals acquired simultaneously along two M-lines spaced at a known distance along the artery. A full derivation and mathematical description of the method to assess local pulse wave velocity, using the temporal and longitudinal gradients of the change in diameter, are presented. A performance evaluation of the method was carried out by means of experiments in an elastic tube under pulsatile pressure conditions. It is concluded that, in a phantom set-up, the assessed local pulse wave velocity provides reliable estimates for local distensibility. PMID:10385955
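The two-M-line idea reduces, in its simplest form, to measuring the transit time of the distension waveform between two sites a known distance apart: PWV = spacing / delay. The sketch below estimates the delay by cross-correlation on synthetic waveforms; all numbers are illustrative, not the paper's gradient-based derivation or data:

```python
import numpy as np

fs = 2000.0                   # sample rate (Hz), assumed
dx = 0.02                     # M-line spacing (m), assumed
t = np.arange(0, 1.0, 1 / fs)
wave = np.exp(-((t - 0.3) / 0.02) ** 2)   # distension pulse at site 1
delay_samples = 8                          # true transit time = 4 ms
wave2 = np.roll(wave, delay_samples)       # same pulse observed at site 2

# Estimate the transit time by cross-correlation, then PWV = dx / dt
lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(wave2, wave, mode="full")
dt_est = lags[np.argmax(xc)] / fs
pwv = dx / dt_est                          # pulse wave velocity (m/s)
```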
Local search methods based on variable focusing for random K-satisfiability.
Lemoy, Rémi; Alava, Mikko; Aurell, Erik
2015-01-01
We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly or by introducing a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferably picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed. PMID:25679737
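Variable-focused local search can be sketched in a WalkSAT-style form: pick a random unsatisfied clause, then a variable inside it, flipping either at random (with some noise probability) or greedily. This simplifies the paper's Metropolis acceptance rule to a greedy/noise choice, so it is an illustration of the focusing idea rather than V-FMS itself:

```python
import random

def variable_focused_search(clauses, n_vars, noise=0.3,
                            max_flips=100_000, seed=0):
    """Flip variables drawn from unsatisfied clauses until all clauses
    are satisfied. Literal v>0 means x_v, v<0 means NOT x_v."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars)]

    def sat(lit):
        return assign[abs(lit) - 1] == (lit > 0)

    def unsat_clauses():
        return [c for c in clauses if not any(sat(l) for l in c)]

    for _ in range(max_flips):
        unsat = unsat_clauses()
        if not unsat:
            return assign                      # satisfying assignment
        clause = rng.choice(unsat)
        if rng.random() < noise:               # noisy move: random variable
            var = abs(rng.choice(clause))
        else:                                  # greedy move: best variable
            def cost(v):
                assign[v - 1] = not assign[v - 1]
                e = len(unsat_clauses())
                assign[v - 1] = not assign[v - 1]
                return e
            var = min((abs(l) for l in clause), key=cost)
        assign[var - 1] = not assign[var - 1]
    return None

# Tiny 3-SAT instance: (x1|x2|x3) & (!x1|x2|!x3) & (x1|!x2|x3)
cnf = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
model = variable_focused_search(cnf, 3)
```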
Le Cam, Steven; Caune, Vairis; Ranta, Radu; Korats, Gundars; Louis-Dorr, Valerie
2015-08-01
The brain source localization problem has been extensively studied in the past years, yielding a large panel of methodologies, each bringing their own strengths and weaknesses. Combining several of these approaches might help in enhancing their respective performance. Our study is carried out in the particular context of intracranial recordings, with the objective to explain the measurements based on a reduced number of dipolar activities. We take benefit of the sparse nature of the Bayesian approaches to separate the noise from the source space, and to distinguish between several source contributions on the electrodes. This first step provides accurate estimates of the dipole projections, which can be used as an entry to an equivalent current dipole fitting procedure. We demonstrate on simulations that the localization results are significantly enhanced by this post-processing step when up to five dipoles are activated simultaneously. PMID:26736344
The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment.
Margaria, Davide; Falletti, Emanuela
2016-01-01
A novel cooperative integrity monitoring concept, called "local integrity", suitable to automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called "Local Protection Levels", taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting effectiveness and performance of the proposed approach. PMID:26821028
A transfer matrix approach to vibration localization in mistuned blade assemblies
NASA Technical Reports Server (NTRS)
Ottarson, Gisli; Pierre, Christophe
1993-01-01
A study of mode localization in mistuned bladed disks is performed using transfer matrices. The transfer matrix approach yields the free response of a general, mono-coupled, perfectly cyclic assembly in closed form. A mistuned structure is represented by random transfer matrices, and the expansion of these matrices in terms of the small mistuning parameter leads to the definition of a measure of sensitivity to mistuning. An approximation of the localization factor, the spatially averaged rate of exponential attenuation per blade-disk sector, is obtained through perturbation techniques in the limits of high and low sensitivity. The methodology is applied to a common model of a bladed disk and the results verified by Monte Carlo simulations. The easily calculated sensitivity measure may prove to be a valuable design tool due to its system-independent quantification of mistuning effects such as mode localization.
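The localization factor of a mono-coupled disordered chain can be estimated by Monte Carlo as the leading Lyapunov exponent of a product of random 2x2 transfer matrices. The sketch below uses a generic mass-spring recursion (u_{n+1} = (2 - w^2 + eps_n) u_n - u_{n-1}) with additive random mistuning, not the paper's specific blade-disk model; all parameter values are assumptions:

```python
import numpy as np

def localization_factor(n_cells=20_000, omega2=1.0, sigma=0.2, seed=3):
    """Average exponential attenuation per cell: accumulate the log of
    the state-vector growth under random transfer matrices,
    renormalizing each step to avoid overflow."""
    rng = np.random.default_rng(seed)
    u = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_cells):
        eps = rng.normal(0.0, sigma)          # random mistuning of one cell
        T = np.array([[2.0 - omega2 + eps, -1.0],
                      [1.0, 0.0]])
        u = T @ u
        norm = np.linalg.norm(u)
        log_growth += np.log(norm)
        u /= norm
    return log_growth / n_cells

gamma = localization_factor()   # small and positive inside the passband
```

In the clean chain (sigma = 0) this frequency lies in the passband and gamma vanishes; disorder makes gamma positive, which is the mode-localization effect the abstract quantifies.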
Flow equation approach to one-body and many-body localization
NASA Astrophysics Data System (ADS)
Quito, Victor; Bhattacharjee, Paraj; Pekker, David; Refael, Gil
2014-03-01
We study one-body and many-body localization using the flow equation technique applied to spin-1/2 Hamiltonians. This technique, first introduced by Wegner, allows us to exactly diagonalize interacting systems by solving a set of first-order differential equations for the coupling constants. In addition, from the flow of individual operators we compute physical properties, such as correlation and localization lengths, by looking at the flow of probability distributions of couplings in the Hilbert space. As a first example, we analyze the one-body localization problem written in terms of spins, the disordered XY model with a random transverse field. We compare the results obtained in the flow equation approach with the diagonalization in the fermionic language. For the many-body problem, we investigate the physical properties of the disordered XXZ Hamiltonian with a random transverse field in the z-direction.
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V.; Truhlar, Donald G.
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
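The one-dimensional moving least squares fitting that the paper reviews can be sketched as a locally weighted polynomial fit: at each evaluation point, fit a low-order polynomial to the data with proximity weights. The Gaussian weights and parameters below are generic assumptions; this plain (non-interpolating) MLS omits the singular weights of interpolating variants and the paper's permutationally invariant six-dimensional scheme:

```python
import numpy as np

def moving_least_squares_1d(x_data, y_data, x_eval, degree=2, width=0.5):
    """At each point in x_eval, solve a weighted least-squares fit of a
    degree-`degree` polynomial centered there and return its value."""
    y_out = []
    for xs in np.atleast_1d(x_eval):
        w = np.exp(-((x_data - xs) / width) ** 2)    # locality weights
        V = np.vander(x_data - xs, degree + 1)       # local polynomial basis
        W = np.diag(w)
        coef = np.linalg.solve(V.T @ W @ V, V.T @ W @ y_data)
        y_out.append(coef[-1])    # constant term = fitted value at xs
    return np.array(y_out)

# Recover sin(x) from scattered samples
x = np.linspace(0, 2 * np.pi, 30)
y = np.sin(x)
approx = moving_least_squares_1d(x, y, np.array([1.0, 2.0, 3.0]))
```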
Connectometry: A statistical approach harnessing the analytical potential of the local connectome.
Yeh, Fang-Cheng; Badre, David; Verstynen, Timothy
2016-01-15
Here we introduce the concept of the local connectome: the degree of connectivity between adjacent voxels within a white matter fascicle defined by the density of the diffusing spins. While most human structural connectomic analyses can be summarized as finding global connectivity patterns at either end of anatomical pathways, the analysis of local connectomes, termed connectometry, tracks the local connectivity patterns along the fiber pathways themselves in order to identify the subcomponents of the pathways that express significant associations with a study variable. This bottom-up analytical approach is made possible by reconstructing diffusion MRI data into a common stereotaxic space that allows for associating local connectomes across subjects. The substantial associations can then be tracked along the white matter pathways, and statistical inference is obtained using permutation tests on the length of coherent associations and corrected for multiple comparisons. Using two separate samples, with different acquisition parameters, we show that connectometry captures variability within core white matter pathways in a statistically efficient manner, extracts meaningful variability from white matter pathways, complements graph-theoretic connectomic measures, and is more sensitive than region-of-interest approaches. PMID:26499808
Global/local methods research using a common structural analysis framework
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.
1991-01-01
Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.
New Method for Studying Localization effects in Quantum Hall Systems
NASA Astrophysics Data System (ADS)
Bhatt, R. N.; Geraedts, Scott
Disorder is central to the study of the fractional quantum Hall effect. It is responsible for the finite width of the quantum Hall plateaus, and it is of course present in experiment. Numerical studies of the disordered fractional quantum Hall effect are nonetheless very difficult, because the lack of symmetry present in clean systems limits the size of systems that can be studied. We introduce a new method for studying the integer and fractional quantum Hall effect in the presence of disorder that allows larger system sizes to be studied. The method relies on truncating the single particle Hilbert space, which leads to an exponential reduction in the Hilbert space of the many-particle system while preserving the essential topological nature of the state. We apply the model to the study of disorder transitions in the quantum Hall effect, both for the ground state and excited states. This work was supported by the US Department of Energy, Office of Basic Energy Sciences, through Grant DE-SC0002140.
Shahbazi Avarvand, Forooz; Ewald, Arne; Nolte, Guido
2012-01-01
To address the problem of mixing in EEG or MEG connectivity analysis we exploit that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method "RAP-MUSIC" to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel was determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas. PMID:22919429
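The key observation, that noninteracting sources drop out of the imaginary part of the cross-spectrum while an interacting pair leaves an even-dimensional subspace, can be illustrated with a small synthetic cross-spectrum (topographies and phase lag are arbitrary; this is not the RAP-MUSIC implementation itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sens = 12
a1, a2, a3 = rng.normal(size=(3, n_sens))     # source topographies

# Cross-spectrum model: one interacting source pair (a1, a2) with phase
# lag phi, one noninteracting source (a3), and uncorrelated sensor noise.
phi = 0.7
C = (np.outer(a1, a1) + np.outer(a2, a2)
     + np.exp(1j * phi) * np.outer(a1, a2)
     + np.exp(-1j * phi) * np.outer(a2, a1)
     + 2.0 * np.outer(a3, a3)
     + 0.5 * np.eye(n_sens))

# Noninteracting contributions are real-symmetric, so they vanish in the
# imaginary part; what survives is rank 2 and spanned by a1 and a2.
Ci = C.imag
U, s, _ = np.linalg.svd(Ci)
subspace = U[:, :2]            # dominant singular vectors (even dimension)
```

The dominant singular vectors of `Ci` span exactly the interacting topographies, which is the subspace the article feeds to RAP-MUSIC.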
NASA Astrophysics Data System (ADS)
Lee, S.; Maharani, Y. N.; Ki, S. J.
2015-12-01
The application of the Self-Organizing Map (SOM) to analyze social vulnerability and to recognize the resilience within sites is a challenging task. The aim of this study is to propose a computational method to identify the sites according to their similarity and to determine the most relevant variables to characterize the social vulnerability in each cluster. For this purpose, SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of social vulnerability of the identified sites can be fully understood. In this study, the social vulnerability variable is constructed from 17 variables, i.e. 12 independent variables which represent socio-economic concepts and 5 dependent variables which represent the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both independent and dependent variables, we can identify whether the social vulnerability is reflected in the actual situation, in this case, the 2010 Merapi eruption. However, social vulnerability analysis in local communities involves a number of variables that represent their socio-economic condition, and some of the variables employed in this study might be more or less redundant. Therefore, SOM is used to reduce the redundant variable(s) by selecting representative variables using the component planes and the correlation coefficients between variables in order to find the effective sample size. The selected dataset was then effectively clustered according to similarity. Finally, this approach can produce reliable estimates of clustering, recognize the most significant variables, and could be useful for social vulnerability assessment, especially for stakeholders as decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering
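A minimal SOM training loop of the kind used above can be sketched in a few lines (toy data standing in for the 17 vulnerability variables; the grid size, decay schedules and cluster check are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, grid=(5, 5), iters=3000, lr0=0.5, sigma0=2.0):
    """Minimal online Self-Organizing Map on a 2-D neuron grid."""
    gy, gx = grid
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    w = rng.normal(size=(gy * gx, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(1))       # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac) + 0.01               # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5          # shrinking neighborhood
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

# Toy "sites": two well-separated groups in a 4-variable space.
data = np.vstack([rng.normal(0.0, 0.3, (50, 4)),
                  rng.normal(3.0, 0.3, (50, 4))])
w = train_som(data)
bmus = np.array([np.argmin(((w - x) ** 2).sum(1)) for x in data])
```

After training, sites mapping to nearby neurons are similar, and inspecting one component plane (one column of `w` reshaped to the grid) shows how a single variable distributes over the clusters.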
A New Local Modelling Approach Based on Predicted Errors for Near-Infrared Spectral Analysis
Chang, Haitao; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu
2016-01-01
Over the last decade, near-infrared spectroscopy, together with the use of chemometric models, has been widely employed as an analytical tool in several industries. However, most chemical processes or analytes are multivariate and nonlinear in nature. To address this problem, a local errors regression method is presented in this paper for building an accurate calibration model, in which a calibration subset is selected by a new similarity criterion that takes into account the full information of the spectra, the chemical property, and the predicted errors. After the selection of the calibration subset, partial least squares regression is applied to build the calibration model. The performance of the proposed method is demonstrated on a near-infrared spectroscopy dataset of pharmaceutical tablets. Compared with other local strategies based on different similarity criteria, the proposed local errors regression is shown to yield a significant improvement in terms of both prediction ability and calculation speed. PMID:27446631
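The local-modelling mechanics, selecting a calibration subset by similarity to the query spectrum and fitting a regression on it, can be sketched as follows. For brevity this toy substitutes plain spectral correlation for the paper's combined similarity criterion and ridge-regularized least squares for PLS, and uses synthetic linear data only to check the pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

def local_predict(X_cal, y_cal, x_new, k=30, ridge=1e-6):
    """Predict the property of one query spectrum from a locally
    selected calibration subset (correlation similarity + ridge
    least squares standing in for the paper's criterion and PLS)."""
    sim = np.array([np.corrcoef(x_new, x)[0, 1] for x in X_cal])
    idx = np.argsort(sim)[-k:]                 # k most similar spectra
    Xs = np.c_[np.ones(k), X_cal[idx]]
    A = Xs.T @ Xs + ridge * np.eye(Xs.shape[1])
    beta = np.linalg.solve(A, Xs.T @ y_cal[idx])
    return np.r_[1.0, x_new] @ beta

# Noiseless linear toy data, just to exercise the mechanics.
n, p = 200, 20
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true
pred = local_predict(X[:150], y[:150], X[150])
```

A new local model is fitted per query, which is what lets the approach track nonlinearity at the cost of one subset selection and fit per prediction.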
A Coproduction Community Based Approach to Reducing Smoking Prevalence in a Local Community Setting
McGeechan, G. J.; Woodall, D.; Anderson, L.; Wilson, L.; O'Neill, G.; Newbury-Birch, D.
2016-01-01
Research highlights that asset-based community development where local residents become equal partners in service development may help promote health and well-being. This paper outlines baseline results of a coproduction evaluation of an asset-based approach to improving health and well-being within a small community through promoting tobacco control. Local residents were recruited and trained as community researchers to deliver a smoking prevalence survey within their local community and became local health champions, promoting health and well-being. The results of the survey will be used to inform health promotion activities within the community. The local smoking prevalence was higher than the regional and national averages. Half of the households surveyed had at least one smoker, and 63.1% of children lived in a smoking household. Nonsmokers reported higher well-being than smokers; however, the differences were not significant. Whilst the community has a high smoking prevalence, more than half of the smokers surveyed would consider quitting. Providing smoking cessation advice in GP surgeries may help reduce smoking prevalence in this community. Work in the area could be done to reduce children's exposure to smoking in the home. PMID:27446219
Various contact approaches for the finite cell method
NASA Astrophysics Data System (ADS)
Konyukhov, Alexander; Lorenz, Christian; Schweizerhof, Karl
2015-08-01
The finite cell method (FCM) is a method for the computation of structures that combines high-order FEM with a special integration technique. It is one of the novel computational methods and has been highly developed within the last decade. One of the major problems of the FCM is the description of boundary conditions inside cells as well as in sub-cells, and a completely open problem is the description of contact. The motivation of the current work is therefore to develop a set of computational contact mechanics approaches that are effective for the finite cell method. Thus, focusing on the Hertz problem, we develop and test the following algorithms for the FCM: direct integration in the cell method, allowing the fastest implementation, but suffering from numerical artifacts such as the "stamp effect"; the cell-surface-to-analytical-surface contact element, the most efficient scheme concerning approximation properties, designed for contact with rigid bodies and leading to cell-wise contact elements; and finally the discrete-cell-to-cell contact approach based on the finite discrete method. All developed methods are carefully verified against the analytical Hertz solution. The cell subdivisions, the order of the shape functions as well as the selection of the classes of shape functions are investigated for all developed contact approaches. This analysis allows one to choose the most robust approach depending on the needs of the user, such as correct representation of the stresses, or mere satisfaction of geometrical non-penetration conditions.
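The analytical Hertz solution used for verification is easy to reproduce. A sketch for the sphere-on-half-space case (material and load values are illustrative):

```python
import numpy as np

def hertz_sphere_on_halfspace(F, R, E1, nu1, E2, nu2):
    """Analytical Hertz solution for an elastic sphere of radius R
    pressed with force F onto an elastic half-space: contact radius a,
    peak contact pressure p0 and mutual approach d."""
    E_star = 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)
    a = (3 * F * R / (4 * E_star)) ** (1.0 / 3.0)   # contact radius
    p0 = 3 * F / (2 * np.pi * a ** 2)               # peak pressure
    d = a ** 2 / R                                  # approach
    return a, p0, d

# Steel sphere (R = 10 mm) on a steel half-space, F = 100 N.
a, p0, d = hertz_sphere_on_halfspace(100.0, 0.01, 210e9, 0.3, 210e9, 0.3)
```

The semi-ellipsoidal pressure distribution integrates back to the applied force, (2/3) p0 pi a^2 = F, which is a convenient consistency check against a numerical contact solution.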
A Local Adaptive Approach for Dense Stereo Matching in Architectural Scene Reconstruction
NASA Astrophysics Data System (ADS)
Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Karras, G.
2013-02-01
In recent years, a demand for 3D models of various scales and precisions has been growing for a wide range of applications; among them, cultural heritage recording is a particularly important and challenging field. We outline an automatic 3D reconstruction pipeline, mainly focusing on dense stereo-matching which relies on a hierarchical, local optimization scheme. Our matching framework consists of a combination of robust cost measures, extracted via an intuitive cost aggregation support area and set within a coarse-to-fine strategy. The cost function is formulated by combining three individual costs: a cost computed on an extended census transformation of the images; the absolute difference cost, taking into account information from colour channels; and a cost based on the principal image derivatives. An efficient adaptive method of aggregating matching cost for each pixel is then applied, relying on linearly expanded cross skeleton support regions. Aggregated cost is smoothed via a 3D Gaussian function. Finally, a simple "winner-takes-all" approach extracts the disparity value with minimum cost. This keeps algorithmic complexity and system computational requirements acceptably low for high resolution images (or real-time applications), when compared to complex matching functions of global formulations. The stereo algorithm adopts a hierarchical scheme to accommodate high-resolution images and complex scenes. In a last step, a robust post-processing work-flow is applied to enhance the disparity map and, consequently, the geometric quality of the reconstructed scene. Successful results from our implementation, which combines pre-existing algorithms and novel considerations, are presented and evaluated on the Middlebury platform.
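The census-plus-winner-takes-all core of such a matcher can be sketched compactly (this toy uses only the census cost on a synthetic constant-disparity pair, without the aggregation, colour and derivative terms or the hierarchical scheme described above):

```python
import numpy as np

def census(img, w=5):
    """Census transform: each pixel becomes a bit string recording which
    neighbours in a w x w window are darker than the window centre."""
    h, wd = img.shape
    r = w // 2
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((h, wd), np.uint64)
    bit = 0
    for dy in range(w):
        for dx in range(w):
            if dy == r and dx == r:
                continue
            out |= (pad[dy:dy + h, dx:dx + wd] < img).astype(np.uint64) \
                   << np.uint64(bit)
            bit += 1
    return out

def wta_disparity(left, right, max_d):
    """Hamming cost on census codes + a simple winner-takes-all pick."""
    cl, cr = census(left), census(right)
    h, w = left.shape
    cost = np.full((max_d + 1, h, w), 10 ** 6, np.int64)
    for d in range(max_d + 1):
        x = cl[:, d:] ^ cr[:, :w - d]
        # Popcount of the XOR-ed codes = Hamming distance per pixel.
        bits = np.unpackbits(x.view(np.uint8).reshape(h, -1), axis=1)
        cost[d, :, d:] = bits.reshape(h, w - d, 64).sum(-1)
    return cost.argmin(0)

# Synthetic pair: the right image is the left image shifted by 4 pixels.
rng = np.random.default_rng(6)
left = rng.random((40, 64))
right = np.empty_like(left)
right[:, :-4] = left[:, 4:]
right[:, -4:] = left[:, -4:]
disp = wta_disparity(left, right, max_d=8)
```

In the full pipeline the per-pixel Hamming cost would be blended with the colour and derivative costs and aggregated over the cross-skeleton support region before the winner-takes-all step.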
NASA Technical Reports Server (NTRS)
Choudhari, Meelan; Street, Craig L.
1991-01-01
Previous theoretical work on the boundary layer receptivity problem has utilized large Reynolds number asymptotic theories, thus being limited to a narrow part of the frequency - Reynolds number domain. An alternative approach is presented for the prediction of localized instability generation which has a general applicability, and also accounts for finite Reynolds number effects. This approach is illustrated for the case of Tollmien-Schlichting wave generation in a Blasius boundary layer due to the interaction of a free stream acoustic wave with a region of short scale variation in the surface boundary condition. The specific types of wall inhomogeneities studied are: regions of short scale variations in wall suction, wall admittance, and wall geometry (roughness). Extensive comparison is made between the results of the finite Reynolds number approach and previous asymptotic predictions, which also suggests an alternative way of using the latter at Reynolds numbers of interest in practice.
A hybrid approach for efficient anomaly detection using metaheuristic methods.
Ghanem, Tamer F; Elkilani, Wail S; Abdul-Kader, Hatem M
2015-07-01
Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large scale datasets using detectors generated based on the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, which is a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared to other competitors of machine learning algorithms. PMID:26199752
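The negative-selection idea that inspired the detector generation can be sketched in a few lines (a 2-D toy with Euclidean matching; the actual approach uses multi-start metaheuristics and genetic algorithms on NSL-KDD features):

```python
import numpy as np

rng = np.random.default_rng(4)

def generate_detectors(self_set, n_detectors, radius, max_tries=20000):
    """Negative selection: keep random candidate detectors only if they
    match no 'self' (normal) sample, i.e. lie farther than `radius`
    from every self point."""
    detectors = []
    tries = 0
    dim = self_set.shape[1]
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        cand = rng.random(dim)
        if np.linalg.norm(self_set - cand, axis=1).min() > radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(x, detectors, radius):
    """A sample is flagged when any detector matches it."""
    return bool(np.linalg.norm(detectors - x, axis=1).min() <= radius)

# 'Self' (normal traffic, here 2-D toy features) occupies one corner.
self_set = rng.random((200, 2)) * 0.4
det = generate_detectors(self_set, 60, radius=0.12)
```

Metaheuristic variants replace the blind random sampling with guided search so that fewer detectors cover the non-self space more efficiently.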
The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.
ERIC Educational Resources Information Center
Buchanan, Patricia A.; Ulrich, Beverly D.
2001-01-01
Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…
Teaching with the Case Method Online: Pure versus Hybrid Approaches
ERIC Educational Resources Information Center
Webb, Harold W.; Gill, Grandon; Poe, Gary
2005-01-01
The impact of hybrid classroom/distance education approaches is examined in the context of the case method. Four distinct semester-long treatments, which varied mixes of classroom and online discussion, were used to teach a graduate MIS survey course. Specific findings suggest that by using Web technology, college instructors may offer students…
Millimeter-Wave Localizers for Aircraft-to-Aircraft Approach Navigation
NASA Technical Reports Server (NTRS)
Tang, Adrian J.
2013-01-01
Aerial refueling technology for both manned and unmanned aircraft is critical for operations where extended aircraft flight time is required. Existing refueling assets are typically manned aircraft, which couple to a second aircraft through the use of a refueling boom. Alignment and mating of the two aircraft continues to rely on human control with use of high-resolution cameras. With the recent advances in unmanned aircraft, it would be highly advantageous to remove/reduce human control from the refueling process, simplifying the amount of remote mission management and enabling new operational scenarios. Existing aerial refueling uses a camera, making it non-autonomous and prone to human error. Existing commercial localizer technology has proven robust and reliable, but is not suited for aircraft-to-aircraft approaches such as aerial refueling scenarios, since the resolution is too coarse (approximately one meter). A localizer approach system for aircraft-to-aircraft docking can be constructed using the same modulation with a millimeter-wave carrier to provide high resolution. One technology used to remotely align commercial aircraft on approach to a runway is the ILS (instrument landing system). The ILS has been in service within the U.S. for almost 50 years. In a commercial ILS, two partially overlapping beams (109 to 126 MHz) are broadcast from an antenna array so that their overlapping region defines the centerline of the runway. This is called a localizer system and is responsible for horizontal alignment of the approach. One beam is modulated with a 150-Hz tone, the other with a 90-Hz tone. Through comparison of the modulation depths of both tones, an autopilot system aligns the approaching aircraft with the runway centerline. A similar system called a glide slope (GS) exists in the 320-to-330-MHz band for vertical alignment of the approach. While this technology has been proven reliable for millions of commercial flights annually, its UHF nature limits
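The localizer principle, steering by the difference in depth of modulation (DDM) of the 90-Hz and 150-Hz tones, can be sketched numerically (idealized baseband envelope, arbitrary depths; carrier demodulation is omitted):

```python
import numpy as np

fs = 10_000
t = np.arange(fs) / fs          # 1 s of the demodulated tone envelope

def envelope(m90, m150):
    """Baseband envelope of the received localizer signal: the two
    navigation tones with modulation depths m90 and m150."""
    return (1.0 + m90 * np.sin(2 * np.pi * 90 * t)
                + m150 * np.sin(2 * np.pi * 150 * t))

def tone_depth(env, f):
    """Modulation depth of one tone via a single-bin Fourier projection."""
    return 2.0 * np.abs(np.exp(-2j * np.pi * f * t) @ env) / len(env)

def ddm(env):
    """Difference in depth of modulation: the steering quantity the
    autopilot drives to zero to hold the centreline."""
    return tone_depth(env, 90) - tone_depth(env, 150)
```

On the centreline both depths are equal and the DDM is zero; an offset raises one depth and lowers the other, giving a signed steering error independent of the carrier frequency, which is why the same scheme transfers to a millimeter-wave carrier.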
Multiscale Energy and Eigenspace Approach to Detection and Localization of Myocardial Infarction.
Sharma, L N; Tripathy, R K; Dandapat, S
2015-07-01
In this paper, a novel technique on a multiscale energy and eigenspace (MEES) approach is proposed for the detection and localization of myocardial infarction (MI) from multilead electrocardiogram (ECG). Wavelet decomposition of multilead ECG signals grossly segments the clinical components at different subbands. In MI, pathological characteristics such as hyperacute T-waves, inversion of the T-wave, changes in ST elevation, or pathological Q-waves are seen in ECG signals. This pathological information alters the covariance structures of multiscale multivariate matrices at different scales and the corresponding eigenvalues. The clinically relevant components can be captured by eigenvalues. In this study, multiscale wavelet energies and eigenvalues of multiscale covariance matrices are used as diagnostic features. Support vector machines (SVMs) with both linear and radial basis function (RBF) kernels and K-nearest neighbor are used as classifiers. Datasets, which include healthy controls and various types of MI, such as anterior, anteriolateral, anterioseptal, inferior, inferiolateral, and inferioposterio-lateral, from the PTB diagnostic ECG database are used for evaluation. The results show that the proposed technique can successfully detect the MI pathologies. The MEES approach also helps localize different types of MIs. For MI detection, the accuracy, the sensitivity, and the specificity values are 96%, 93%, and 99%, respectively. The localization accuracy is 99.58%, using a multiclass SVM classifier with RBF kernel. PMID:26087076
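The multiscale-energy part of the feature extraction can be sketched with an orthonormal Haar transform (illustrative signals; the paper's choice of wavelet, lead set and eigenvalue features is not reproduced here):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = x[:len(x) // 2 * 2]
    s = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail (high-pass)
    return s, d

def multiscale_energies(x, levels=4):
    """Relative energy per wavelet subband (finest detail first,
    coarsest approximation last): a per-scale feature vector of the
    kind the MEES approach feeds to the classifier."""
    s = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        s, d = haar_step(s)
        energies.append((d ** 2).sum())
    energies.append((s ** 2).sum())
    e = np.array(energies)
    return e / e.sum()

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
slow = np.sin(2 * np.pi * 2 * t)       # smooth, low-frequency wave
fast = np.sin(2 * np.pi * 200 * t)     # high-frequency oscillation
e_slow = multiscale_energies(slow)
e_fast = multiscale_energies(fast)
```

Because the transform is orthonormal, the subband energies partition the signal energy; morphology changes such as an altered T-wave shift energy between scales, which is what makes these features discriminative.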
Kurz, Jochen H
2015-12-01
The task of locating a source in space by measuring travel time differences of elastic or electromagnetic waves from the source to several sensors is evident in varying fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given with the focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy extended by a statistical approach is described which allows a stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen leading to travel paths up to 1m and therefore to noisy data with errors in the determined onsets. The adaption of the algorithms from geodesy to the localization procedure of sources of elastic waves offers new possibilities concerning stability, automation and performance of localization results. Fracture processes can be assessed more accurately. PMID:26233938
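The iterative-solver branch of such a localization can be sketched as a Gauss-Newton inversion of first-arrival onsets (synthetic sensor layout and noise level; the statistical outlier handling from geodesy described above is omitted):

```python
import numpy as np

def locate_source(sensors, onsets, v, x0, t0=0.0, iters=50):
    """Gauss-Newton inversion of first-arrival onsets
    t_i = t0 + |sensor_i - x| / v for the source position x and the
    origin time t0."""
    x = np.array(x0, float)
    t = float(t0)
    for _ in range(iters):
        diff = x - sensors
        d = np.linalg.norm(diff, axis=1)
        r = onsets - (t + d / v)                     # travel-time residuals
        # Jacobian of the model w.r.t. (x, t0).
        J = np.c_[diff / (v * d[:, None]), np.ones(len(d))]
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        x += dp[:3]
        t += dp[3]
    return x, t

# Synthetic acoustic-emission test on a roughly 1 m concrete specimen.
rng = np.random.default_rng(5)
sensors = rng.random((8, 3))                 # sensor positions [m]
src_true = np.array([0.3, 0.6, 0.2])
v = 4000.0                                   # wave speed [m/s]
onsets = 1e-4 + np.linalg.norm(sensors - src_true, axis=1) / v
onsets += rng.normal(0.0, 1e-7, 8)           # onset-picking noise
x_est, t_est = locate_source(sensors, onsets, v, x0=[0.5, 0.5, 0.5])
```

With more sensors than unknowns the overdetermined residual also provides the redundancy that a statistical approach can exploit to down-weight erroneous onsets.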
Dholakia, Avani S.; Kumar, Rachit; Raman, Siva P.; Moore, Joseph A.; Ellsworth, Susannah; McNutt, Todd; Laheru, Daniel A.; Jaffee, Elizabeth; Cameron, John L.; Tran, Phuoc T.; Hobbs, Robert F.; Wolfgang, Christopher L.; and others
2013-12-01
Purpose: To generate a map of local recurrences after pancreaticoduodenectomy (PD) for patients with resectable pancreatic ductal adenocarcinoma (PDA) and to model an adjuvant radiation therapy planning treatment volume (PTV) that encompasses a majority of local recurrences. Methods and Materials: Consecutive patients with resectable PDA undergoing PD and 1 or more computed tomography (CT) scans more than 60 days after PD at our institution were reviewed. Patients were divided into 3 groups: no adjuvant treatment (NA), chemotherapy alone (CTA), or chemoradiation (CRT). Cross-sectional scans were centrally reviewed, and local recurrences were plotted to scale with respect to the celiac axis (CA), superior mesenteric artery (SMA), and renal veins on 1 CT scan of a template post-PD patient. An adjuvant clinical treatment volume comprising 90% of local failures based on standard expansions of the CA and SMA was created and simulated on 3 post-PD CT scans to assess the feasibility of this planning approach. Results: Of the 202 patients in the study, 40 (20%), 34 (17%), and 128 (63%) received NA, CTA, and CRT adjuvant therapy, respectively. The rate of margin-positive resections was greater in CRT patients than in CTA patients (28% vs 9%, P=.023). Local recurrence occurred in 90 of the 202 patients overall (45%) and in 19 (48%), 22 (65%), and 49 (38%) in the NA, CTA, and CRT groups, respectively. Ninety percent of recurrences were within a 3.0-cm right-lateral, 2.0-cm left-lateral, 1.5-cm anterior, 1.0-cm posterior, 1.0-cm superior, and 2.0-cm inferior expansion of the combined CA and SMA contours. Three simulated radiation treatment plans using these expansions with adjustments to avoid nearby structures were created to demonstrate the use of this treatment volume. Conclusions: Modified PTVs targeting high-risk areas may improve local control while minimizing toxicities, allowing dose escalation with intensity-modulated or stereotactic body radiation therapy.
New approaches for the calibration of exchange-energy densities in local hybrid functionals.
Maier, Toni M; Haasler, Matthias; Arbuznikov, Alexei V; Kaupp, Martin
2016-08-21
The ambiguity of exchange-energy densities is a fundamental challenge for the development of local hybrid functionals, or of other functionals based on a local mixing of exchange-energy densities. In this work, a systematic construction of semi-local calibration functions (CFs) for adjusting the exchange-energy densities in local hybrid functionals is provided, which directly links a given CF to an underlying semi-local exchange functional, as well as to the second-order gradient expansion of the exchange hole. Using successive steps of integration by parts allows the derivation of correction terms of increasing order, resulting in more and more complicated but also more flexible CFs. We derive explicit first- and second-order CFs (pig1 and pig2) based on B88 generalized-gradient approximation (GGA) exchange, and a first-order CF (tpig1) based on τ-dependent B98 meta-GGA exchange. We combine these CFs with different long-range damping functions and evaluate them for calibration of LDA, B88 GGA, and TPSS meta-GGA exchange-energy densities. Based on a minimization of unphysical nondynamical correlation contributions in three noble-gas dimer potential-energy curves, free parameters in the CFs are optimized, and performance of various approaches in the calibration of different exchange-energy densities is compared. Most notably, the second-order pig2 CF provides the largest flexibility with respect to the diffuseness of the damping function. This suggests that higher-order CFs based on the present integration-by-parts scheme may be particularly suitable for the flexible construction of local hybrid functionals. PMID:27080804
Wiese, Heike; Kuhlmann, Katja; Wiese, Sebastian; Stoepel, Nadine S; Pawlas, Magdalena; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin; Drepper, Friedel; Warscheid, Bettina
2014-02-01
Over the past years, phosphoproteomics has advanced to a prime tool in signaling research. Since then, an enormous amount of information about in vivo protein phosphorylation events has been collected providing a treasure trove for gaining a better understanding of the molecular processes involved in cell signaling. Yet, we still face the problem of how to achieve correct modification site localization. Here we use alternative fragmentation and different bioinformatics approaches for the identification and confident localization of phosphorylation sites. Phosphopeptide-enriched fractions were analyzed by multistage activation, collision-induced dissociation and electron transfer dissociation (ETD), yielding complementary phosphopeptide identifications. We further found that MASCOT, OMSSA and Andromeda each identified a distinct set of phosphopeptides allowing the number of site assignments to be increased. The postsearch engine SLoMo provided confident phosphorylation site localization, whereas different versions of PTM-Score integrated in MaxQuant differed in performance. Based on high-resolution ETD and higher collisional dissociation (HCD) data sets from a large synthetic peptide and phosphopeptide reference library reported by Marx et al. [Nat. Biotechnol. 2013, 31 (6), 557-564], we show that an Andromeda/PTM-Score probability of 1 is required to provide a false localization rate (FLR) of 1% for HCD data, while 0.55 is sufficient for high-resolution ETD spectra. Additional analyses of HCD data demonstrated that for phosphotyrosine peptides and phosphopeptides containing two potential phosphorylation sites, PTM-Score probability cutoff values of <1 can be applied to ensure an FLR of 1%. Proper adjustment of localization probability cutoffs allowed us to significantly increase the number of confident sites with an FLR of <1%. Our findings underscore the need for the systematic assessment of FLRs for different score values to report confident modification site
Non-local total variation method for despeckling of ultrasound images
NASA Astrophysics Data System (ADS)
Feng, Jianbin; Ding, Mingyue; Zhang, Xuming
2014-03-01
Despeckling of ultrasound images, a very active research topic in medical image processing, plays an important or even indispensable role in subsequent ultrasound image processing. The non-local total variation (NLTV) method has been widely applied to denoising images corrupted by Gaussian noise, but it cannot provide satisfactory restoration results for ultrasound images corrupted by speckle noise. To address this problem, a novel non-local total variation despeckling method is proposed for speckle reduction. In the proposed method, the non-local gradient is computed on the images restored by the optimized Bayesian non-local means (OBNLM) method and is introduced into the total variation method to suppress speckle in ultrasound images. Comparisons of the restoration performance are made among the proposed method and such state-of-the-art despeckling methods as the squeeze box filter (SBF), the non-local means (NLM) method and the OBNLM method. The quantitative comparisons based on synthetic speckled images show that the proposed method can provide higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the compared despeckling methods. The subjective visual comparisons based on synthetic and real ultrasound images demonstrate that the proposed method outperforms the other compared algorithms in that it can achieve better performance of noise reduction, artifact avoidance, and edge and texture preservation.
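For orientation, the plain non-local means averaging that both NLM and OBNLM build on can be sketched directly (a small additive-noise toy; OBNLM's Bayesian speckle-adapted weights and the NLTV coupling are not reproduced):

```python
import numpy as np

def nlm(img, patch=3, search=7, h=0.4):
    """Minimal non-local means: each pixel is replaced by a weighted
    average of search-window pixels, weighted by the similarity of the
    surrounding patches (classical Gaussian kernel)."""
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            p0 = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum = vsum = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    q = pad[ci + di - pr:ci + di + pr + 1,
                            cj + dj - pr:cj + dj + pr + 1]
                    wgt = np.exp(-((p0 - q) ** 2).mean() / h ** 2)
                    wsum += wgt
                    vsum += wgt * pad[ci + di, cj + dj]
            out[i, j] = vsum / wsum
    return out

# Piecewise-constant toy image with additive noise.
rng = np.random.default_rng(7)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + rng.normal(0.0, 0.15, clean.shape)
den = nlm(noisy)
```

Patch-based weighting averages pixels from the same side of an edge while excluding the other side, which preserves edges far better than a local box filter.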
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
Stellar mass functions: methods, systematics and results for the local Universe
NASA Astrophysics Data System (ADS)
Weigel, Anna K.; Schawinski, Kevin; Bruderer, Claudio
2016-06-01
We present a comprehensive method for determining stellar mass functions, and apply it to samples in the local Universe. We combine the classical 1/Vmax approach with STY, a parametric maximum likelihood method, and step-wise maximum likelihood, a non-parametric maximum likelihood technique. In the parametric approach, we assume that the stellar mass function can be modelled by either a single or a double Schechter function and use a likelihood ratio test to determine which model provides a better fit to the data. We discuss how the stellar mass completeness as a function of z biases the three estimators and how it can affect, in particular, the low-mass end of the stellar mass function. We apply our method to Sloan Digital Sky Survey DR7 data in the redshift range from 0.02 to 0.06. We find that the entire galaxy sample is best described by a double Schechter function with the following parameters: log(M*/M⊙) = 10.79 ± 0.01, log(Φ₁*/h³ Mpc⁻³) = −3.31 ± 0.20, α₁ = −1.69 ± 0.10, log(Φ₂*/h³ Mpc⁻³) = −2.01 ± 0.28 and α₂ = −0.79 ± 0.04. We also use morphological classifications from Galaxy Zoo and halo mass, overdensity, central/satellite, colour and specific star formation rate measurements to split the galaxy sample into over 130 subsamples. We determine and present the stellar mass functions and the best-fitting Schechter function parameters for each of these subsamples.
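The quoted best-fitting double Schechter function is straightforward to evaluate (the standard per-dex convention is assumed):

```python
import numpy as np

def double_schechter(logM, logMs, logp1, a1, logp2, a2):
    """Double Schechter stellar mass function per dex:
    Phi(logM) = ln(10) * exp(-mu) * mu * (phi1* mu^a1 + phi2* mu^a2),
    with mu = M / M*."""
    mu = 10.0 ** (logM - logMs)
    return (np.log(10.0) * np.exp(-mu) * mu
            * (10.0 ** logp1 * mu ** a1 + 10.0 ** logp2 * mu ** a2))

# Best-fitting parameters quoted above for the full galaxy sample.
pars = dict(logMs=10.79, logp1=-3.31, a1=-1.69, logp2=-2.01, a2=-0.79)
logM = np.linspace(8.0, 12.0, 401)
phi = double_schechter(logM, **pars)
```

The steep alpha1 = -1.69 component dominates the low-mass end while the shallower alpha2 = -0.79 component carries the knee, which is the usual reason a single Schechter fit fails for the full sample.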
NASA Astrophysics Data System (ADS)
Wu, C. T.; Wu, Youcai; Koishi, M.
2015-12-01
In this work, a strain-morphed nonlocal meshfree method is developed to overcome the computational challenges for the simulation of elastic-damage induced strain localization problem when the spatial domain integration is performed based on the background cells and Gaussian quadrature rule. The new method is established by introducing the decomposed strain fields from a meshfree strain smoothing to the penalized variational formulation. While the stabilization strain field circumvents the onerous zero-energy modes inherent in the direct nodal integration scheme, the regularization strain field aims to avoid the pathological localization of deformation in Galerkin meshfree solution using the weak-discontinuity approach. A strain morphing algorithm is introduced to couple the locality and non-locality of the decomposed strain approximations such that the continuity condition in the coupled strain field is met under the Galerkin meshfree framework using the direct nodal integration scheme. Three numerical benchmarks are examined to demonstrate the effectiveness and accuracy of the proposed method for the regularization of elastic-damage induced strain localization problems.
NASA Astrophysics Data System (ADS)
Burkhardt, Anke; Geissler, Stefan; Koch, Edmund
2010-04-01
In most industrial states a huge number of newly hatched male layer chickens are killed immediately after hatching by maceration or gassing. The reason for killing most of the male chickens of egg-producing breeds is their slow growth rate compared to breeds specialized in meat production. When the egg has been laid, it already contains a small disc of cells on the surface of the yolk known as the blastoderm. This region is about 4-5 mm in diameter and contains the information on whether the chick will become male or female, and hence allows sexing of the chicks by spectroscopy and other methods in the unincubated state. Different imaging methods such as sonography, 3D X-ray micro computed tomography and magnetic resonance imaging have been used for localization of the blastoderm until now, but were found to be impractical for different reasons. Optical coherence tomography (OCT) enables micrometer-scale, subsurface imaging of biological tissue and could therefore be a suitable technique for accurate localization. The intention of this study is to examine whether OCT can be an appropriate approach for precise in ovo localization.
Advanced numerical methods and software approaches for semiconductor device simulation
CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.
2000-03-23
In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
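The Scharfetter-Gummel idea mentioned above can be sketched for a single electron flux between two nodes (one common sign convention, potentials in thermal-voltage units; conventions vary across the literature):

```python
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), evaluated stably near x = 0."""
    x = np.asarray(x, float)
    small = np.abs(x) < 1e-8
    safe = np.where(small, 1.0, x)
    return np.where(small, 1.0 - x / 2.0, safe / np.expm1(safe))

def sg_flux(n_i, n_ip1, dpsi, D, dx):
    """Scharfetter-Gummel flux between adjacent nodes, with dpsi the
    potential difference in thermal-voltage units: an exponentially
    fitted upwind discretization that stays stable for strongly
    drift-dominated cells, where a naive central difference oscillates."""
    return D / dx * (bernoulli(dpsi) * n_ip1 - bernoulli(-dpsi) * n_i)

# Zero field: reduces to the plain diffusion difference.
j_diff = sg_flux(1.0, 2.0, 0.0, D=1.0, dx=1.0)
# Boltzmann equilibrium n_{i+1} = exp(dpsi) * n_i: flux vanishes exactly.
j_eq = sg_flux(1.0, np.exp(2.0), 2.0, D=1.0, dx=1.0)
```

Reproducing the Boltzmann equilibrium with exactly zero flux is the property that distinguishes this exponential fitting from plain upwind or central differencing.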
Dubayle, D; Viala, D
1996-04-26
An in vitro electrophysiological approach allowed the localization of the spinal respiratory generator (sRG) within the cervical cord of newborn rats. Rostral and caudal limits were determined through transections of the successive spinal segments. The sRG is mainly located in the C5 segment, with a partial extension into the C4 and C6 segments. The presence of two lateralized sRGs was found after a split of the brain stem-cervical cord from T8 to C1. Spinal respiratory activity could be kept synchronous in the right and left halves of the spinal cord after this split. This spinal activity also displayed bilateral synchrony on separated spinal cord preparations after a C1 transection with no split. These experiments represent the first attempt to localize the sRG; the findings are discussed in terms of bilateral segmental coupling and of interactions between the medullary and spinal respiratory generators. PMID:8817527
ERIC Educational Resources Information Center
Pesman, Haki; Ozdemir, Omer Faruk
2012-01-01
The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level independent variables were defined, teaching approach (contextual and non-contextual…
Sensitivity to local dipole fields in the CRAZED experiment: An approach to bright spot MRI
NASA Astrophysics Data System (ADS)
Faber, Cornelius; Heil, Carolin; Zahneisen, Benjamin; Balla, David Z.; Bowtell, Richard
2006-10-01
Local dipole fields such as those created by small iron-oxide particles are used to produce regions of low intensity (dark contrast) in many molecular magnetic resonance imaging applications. We have investigated, with computer simulations and experiments at 17.6 T, how the COSY revamped with asymmetric z-gradient echo detection (CRAZED) experiment that selects intermolecular double-quantum coherences can also be used to visualize such local dipole fields. Application of the coherence-selection gradient pulses parallel to the main magnetic field produced dark contrast similar to that of conventional gradient echo imaging. Application of the gradient along the magic angle led to total loss of signal intensity in homogeneous samples. In the presence of local dipole fields, the contrast was inverted and bright signals from the dipoles were observed over a very low background. Both simulations and experiments showed that the signal strongly decreased when a phase cycle suppressing single-quantum coherences was employed. Therefore, we conclude that most of the signal comes from directly refocused magnetization or intermolecular single-quantum coherences. Finally, we demonstrate that bright contrast from local dipole fields can also be obtained when the pair of coherence-selection gradient pulses is deliberately mismatched. Both methods allowed visualization of local dipole fields in phantoms in experimental times of about 3 min.
Assessing and Evaluating Multidisciplinary Translational Teams: A Mixed Methods Approach
Wooten, Kevin C.; Rose, Robert M.; Ostir, Glenn V.; Calhoun, William J.; Ameredes, Bill T.; Brasier, Allan R.
2014-01-01
A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team type taxonomy. Based on team maturation and scientific progress, teams were designated as: a) early in development, b) traditional, c) process focused, or d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored. PMID:24064432
The Health Role of Local Area Coordinators in Scotland: A Mixed Methods Study
ERIC Educational Resources Information Center
Brown, Michael; Karatzias, Thanos; O'Leary, Lisa
2013-01-01
The study set out to explore whether local area coordinators (LACs) and their managers view the health role of LACs as an essential component of their work and identify the health-related activities undertaken by LACs in Scotland. A mixed methods cross-sectional phenomenological study involving local authority service managers (n = 25) and LACs (n…
Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach
NASA Technical Reports Server (NTRS)
Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.
2003-01-01
A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.
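The non-compactly supported cubic radial basis function highlighted above is simple to experiment with. The sketch below is an illustration of plain cubic-RBF interpolation in 1-D, not the authors' MLPG implementation; nonsingularity of the collocation matrix for the distinct nodes used is assumed:

```python
import numpy as np

def cubic_rbf_interpolant(x_nodes, f_nodes):
    """1-D interpolant built from non-compact cubic RBFs phi(r) = r**3."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    # Collocation matrix A[i, j] = phi(|x_i - x_j|)
    A = np.abs(x_nodes[:, None] - x_nodes[None, :]) ** 3
    coeffs = np.linalg.solve(A, np.asarray(f_nodes, dtype=float))

    def interp(x):
        phi = np.abs(np.atleast_1d(x)[:, None] - x_nodes[None, :]) ** 3
        return phi @ coeffs

    return interp
```

By construction the interpolant reproduces the nodal data exactly, and no mesh connectivity is needed, which is the property the meshless trial functions rely on.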
Towards an Optimal Multi-Method Paleointensity Approach
NASA Astrophysics Data System (ADS)
de Groot, L. V.; Biggin, A. J.; Langereis, C. G.; Dekkers, M. J.
2014-12-01
Our recently proposed 'multi-method paleointensity approach' consists of at least IZZI-Thellier, MSP-DSC and pseudo-Thellier experiments, complemented with Microwave Thellier experiments for key flows or ages. All results are scrutinized by strict selection criteria to accept only the most reliable paleointensities. This approach yielded reliable estimates of the paleofield for ~70% of all cooling units sampled on Hawaii - an exceptionally high number for a paleointensity study on lavas. Furthermore, the credibility of the obtained results is greatly enhanced when multiple methods mutually agree within their experimental uncertainties. To further assess the success rate of this new approach, we applied it to two collections of (sub-)recent lavas from Tenerife and Gran Canaria (20 cooling units), and Terceira (Azores, 18 cooling units). Although the mineralogy and rock-magnetic properties of many of these flows seemed less favorable for paleointensity techniques than those of the Hawaiian samples, the multi-method paleointensity approach again yielded reliable estimates for 60-70% of all cooling units. One of the methods, the newly calibrated pseudo-Thellier method, proved to be an important element of our new paleointensity approach, yielding reliable estimates for ~50% of the Hawaiian lavas sampled. Its applicability to other volcanic edifices, however, remained questionable. The results from the Canarian and Azorean volcanic edifices provide further constraints on this method's potential. For lavas that are rock-magnetically (i.e. susceptibility-vs-temperature behavior) akin to Hawaiian lavas, the same selection criterion and calibration formula yielded successful results - testifying to the veracity of this new paleointensity method. Besides methodological advances, our new record for the Canary Islands also has geomagnetic implications. It reveals a dramatic increase in the intensity of the Earth's magnetic field from ~1250 to ~720 BC, reaching a maximum VADM of ~125 ZAm
Ground-state properties of Ag2: A local-density pseudopotential approach
Luis Martins, J.; Andreoni, W.
1983-12-01
The local-density approximation of the density-functional theory is applied to calculate the ground-state properties of Ag2, within the framework of the pseudopotential method. The calculated values of the bond length and the harmonic vibrational frequency are in good agreement with experiment. The bonding properties are found to be influenced by the d-electron states in a significant way. The results are compared with those of configuration-interaction calculations.
Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte
2015-01-01
Mutagenesis is the only process by which unpredicted biological gene function can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental conditions could not be gathered at the time. Today, progress in macroalgal genomics and judicious choices of suitable genetic models make mutated gene identification possible. This article presents a comparative study of two methods aiming at identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were gathered, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how a narrower localization results from the combination of the two methods. Advantages and drawbacks of these two approaches, as well as their potential transfer to other macroalgae, are discussed. PMID:25745426
The contour method: a new approach in experimental mechanics
Prime, Michael B
2009-01-01
The recently developed contour method can measure complex residual-stress maps in situations where other measurement methods cannot. This talk first describes the principle of the contour method. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contour of the resulting new surface, which will not be flat if residual stresses are relaxed by the cutting, is then measured. Finally, a conceptually simple finite element analysis determines the original residual stresses from the measured contour. Next, this talk gives several examples of applications. The method is validated by comparison with neutron diffraction measurements in an indented steel disk and in a friction stir weld between dissimilar aluminum alloys. Several applications are shown that demonstrate the power of the contour method: large aluminum forgings, railroad rails, and welds. Finally, this talk discusses why the contour method is a significant departure from conventional experimental mechanics. Other relaxation methods, for example hole-drilling, can only measure a 1-D profile of residual stresses, and yet they require a complicated inverse calculation to determine the stresses from the strain data. The contour method gives a 2-D stress map over a full cross-section, yet a direct calculation is all that is needed to reduce the data. The reason for these advantages lies in a subtle but fundamental departure from conventional experimental mechanics. Applying new technology to old methods will not give similar advances, but the new approach also introduces new errors.
A Mixed Methods Approach to Network Data Collection
Rice, Eric; Holloway, Ian W.; Barman-Adhikari, Anamika; Fuentes, Dahlia; Brown, C. Hendricks; Palinkas, Lawrence A.
2013-01-01
There is a growing interest in examining network processes with a mix of qualitative and quantitative network data. Research has consistently shown that free recall name generators entail recall bias and result in missing data that affects the quality of social network data. This study describes a mixed methods approach for collecting social network data, combining a free recall name generator in the context of an online survey with network relations data coded from transcripts of semi-structured qualitative interviews. The combined network provides substantially more information about the network space, both quantitatively and qualitatively. While network density was relatively stable across networks generated from different data collection methodologies, there were noticeable differences in centrality and component structure across networks. The approach presented here involved limited participant burden and generated more complete data than either technique alone could provide. We make suggestions for further development of this method. PMID:25285047
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm based on the improved array model is implemented to locate sound sources. These two algorithms together constitute the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
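The W2D-MUSIC algorithm above builds on the classic MUSIC idea of scanning steering vectors against the noise subspace of the array covariance. As a much simpler illustration of that principle (narrowband, far-field, uniform linear array - not the paper's broadband near-field model, and all parameter names are illustrative):

```python
import numpy as np

def music_spectrum(X, array_pos, wavelength, n_sources, angles):
    """Narrowband far-field MUSIC pseudo-spectrum for a linear array.

    X         -- (n_mics, n_snapshots) complex snapshot matrix
    array_pos -- (n_mics,) element positions along the array axis [m]
    angles    -- candidate arrival angles [rad] to scan
    """
    R = X @ X.conj().T / X.shape[1]        # sample covariance
    eigval, eigvec = np.linalg.eigh(R)     # eigenvalues in ascending order
    # Noise subspace: eigenvectors of the smallest eigenvalues
    En = eigvec[:, : len(array_pos) - n_sources]
    spectrum = []
    for theta in angles:
        a = np.exp(-2j * np.pi * array_pos * np.sin(theta) / wavelength)
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)       # peaks where a ⟂ noise subspace
    return np.array(spectrum)
```

Peaks of the pseudo-spectrum mark candidate source directions; the paper's contribution is essentially to keep these steering vectors accurate when the gains, phases and element positions themselves are uncertain.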
Finite Elements approach for Density Functional Theory calculations on locally refined meshes
Fattebert, J; Hornung, R D; Wissink, A M
2006-03-27
We present a quadratic Finite Elements approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to small nanoclusters calculations are presented.
Efficient Approach To Discover Novel Agrochemical Candidates: Intermediate Derivatization Method.
Liu, Changling; Guan, Aiying; Yang, Jindong; Chai, Baoshan; Li, Miao; Li, Huichao; Yang, Jichun; Xie, Yong
2016-01-13
Intense competition over intellectual property, the easy development of resistance to agrochemicals, and stricter environmental regulations have made the success rate of traditional agrochemical discovery methods extremely low. There is therefore an urgent need for a novel approach that can guide agrochemical discovery efficiently enough to keep pace with the changing market. Against this background, we summarize here the intermediate derivatization method (IDM), which lies between conventional methods in agrochemicals and novel ones in pharmaceuticals. This method is relatively efficient, with a short discovery phase, reduced cost and, especially, innovative structures with better performance. In this paper, we summarize and illustrate what the IDM is, why to use it, and how to use it to accelerate the discovery of new biologically active molecules, focusing on agrochemicals. Furthermore, we present several research projects from our novel agrochemical discovery programs that have achieved improved success rates in recent years under the guidance of this strategy. PMID:25517210
Selection of Construction Methods: A Knowledge-Based Approach
Skibniewski, Miroslaw
2013-01-01
The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects. PMID:24453925
Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver
2015-01-01
Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analyses that is now possible, and the complexity of the evolutionary process of local adaptation means that plant studies are set to move to the genome level, and account for the interaction of genes under selection in systems-level approaches. This way we can understand how plants adapted during the expansion of agriculture across many latitudes with rapidity. PMID:25487329
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Ramanathan, R. K.
1977-01-01
A rational multilevel approach for minimum weight structural design of truss and wing structures including local and system buckling constraints is presented. Overall proportioning of the structure is achieved at the system level subject to strength, displacement and system buckling constraints, while the detailed component designs are carried out separately at the component level satisfying local buckling constraints. Total structural weight is taken to be the objective function at the system level while employing the change in the equivalent system stiffness of the component as the component level objective function. Finite element analysis is used to predict static response while system buckling behavior is handled by incorporating a geometric stiffness matrix capability. Buckling load factors and the corresponding mode shapes are obtained by solving the eigenvalue problem associated with the assembled elastic stiffness and geometric stiffness matrices for the structural system. At the component level various local buckling failure modes are guarded against using semi-empirical formulas. Mathematical programming techniques are employed at both the system and component level.
NASA Astrophysics Data System (ADS)
Kim, Sejoong; Marzari, Nicola
2013-06-01
We present a first-principles approach for inelastic quantum transport calculations based on maximally localized Wannier functions. Electronic-structure properties are obtained from density-functional theory in a plane-wave basis, and electron-vibration coupling strengths and vibrational properties are determined with density-functional perturbation theory. Vibration-induced inelastic transport properties are calculated with nonequilibrium Green's function techniques; since these are based on a localized orbital representation we use maximally localized Wannier functions. Our formalism is applied first to investigate inelastic transport in a benzene molecular junction connected to monoatomic carbon chains. In this benchmark system the electron-vibration self-energy is calculated either in the self-consistent Born approximation or by lowest-order perturbation theory. It is observed that upward and downward conductance steps occur, which can be understood using multieigenchannel scattering theory and symmetry conditions. In a second example, where the monoatomic carbon chain electrode is replaced with a (3,3) carbon nanotube, we focus on the nonequilibrium vibration populations driven by the conducting electrons using a semiclassical rate equation and highlight and discuss in detail the appearance of vibrational cooling as a function of bias and the importance of matching the vibrational density of states of the conductor and the leads to minimize joule heating and breakdown.
Partial fault dictionary: A new approach for computer-aided fault localization
Hunger, A.; Papathanasiou, A.
1995-12-31
The approach described in this paper has been developed to reduce the computation time and problem size of fault localization methodologies in VLSI circuits, and thus to speed up the overall fault localization process. The reduction of the problem to solve is combined with the idea of the fault dictionary. In a pre-processing phase, a possibly faulty area is derived using the netlist and the actual test results as input data. The result is a set of cones originating from each faulty primary output. In the next step, the best cone is extracted for the fault dictionary methodology according to a heuristic formula. The circuit nodes included in the intersection of the cones are combined into a fault list. This fault list, together with the best cone, can be used by the fault simulator to generate a small and manageable fault dictionary related to one faulty output. In connection with additional algorithms for the reduction of stimuli and netlist, a partial fault dictionary can be set up. This dictionary is valid only for the given faulty device together with the given, reduced stimuli, but offers important benefits: practical results show a reduction of simulation time and fault dictionary size by factors around 100 or even more, depending on the actual circuit and the assumed fault. The list of fault candidates is significantly reduced, and the required number of steps during the localization process is reduced as well.
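The cone-intersection step described above amounts to a graph traversal over the netlist. The sketch below is an illustration of the idea, not the authors' tool: in this hypothetical netlist representation each node maps to the list of nodes driving it, and the fault candidate list is the intersection of the fan-in cones of all faulty primary outputs:

```python
def fanin_cone(netlist, output):
    """Collect the transitive fan-in cone of one output.

    netlist maps each circuit node to the list of its driver nodes;
    primary inputs simply have no entry.
    """
    cone, stack = set(), [output]
    while stack:
        node = stack.pop()
        if node in cone:
            continue
        cone.add(node)
        stack.extend(netlist.get(node, []))
    return cone

def candidate_faults(netlist, faulty_outputs):
    """Fault candidates = intersection of the cones of all faulty outputs."""
    cones = [fanin_cone(netlist, out) for out in faulty_outputs]
    return set.intersection(*cones)
```

Restricting fault simulation to this intersection, rather than the whole circuit, is what shrinks the resulting partial fault dictionary so dramatically.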
Bogdanovska, Liljana; Poceva Panovska, Ana; Nakov, Natalija; Zafirova, Marija; Popovska, Mirjana; Dimitrovska, Aneta; Petkovska, Rumenka
2016-08-25
The aim of our study was the application of chemometric algorithms for multivariate data analysis to the efficacy assessment of local periodontal treatment with doxycycline (DOX). Treatment efficacy was evaluated by monitoring inflammatory biomarkers in gingival crevicular fluid (GCF) samples and clinical indices before and after the local treatment, as well as by determination of the DOX concentration in GCF after the local treatment. The experimental values from these determinations were submitted to several chemometric algorithms: principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA) and orthogonal projection to latent structures discriminant analysis (OPLS-DA). The data structure and the mutual relations of the selected variables were thoroughly investigated by PCA. The PLS-DA model identified variables responsible for the discrimination of classes of data before and after DOX treatment. The OPLS-DA model compared the efficacy of two medications commonly used in periodontal treatment, chlorhexidine (CHX) and DOX, at the same time providing insight into their mechanisms of action. The obtained results indicate that the application of multivariate chemometric algorithms can serve as a valuable approach for the assessment of treatment efficacy. PMID:27283484
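The PCA step used in studies like the one above can be sketched in a few lines via SVD of the mean-centred data matrix. This is an illustration of the algorithm itself, not the authors' chemometric pipeline; the data matrix is assumed to be samples x variables:

```python
import numpy as np

def pca(X, n_components):
    """Principal component analysis via SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)                  # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates
    loadings = Vt[:n_components]                      # variable weights
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, loadings, explained
```

Plotting the scores of the first two components is the usual way such studies visualize whether samples taken before and after treatment separate into distinct classes.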
ERIC Educational Resources Information Center
Phillips, William E.; Feng, Jay
2012-01-01
A quasi-experimental action research study with a pretest-posttest same-subject design was implemented to determine whether the flash card method and the multisensory approach have different effects on kindergarteners' achievement in sight word recognition, and which method is more effective if there is any difference. Instrumentation for pretest and…
ERIC Educational Resources Information Center
Conrad, Jack G.; Claussen, Joanne Smestad; Yang, Changwen
2002-01-01
Compares standard global information retrieval searching with more localized techniques to address the database selection problem that users often have when searching for the most relevant database, based on experiences with the Westlaw Directory. Findings indicate that a browse plus search approach in a hierarchical environment produces the most…
Retinal Vessel Segmentation: An Efficient Graph Cut Approach with Retinex and Local Phase
Zhao, Yitian; Liu, Yonghuai; Wu, Xiangqian; Harding, Simon P.; Zheng, Yalin
2015-01-01
Our application concerns the automated detection of vessels in retinal images to improve understanding of the disease mechanisms, diagnosis and treatment of retinal diseases and a number of systemic diseases. We propose a new framework for segmenting retinal vasculatures with much improved accuracy and efficiency. The proposed framework consists of three technical components: Retinex-based image inhomogeneity correction, local phase-based vessel enhancement and graph cut-based active contour segmentation. These procedures are applied in the following order. Underpinned by the Retinex theory, the inhomogeneity correction step aims to address challenges presented by image intensity inhomogeneities and the relatively low contrast of thin vessels compared to the background. The local phase enhancement technique is employed to enhance vessels for its superiority in preserving vessel edges. The graph cut-based active contour method is used for its efficiency and effectiveness in segmenting the vessels from the enhanced images using the local phase filter. We have demonstrated its performance by applying it to four public retinal image datasets (3 datasets of color fundus photography and 1 of fluorescein angiography). Statistical analysis demonstrates that each component of the framework can provide the level of performance expected. The proposed framework is compared with widely used unsupervised and supervised methods, showing that the overall framework outperforms its competitors. For example, the achieved sensitivity (0.744), specificity (0.978) and accuracy (0.953) for the DRIVE dataset are very close to those of the manual annotations obtained by the second observer. PMID:25830353
Healy, R.W.; Russell, T.F.
1993-01-01
Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods for solute transport problems that are dominated by advection. FVELLAM systematically conserves mass globally with all types of boundary conditions. Integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking of characteristic lines intersecting inflow boundaries. FVELLAM extends previous results by obtaining mass conservation locally on Lagrangian space-time elements. -from Authors
The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems
NASA Technical Reports Server (NTRS)
Cockburn, Bernardo; Shu, Chi-Wang
1997-01-01
In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods are an extension of the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries, for convection dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L(sup 2)-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.
Analytical method transfer: new descriptive approach for acceptance criteria definition.
de Fontenay, Gérald
2008-01-01
Within the pharmaceutical industry, method transfers are now commonplace during the life cycle of an analytical method. Setting acceptance criteria for analytical transfers is, however, much more difficult than usually described. Criteria which are too wide may lead to the acceptance of a laboratory providing non-equivalent results, resulting in bad release/reject decisions for pharmaceutical products (a consumer risk). On the contrary, criteria which are too tight may lead to the rejection of an equivalent laboratory, resulting in time costs and delay in the transfer process (an industrial risk). The consumer risk has to be controlled first, but this risk depends on the method capability (tolerance to method precision ratio). Analytical transfers were simulated for different scenarios (different method capabilities and transfer designs, 10,000 simulations per test). The results of the simulations showed that the method capability has a strong influence on the probability of success of its transfer. For the transfer design, the number of independent analytical runs to be performed on the same batch has much more influence than the number of replicates per run, especially when the inter-day variability of the method is high. A classic descriptive approach for analytical method transfer does not take into account the variability of the method, and therefore no risks are controlled. Tools for designing analytical transfers and defining a new descriptive acceptance criterion, which take into account the intra- and inter-day variability of the method, are provided for a better risk evaluation by non-statisticians. PMID:17961955
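The simulation logic described above can be sketched in a few lines of Monte Carlo code. This is a minimal sketch under assumed Gaussian run-level (inter-day) and replicate-level (intra-day) variability; the acceptance limit, standard deviations, and function names are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def transfer_pass_rate(n_runs, n_reps, sd_between, sd_within,
                       true_bias=0.0, limit=2.0, n_sim=10_000, seed=0):
    """Estimate the probability that the observed mean bias of the
    receiving laboratory falls within +/- limit, for a given design."""
    rng = np.random.default_rng(seed)
    # run-level (inter-day) effect: one draw per run, per simulation
    run_eff = rng.normal(0.0, sd_between, size=(n_sim, n_runs, 1))
    # replicate-level (intra-day) noise
    rep_noise = rng.normal(0.0, sd_within, size=(n_sim, n_runs, n_reps))
    results = true_bias + run_eff + rep_noise
    mean_bias = results.mean(axis=(1, 2))
    return float(np.mean(np.abs(mean_bias) <= limit))

# More independent runs shrink the inter-day contribution to the mean;
# extra replicates per run only shrink the intra-day part.
p_many_runs = transfer_pass_rate(n_runs=6, n_reps=2, sd_between=1.5, sd_within=0.5)
p_many_reps = transfer_pass_rate(n_runs=2, n_reps=6, sd_between=1.5, sd_within=0.5)
```

In this toy setup, with the same total of 12 determinations, 6 runs of 2 replicates pass far more reliably than 2 runs of 6 replicates when the inter-day component dominates, mirroring the abstract's observation about the transfer design.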
The Feldenkrais Method: a dynamic approach to changing motor behavior.
Buchanan, P A; Ulrich, B D
2001-12-01
This tutorial describes the Feldenkrais Method and points to parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais is an educational system designed to use movement and perception to foster individualized improvement in function. Moshe Feldenkrais, its originator, believed his method enhanced people's ability to discover flexible and adaptable behavior and that behaviors are self-organized. Similarly, DST explains that a human-environment system is continually adapting to changing conditions and assembling behaviors accordingly. Despite little research, Feldenkrais is being used with people of widely ranging ages and abilities in varied settings. We propose that DST provides an integrated foundation for research on the Feldenkrais Method, suggest research questions, and encourage researchers to test the fundamental tenets of Feldenkrais. PMID:11770781
NASA Astrophysics Data System (ADS)
Liu, Jiaqi; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-10-01
The locally adaptive regression kernels model can describe the edge shapes of images accurately and their overall graphic trends integrally, but it does not consider images' color information, although color is an important element of an image. Therefore, we present a novel method of target recognition based on a 3-D-color-space locally adaptive regression kernels model. Different from approaches that treat color as merely additional information, this method directly calculates local similarity features from the 3-D data of the color image. The proposed method uses a few examples of an object as a query to detect generic objects with incompact, complex and changeable shapes. Our method involves three phases: First, we calculate novel color-space descriptors from the RGB color space of the query image, which measure the likeness of a voxel to its surroundings. Salient features, which include spatial-dimensional and color-dimensional information, are extracted from these descriptors and simplified by principal components analysis (PCA) to construct a non-similar local structure feature set of the object class. Second, we compare the salient features with analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. The similar structures in the target image are then obtained using local similarity structure statistical matching. Finally, we use non-maxima suppression on the similarity image to extract the object position and mark the object in the test image. Experimental results demonstrate that our approach is effective and accurate in improving the ability to identify targets.
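The comparison step uses a matrix generalization of the cosine similarity measure. One natural form, used as an assumption here since the authors' exact generalization is not reproduced in the abstract, is the Frobenius inner product of the two feature matrices normalized by their Frobenius norms:

```python
import numpy as np

def matrix_cosine(F, G):
    """Matrix cosine similarity: Frobenius inner product of two feature
    matrices, normalized by their Frobenius norms."""
    num = np.sum(F * G)
    den = np.linalg.norm(F) * np.linalg.norm(G)
    return num / den

rng = np.random.default_rng(1)
query = rng.standard_normal((9, 4))                  # salient features of a query patch
same = query + 0.05 * rng.standard_normal((9, 4))    # near-identical target patch
other = rng.standard_normal((9, 4))                  # unrelated target patch

s_same = matrix_cosine(query, same)
s_other = matrix_cosine(query, other)
```

A patch structurally similar to the query scores near 1, while an unrelated patch scores near 0, which is what lets a threshold on the similarity image separate candidate detections from background.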
Juratli, Tareq A; Schackert, Gabriele; Krex, Dietmar
2013-09-01
Malignant gliomas are the most frequently occurring, devastating primary brain tumors, and are coupled with a poor survival rate. Although complete neurosurgical resection of these tumors is impossible given their infiltrating nature, surgical resection followed by adjuvant therapeutics, including radiation therapy and chemotherapy, is still the current standard therapy. Systemic chemotherapy is restricted by the blood-brain barrier, while methods of local delivery, such as drug-impregnated wafers, convection-enhanced drug delivery, or direct perilesional injections, present attractive ways to circumvent these barriers. These methods are promising ways to deliver either standard chemotherapeutic or new anti-cancer agents directly. Several clinical trials showed controversial results relating to the influence of locally delivered chemotherapy on the survival of patients with both recurrent and newly diagnosed malignant gliomas. Our article will review the development of drug-impregnated wafer release, as well as convection-enhanced delivery and direct injection into brain tissue, which has been used predominantly in gene-therapy trials. Further, it will focus on the use of convection-enhanced delivery in the treatment of patients with malignant gliomas, placing special emphasis on potential shortcomings in past clinical trials. Although there is a strong need for new or additional therapeutic strategies in the treatment of malignant gliomas, and although local delivery of chemotherapy in those tumors might be a powerful tool, local therapy is used only sporadically nowadays. Thus, we have to learn from our past mistakes and we strongly encourage future developments in this field. PMID:23694764
ERIC Educational Resources Information Center
Tabulawa, Richard
2011-01-01
Using a global-local dialectic approach, this paper traces the rise of the basic education programme in the 1980s and 1990s in Botswana and its subsequent attenuation in the 2000s. Amongst the local forces that led to the rise of BEP were Botswana's political project of nation-building; the country's dire human resources situation in the decades…
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributing factors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and the matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure data measured on the outer hull above the propeller, and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
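The broadband matched-field step can be sketched in a few lines: replica fields for a monopole at each candidate grid point are correlated with the measured field, and the normalized Bartlett correlations are averaged incoherently over frequency. The free-space propagation model, receiver layout, frequencies, and sound speed below are illustrative assumptions, not the tunnel configuration.

```python
import numpy as np

def monopole_field(src, receivers, k):
    """Free-space monopole pressure at each receiver (wavenumber k)."""
    r = np.linalg.norm(receivers - src, axis=1)
    return np.exp(1j * k * r) / r

def broadband_bartlett(meas, receivers, grid, freqs, c=1500.0):
    """Incoherent broadband average of normalized Bartlett correlations."""
    corr = np.zeros(len(grid))
    for m, f in zip(meas, freqs):
        k = 2 * np.pi * f / c
        m = m / np.linalg.norm(m)
        for i, g in enumerate(grid):
            w = monopole_field(g, receivers, k)
            w /= np.linalg.norm(w)
            corr[i] += np.abs(np.vdot(w, m)) ** 2
    return corr / len(freqs)

# hull-mounted receiver positions above the propeller plane (illustrative, meters)
receivers = np.array([[0.0, 0.0, 1.0], [0.4, 0.0, 1.0],
                      [0.0, 0.4, 1.0], [0.4, 0.4, 1.0]])
true_src = np.array([0.2, 0.1, 0.0])
freqs = [2000.0, 3000.0, 4000.0]
meas = [monopole_field(true_src, receivers, 2 * np.pi * f / 1500.0) for f in freqs]

grid = [np.array([x, y, 0.0]) for x in np.linspace(0, 0.4, 9)
                              for y in np.linspace(0, 0.4, 9)]
best = grid[int(np.argmax(broadband_bartlett(meas, receivers, grid, freqs)))]
```

The grid point with the highest averaged correlation is taken as the source estimate; averaging over several frequencies suppresses the sidelobe ambiguities that a single frequency would leave.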
Rapid OTAN method for localizing unsaturated lipids in lung tissue sections.
Negi, D S; Stephens, R J
1981-05-01
The OTAN treatment, which is the only histochemical method available at present for the simultaneous localization of hydrophobic and hydrophilic unsaturated lipids in tissue sections, requires unduly long exposure to OsO4 and the use of free-floating sections, which makes handling the sections difficult and often results in their loss or damage. Simple modifications using OsO4 treatment at 37°C and slide-mounted sections eliminate the practical drawbacks of the existing method and provide as good or better localization in less than one-eighth of the time. The modified method is applicable to fixed as well as fresh frozen tissues. PMID:7268814
A local fuzzy method based on “p-strong” community for detecting communities in networks
NASA Astrophysics Data System (ADS)
Yi, Shen; Gang, Ren; Yang, Liu; Jia-Li, Xu
2016-06-01
In this paper, we propose a local fuzzy method based on the idea of a “p-strong” community to detect the disjoint and overlapping communities in networks. In the method, a refined agglomeration rule is designed for agglomerating nodes into local communities, and the overlapping nodes are detected based on the idea of making each community strong. We propose a contribution coefficient to measure the contribution of an overlapping node to each of its belonging communities, and the fuzzy coefficients of the overlapping node can be obtained by normalizing these contribution coefficients over all its belonging communities. The running time of our method is analyzed and shown to vary linearly with network size. We investigate our method on computer-generated and real networks. The testing results indicate that the accuracy of our method in detecting disjoint communities is higher than those of the existing local methods, and our method is efficient for detecting overlapping nodes with fuzzy coefficients. Furthermore, the local optimizing scheme used in our method allows us to partly solve the resolution problem of the global modularity. Project supported by the National Natural Science Foundation of China (Grant Nos. 51278101 and 51578149), the Science and Technology Program of Ministry of Transport of China (Grant No. 2015318J33080), the Jiangsu Provincial Post-doctoral Science Foundation, China (Grant No. 1501046B), and the Fundamental Research Funds for the Central Universities, China (Grant No. Y0201500219).
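The normalization of contribution coefficients into fuzzy coefficients can be sketched directly. The contribution definition below, the fraction of a node's links that fall inside a community, is an illustrative stand-in since the paper's exact coefficient is not reproduced in the abstract; the normalization step matches the description.

```python
def contribution(node, community, adjacency):
    """Illustrative contribution coefficient: the fraction of `node`'s
    links that go into `community`."""
    neigh = adjacency[node]
    return sum(1 for n in neigh if n in community) / len(neigh)

def fuzzy_coefficients(node, communities, adjacency):
    """Normalize the node's contribution coefficients over all of its
    belonging communities, so the fuzzy memberships sum to one."""
    raw = {name: contribution(node, members, adjacency)
           for name, members in communities.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# overlapping node 0 with neighbours in two communities (node 3 is shared)
adjacency = {0: [1, 2, 3, 4]}
communities = {"A": {1, 2, 3}, "B": {3, 4}}
coeffs = fuzzy_coefficients(0, communities, adjacency)
```

Here node 0 sends 3 of its 4 links into A and 2 into B, so after normalization its fuzzy memberships are 0.6 for A and 0.4 for B, summing to one as required.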
A Local Discontinuous Galerkin Method for the Complex Modified KdV Equation
Li Wenting; Jiang Kun
2010-09-30
In this paper, we develop a local discontinuous Galerkin (LDG) method for solving the complex modified KdV (CMKdV) equation. The LDG method has the flexibility for arbitrary h and p adaptivity. We prove the L² stability for general solutions.
Remotely actuated localized pressure and heat apparatus and method of use
NASA Technical Reports Server (NTRS)
Merret, John B. (Inventor); Taylor, DeVor R. (Inventor); Wheeler, Mark M. (Inventor); Gale, Dan R. (Inventor)
2004-01-01
Apparatus and method for the use of a remotely actuated localized pressure and heat apparatus for the consolidation and curing of fiber elements in structures. The apparatus includes members for clamping the desired portion of the fiber elements to be joined, pressure members and/or heat members. The method is directed to the application and use of the apparatus.
Local Analysis via the Real Space Green’s Function Method
NASA Astrophysics Data System (ADS)
Wu, Shi-Yu; Jayanthi, Chakram S.
A complete account of the development of the method of real space Green’s function is given in this review. The emphasis is placed on the calculation of the local Green’s function in a real space representation. The discussion is centered on a list of issues particularly relevant to the study of properties of complex systems with reduced symmetry. They include: (i) the convergence procedure for calculating the local Green’s function of infinite systems without any boundary effects associated with an arbitrary truncation of the system; (ii) a general recursive relation which streamlines the calculation of the local Green’s function; (iii) the calculation of the eigenvectors of selected eigenvalues directly from the Green’s function. An example of the application of the method to carry out a local analysis of the dynamics of the Au(511) surface is also presented.
Healy, R.W.; Russell, T.F.
1992-01-01
A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each well. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
Mocerino, Carmela; Iannaci, Giuseppe; Sapere, Patrizia; Luise, Rossella; Canonico, Silvestro; Gambardella, Antonio
2016-09-01
Angiosarcomas are malignant tumors of endovascular origin, which may be divided into primary and secondary forms. Secondary breast angiosarcomas are an increasing problem, especially in patients treated with breast-conserving surgery followed by radiotherapy. We report a case of radiation-induced angiosarcoma of the breast in a 77-year-old woman who presented with a suspect lesion in her left breast. Excisional biopsy and subsequent immunohistochemical staining of the specimen were performed. The histological report was diagnostic for low-intermediate grade angiosarcoma. The tumor cells were diffusely positive for CD31 and CD34. We performed surgical resection with mastectomy. A multidisciplinary approach with bleomycin-based electrochemotherapy, radiation treatment, and chemotherapy with pegylated liposomal doxorubicin has been most useful in controlling subsequent local relapses. To date, the patient is under close observation and is performing well. No recurrence has been demonstrated since the end of chemotherapy. PMID:26872968
NASA Astrophysics Data System (ADS)
Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.
2016-04-01
The application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity typical of this class of problems require robust and efficient inverse-problem techniques able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, the Nelder-Mead downhill simplex algorithm, Particle Swarm Optimization (PSO), and a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has been shown to be the best strategy, combining the good performance of PSO in approaching the basin of attraction of the global minimum with the efficiency demonstrated by the Nelder-Mead algorithm in obtaining the minimum itself.
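The global-local idea reads, in sketch form: run a population-based global search, then hand its best point to a derivative-free local algorithm. In the minimal sketch below a plain PSO feeds a simple pattern search (standing in for the Nelder-Mead stage), and the Rosenbrock function stands in for the inelastic-parameter fitness surface; all parameter values are illustrative assumptions.

```python
import numpy as np

def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def pso(f, bounds, n_particles=30, iters=200, seed=0):
    """Plain global-best PSO: locate the basin of the global minimum."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

def pattern_search(f, x0, step=0.1, tol=1e-8):
    """Derivative-free local polish (stands in for the Nelder-Mead stage)."""
    x = np.asarray(x0, float)
    while step > tol:
        moved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x, moved = trial, True
        if not moved:
            step *= 0.5
    return x

lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
best = pattern_search(rosenbrock, pso(rosenbrock, (lo, hi)))
```

The division of labor mirrors the abstract's finding: the population-based stage is good at reaching the basin of the global minimum but slow to refine, while the local stage finishes the descent cheaply.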
NASA Astrophysics Data System (ADS)
Asimov, M. M.; Asimov, R. M.; Rubinov, A. N.
2007-06-01
A new approach to laser-optical diagnostics of cell metabolism, based on visualization of the local network of tissue blood vessels, is proposed. An optical model of laser-tissue interaction and an algorithm for the mathematical calculation of the optical signals are developed. A novel technology for eliminating local tissue hypoxia, based on laser-induced photodissociation of oxyhemoglobin in cutaneous blood vessels, is developed. A method for determining the oxygen diffusion coefficient in tissue from the kinetics of tissue oxygenation (TcPO2) under laser irradiation is proposed. The results of mathematical modeling of the kinetics of oxygen distribution into tissue from arterial blood are presented. The possibility of calculating and determining the level of TcPO2 in zones with disturbed blood microcirculation is demonstrated. An increase in the oxygen release rate of more than four times under irradiation by laser light is obtained. It is shown that the efficiency of laser-induced oxygenation by means of increasing the oxygen concentration in blood plasma is comparable with that of hyperbaric oxygenation (HBO), while gaining the advantage of local action. Different biomedical applications of the developed method are discussed.
Local adaptive approach toward segmentation of microscopic images of activated sludge flocs
NASA Astrophysics Data System (ADS)
Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon
2015-11-01
The activated sludge process is a widely used method to treat domestic and industrial effluents. The conditions of an activated sludge wastewater treatment plant (AS-WWTP) are related to the morphological properties of flocs (microbial aggregates) and filaments, and are required to be monitored for normal operation of the plant. Image processing and analysis is a potential time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu thresholding-based local adaptive algorithms with irregular illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-mean. The comparisons are done using a number of region- and nonregion-based metrics at different microscopic magnifications and quantifications of flocs. The performance metrics show that the proposed algorithms performed better and, in some cases, were comparable to the state-of-the-art algorithms. The performance metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. Region-based metrics such as the false negative ratio, sensitivity, and negative predictive value gave inconsistent results compared to the other segmentation assessment metrics.
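A minimal sketch of Otsu-based local adaptive thresholding: compute an Otsu threshold per tile, so each tile adapts to its own illumination level. The tile size, the synthetic image, and the dark-foreground convention are assumptions; the paper's two illumination-compensation modules are not reproduced here.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's threshold: maximize the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 weight up to each bin
    mu = np.cumsum(p * centers)       # class-0 cumulative mean mass
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

def local_otsu_segment(img, tile=32):
    """Threshold each tile with its own Otsu level to compensate for
    uneven illumination across the micrograph."""
    out = np.zeros_like(img, dtype=bool)
    for r in range(0, img.shape[0], tile):
        for c in range(0, img.shape[1], tile):
            block = img[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = block < otsu_threshold(block.ravel())
    return out  # True = foreground (dark flocs on a bright background)

# synthetic micrograph: bright background with an illumination gradient
img = np.full((64, 64), 200.0) + np.linspace(0.0, 40.0, 64)[None, :]
img[10:20, 10:20] = 50.0   # dark floc
img[40:50, 40:50] = 80.0   # dimmer floc in the brighter region
mask = local_otsu_segment(img, tile=32)
```

Because each tile picks its own threshold, both the dark floc and the dimmer floc are segmented despite the background gradient between them.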
First-Principles Theory of Momentum Dependent Local Ansatz Approach to Correlated Electron System
NASA Astrophysics Data System (ADS)
Chandra, Sumal; Kakehashi, Yoshiro
2016-06-01
We have extended the momentum-dependent local-ansatz (MLA) wavefunction method to the first-principles version using the tight-binding LDA+U Hamiltonian for the description of correlated electrons in the real system. The MLA reduces to the Rayleigh-Schrödinger perturbation theory in the weak correlation limit, and describes quantitatively the ground state and related low-energy excitations in solids. The theory has been applied to the paramagnetic Fe. The role of electron correlations on the energy, charge fluctuations, amplitude of local moment, momentum distribution functions, as well as the mass enhancement factor in Fe has been examined as a function of Coulomb interaction strength. It is shown that the inter-orbital charge-charge correlations between d electrons make a significant contribution to the correlation energy and charge fluctuations, while the intra-orbital and inter-orbital spin-spin correlations make a dominant contribution to the amplitude of local moment and the mass enhancement in Fe. Calculated partial mass enhancements are found to be 1.01, 1.01, and 3.33 for s, p, and d electrons, respectively. The averaged mass enhancement 1.65 is shown to be consistent with the experimental data as well as the recent results of theoretical calculations.
Multiple Dipole Sources Localization from the Scalp EEG Using a High-resolution Subspace Approach.
Ding, Lei; He, Bin
2005-01-01
We have developed a new algorithm, FINE, to enhance the spatial resolution and localization accuracy for closely spaced sources within the framework of subspace source localization. Computer simulations were conducted in the present study to evaluate the performance of FINE, as compared with classic subspace source localization algorithms, i.e. MUSIC and RAP-MUSIC, in a realistic-geometry head model by means of the boundary element method (BEM). The results show that FINE could distinguish superficial simulated sources with distances as low as 8.5 mm and deep simulated sources with distances as low as 16.3 mm. Our results also show that the accuracy of source orientation estimates from FINE is better than that of MUSIC and RAP-MUSIC for closely spaced sources. Motor potentials, obtained during finger movements in a human subject, were analyzed using FINE. The detailed neural activity distribution within the contralateral premotor areas and supplementary motor area (SMA) is revealed by FINE as compared with MUSIC. The present study suggests that FINE has excellent spatial resolution in imaging neural sources. PMID:17282374
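The subspace principle that MUSIC, RAP-MUSIC, and FINE share can be sketched compactly: split the measurement covariance into signal and noise subspaces, then score each candidate source location by how orthogonal its gain vector is to the noise subspace. The random "lead field" below is a hypothetical stand-in for a BEM head model, and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lead field: each column maps one of 40 candidate source
# locations to measurements at 16 scalp electrodes.
n_sensors, n_grid = 16, 40
L = rng.standard_normal((n_sensors, n_grid))
L /= np.linalg.norm(L, axis=0)

true_idx = [7, 23]                        # two active sources
n_snap = 500
amps = rng.standard_normal((2, n_snap))   # independent source time courses
data = L[:, true_idx] @ amps + 0.05 * rng.standard_normal((n_sensors, n_snap))

# Signal subspace = leading eigenvectors of the sample covariance.
cov = data @ data.T / n_snap
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
noise_sub = eigvecs[:, :-2]               # all but the 2 largest

# MUSIC metric: a candidate whose gain vector is (nearly) orthogonal to
# the noise subspace scores near zero; the smallest scores localize sources.
proj = np.linalg.norm(noise_sub.T @ L, axis=0) ** 2
found = np.argsort(proj)[:2]
```

FINE refines this scan with a more selective subspace projection for closely spaced sources, but the scoring skeleton above is the common core.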
An Approach to Estimate the Localized Effects of an Aircraft Crash on a Facility
Kimura, C; Sanzo, D; Sharirli, M
2004-04-19
Aircraft crashes are an element of external events required to be analyzed and documented in facility Safety Analysis Reports (SARs) and Nuclear Explosive Safety Studies (NESSs). This paper discusses the localized effects of an aircraft crash impact into the Device Assembly Facility (DAF) located at the Nevada Test Site (NTS), given that the aircraft hits the facility. This was done to gain insight into the robustness of the DAF and to account for the special features of the DAF that enhance its ability to absorb the effects of an aircraft crash. For the purpose of this paper, localized effects are considered to be only perforation or scabbing of the facility. This paper presents an extension to the aircraft crash risk methodology of Department of Energy (DOE) Standard 3014. This extension applies to facilities that may find it necessary or desirable to estimate the localized effects of an aircraft crash hit on a facility of nonuniform construction or one that is shielded in certain directions by surrounding terrain or buildings. This extension is not proposed as a replacement to the aircraft crash risk methodology of DOE Standard 3014 but rather as an alternate method to cover situations that were not considered.
A formative multi-method approach to evaluating training.
Hayes, Holly; Scott, Victoria; Abraczinskas, Michelle; Scaccia, Jonathan; Stout, Soma; Wandersman, Abraham
2016-10-01
This article describes how we used a formative multi-method evaluation approach to gather real-time information about the processes of a complex, multi-day training with 24 community coalitions in the United States. The evaluation team used seven distinct, evaluation strategies to obtain evaluation data from the first Community Health Improvement Leadership Academy (CHILA) within a three-prong framework (inquiry, observation, and reflection). These methods included: comprehensive survey, rapid feedback form, learning wall, observational form, team debrief, social network analysis and critical moments reflection. The seven distinct methods allowed for both real time quality improvement during the CHILA and long term planning for the next CHILA. The methods also gave a comprehensive picture of the CHILA, which when synthesized allowed the evaluation team to assess the effectiveness of a training designed to tap into natural community strengths and accelerate health improvement. We hope that these formative evaluation methods can continue to be refined and used by others to evaluate training. PMID:27454882
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jarosław J.
2011-09-01
Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm, probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other
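The first method in the list, the original Differential Evolution, is compact enough to sketch in full. This is DE/rand/1/bin on a toy sphere objective; the population size, F, CR, and the objective are illustrative assumptions, not the study's rainfall-runoff setup.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9,
                           gens=200, seed=0):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    random members, binomial crossover, then greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(np_, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.choice([j for j in range(np_) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # at least one gene crosses
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            if (ft := f(trial)) < fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], float(fit.min())

lo, hi = np.full(5, -5.0), np.full(5, 5.0)
best_x, best_f = differential_evolution(sphere, (lo, hi))
```

The same loop, with the objective replaced by a network's training error as a function of its weight vector, is how a DE variant trains a multi-layer perceptron in the setting the abstract describes.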
NASA Astrophysics Data System (ADS)
Barbarella, E.; Allix, O.; Daghia, F.; Lamon, J.; Jollivet, T.
2016-06-01
Compressive tests involving buckling are known to be defect-sensitive; nevertheless, to our knowledge, no inverse approach has yet been proposed to use this property for the localization and characterization of material defects. This is due to geometric imperfections, which greatly influence and even dominate the response of defective parts under compression. In comparison with a system lacking geometric imperfections, the modified system does not present any bifurcation, showing that the non-linear progressive response is mainly governed by such imperfections. Before implementing any inverse procedure, it is necessary to know whether it is possible to extract meaningful material-defect information from compressive tests on specimens which also have geometric imperfections. To tackle this issue, an equivalent eigenvalue problem, corrected for geometric imperfections, is extracted from the non-linear response. A dedicated inverse formulation based on the modified constitutive relation error is then constructed, which involves only well-posed linear problems. Examples illustrate the potential of the methodology to localize and identify single and multiple defects.
MacDonald, Laura; Baldini, Giulia; Storrie, Brian
2015-01-01
Conventional microscopy techniques, namely the confocal microscope or deconvolution processes, are resolution-limited to ~250 nm by the diffraction properties of light, as described by Ernst Abbe in 1873. This diffraction limit is appreciably above the size of most multi-protein complexes, which are typically 20–50 nm in diameter. In the mid 2000s, biophysicists moved beyond the diffraction barrier by structuring the illumination pattern and then applying mathematical principles and algorithms to allow a resolution of approximately 100 nm, sufficient to address protein subcellular colocalization questions. This “breaking” of the diffraction barrier, affording resolution beyond 200 nm, is termed super resolution microscopy. More recent approaches include single molecule localization (such as PhotoActivated Localization Microscopy (PALM)/STochastic Optical Reconstruction Microscopy (STORM)) and point spread function engineering (such as STimulated Emission Depletion (STED) microscopy). In this review, we explain the basic principles behind currently commercialized super resolution setups and address advantages and considerations in applying these techniques to protein colocalization in biological systems. PMID:25702123
Compression-RSA: New approach of encryption and decryption method
NASA Astrophysics Data System (ADS)
Hung, Chang Ee; Mandangan, Arif
2013-04-01
The Rivest-Shamir-Adleman (RSA) cryptosystem is a well-known asymmetric cryptosystem that has been applied in a very wide range of areas. Many studies with different approaches have been carried out to improve the security and performance of the RSA cryptosystem. The enhancement of the performance of the RSA cryptosystem is our main interest. In this paper, we propose a new method to increase the efficiency of RSA by shortening the plaintext before it undergoes the encryption process, without affecting the original content of the plaintext. The concept of simple continued fractions and their special relationship with the Euclidean algorithm are applied in this newly proposed method. By reducing the number of plaintext-ciphertext blocks, the encryption-decryption process of a secret message can be accelerated.
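The relationship the method builds on is that the quotients produced by the Euclidean algorithm on a pair of integers are exactly the partial quotients of their continued fraction expansion. A minimal sketch of that correspondence follows; the compression scheme itself, i.e. how plaintext maps to such integer pairs, is not reproduced here.

```python
def continued_fraction(a, b):
    """Partial quotients of a/b via the Euclidean algorithm: the quotient
    of each division step is the next continued-fraction term."""
    terms = []
    while b:
        q, r = divmod(a, b)
        terms.append(q)
        a, b = b, r
    return terms

def from_continued_fraction(terms):
    """Fold the partial quotients back into a (numerator, denominator) pair."""
    num, den = terms[-1], 1
    for q in reversed(terms[:-1]):
        num, den = q * num + den, num
    return num, den

# 649/200 has the continued fraction [3; 4, 12, 4]
terms = continued_fraction(649, 200)
restored = from_continued_fraction(terms)
```

Running the Euclidean algorithm on 649 and 200 yields the quotients 3, 4, 12, 4, and folding them back reproduces 649/200 exactly, so the representation is lossless in both directions.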
NASA Technical Reports Server (NTRS)
Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
In this paper we review existing, and develop new, local discontinuous Galerkin methods for solving time-dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection-diffusion equations involving second derivatives and for KdV-type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for time-dependent bi-harmonic type equations involving fourth derivatives, and for partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L² stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, applied to the local discontinuous Galerkin methods for equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher-derivative cases, effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, on a locally uniform mesh.
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons, and source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460 × 420 × 200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one, and a maximum distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one, and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
A fictitious domain approach for the Stokes problem based on the extended finite element method
NASA Astrophysics Data System (ADS)
Court, Sébastien; Fournié, Michel; Lozinski, Alexei
2014-01-01
In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not match the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid the inf-sup condition between the spaces for the velocity and the Lagrange multiplier. A convergence analysis is given and several numerical tests are performed to illustrate the capabilities of the method.
NASA Astrophysics Data System (ADS)
Lu, Y. G.; Zhang, X. P.; Dong, Y. M.; Wang, F.; Liu, Y. H.
2007-07-01
A novel optical cable fault location method based on the Brillouin optical time domain reflectometer (BOTDR) and localized cable heating is proposed and demonstrated. In the method, a BOTDR apparatus is used to measure the optical loss and strain distribution along the fiber in an optical cable, and a heating device is used to heat the cable at a certain local site. Experimental results show that the proposed method works effectively without complicated calculations. By means of the new method, we successfully located an optical cable fault in the 60 km optical fiber composite power cable from Shanghai to Shengshi, Zhejiang, achieving a fault location accuracy of 1 meter. The fault location uncertainty of the new method is at least one order of magnitude smaller than that of the traditional OTDR method.
Strain localization in shear zones during exhumation: a graphical approach to facies interpretation
NASA Astrophysics Data System (ADS)
Cardello, Giovanni Luca; Augier, Romain; Laurent, Valentin; Roche, Vincent; Jolivet, Laurent
2015-04-01
Strain localization is a fundamental process in plate tectonics. In the ductile field it is expressed by shear zones where strain concentrates. Despite their worldwide distribution in most metamorphic units, their detailed characterization and the comprehension of the underlying processes are far from being fully addressed. In this work, a graphic approach to tectono-metamorphic facies identification is applied to the Delfini Shear Zone in Syros (Cyclades, Greece), which is mostly characterized by metabasites displaying different degrees of retrogression, from fresh eclogite to prasinite. Several exhumation mechanisms brought them from the depths of the subduction zone to the surface, from syn-orogenic exhumation to post-orogenic backarc extension. Boudinage, grain-size reduction and metamorphic reactions determine strain localization across well-deformed volumes of rock organized in a hierarchic frame of smaller individual shear zones (10-25 meters thick). The most representative of them can be subdivided into five tectono-metamorphic (Tm) facies, TmA to TmE. TmA records HP witnesses and older folding stages preserved within large boudins as large as 1-2 m across. TmB is characterized by much smaller and progressively more asymmetric boudins and sigmoids. TmC is defined by well-transposed sub- to plane-parallel blueschist textures crossed by chlorite shear bands bounding the newly formed boudins. As strain increases (facies TmD-E), the texture is progressively retrograded to LP-HT greenschist-facies conditions. These observations allowed us to establish a sequence of stages of strain localization. The first stage (1) is characterized by quite symmetric folding and boudinage. In a second stage (2), grain-size reduction is associated with the formation of dense shear bands along previously formed glaucophane- and quartz-rich veins. With progressively more localized strain, mode-I veins may arrange as tension gashes that gradually evolve into blueschist shear bands. This process determines the
Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei
2011-07-15
Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images acquired during treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and can therefore be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
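The core Bayesian step, resolving the third coordinate by maximizing a posterior built from a prior and the current measurement, can be illustrated in one dimension. Under a Gaussian prior and Gaussian likelihood the posterior maximum is the precision-weighted mean; this 1-D reduction and the function names are our illustrative assumptions, not the authors' code.

```python
def map_depth(prior_mean, prior_var, measured, meas_var):
    """MAP estimate of an unresolved coordinate: Gaussian prior (from setup
    images) combined with a Gaussian likelihood (current measurement).
    The posterior is Gaussian, so its maximizer is the precision-weighted
    mean of the two sources of information."""
    w_prior = 1.0 / prior_var
    w_meas = 1.0 / meas_var
    return (w_prior * prior_mean + w_meas * measured) / (w_prior + w_meas)
```

With equal variances the estimate sits halfway between prior and measurement; a very confident measurement (small variance) dominates, mirroring how later projections refine the setup prior.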
Classical convergence versus Zipf rank approach: Evidence from China's local-level data
NASA Astrophysics Data System (ADS)
Tang, Pan; Zhang, Ying; Baaquie, Belal E.; Podobnik, Boris
2016-02-01
This paper applies the Zipf rank approach to measure how long it will take for individual economies to reach the final state of equilibrium, using local-level data on China's urban areas. Two indicators, the gross domestic product (GDP) per capita and the market capitalization (MCAP) per capita of 150 major cities in China, are used for analyzing their convergence. In addition, the power-law relationship is examined for GDP and MCAP. Our findings show that, compared to the classical approaches of β-convergence and σ-convergence, the Zipf ranking predicts that, in approximately 16 years, all the major cities in China will reach comparable values of GDP per capita. The MCAP per capita, however, tends to follow the periodic fluctuation of the economic cycle, and the mean log deviation (MLD) confirms the results of our study. Moreover, GDP per capita and MCAP per capita follow a power law with an average exponent of α = 0.41, which is higher than the α = 0.38 obtained from a large number of countries around the world.
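The power-law (rank-size) exponent mentioned above is conventionally estimated by regressing log value on log rank. A minimal stdlib sketch of that fit follows; the synthetic data and function name are illustrative, and the paper's city data and α ≈ 0.41 are not reproduced.

```python
import math

def zipf_alpha(values):
    """Estimate the Zipf exponent alpha from a rank-size relation
    value ~ C * rank^(-alpha) via least-squares on log-log data."""
    ranked = sorted(values, reverse=True)
    xs = [math.log(r + 1) for r in range(len(ranked))]   # log rank
    ys = [math.log(v) for v in ranked]                   # log value
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope          # alpha > 0 for a decaying rank-size law

# synthetic data following an exact power law with alpha = 0.5
data = [100.0 * (r + 1) ** -0.5 for r in range(50)]
```

On real city data the log-log scatter is not exact, so the regression slope is only an estimate of α.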
63,65Cu NMR Method in a Local Field for Investigation of Copper Ore Concentrates
NASA Astrophysics Data System (ADS)
Gavrilenko, A. N.; Starykh, R. V.; Khabibullin, I. Kh.; Matukhin, V. L.
2015-01-01
To choose the most efficient method and ore beneficiation flow diagram, it is important to know the physical and chemical properties of ore concentrates. The feasibility of applying the 63,65Cu nuclear magnetic resonance (NMR) method in a local field to studying the properties of copper ore concentrates in the copper-iron-sulfur system is demonstrated. The 63,65Cu NMR spectrum of a copper concentrate sample is measured in a local field, and the relaxation parameters (times T1 and T2) are obtained. The spectrum was used to identify a mineral (chalcopyrite) contained in the concentrate. Based on the experimental data, comparative characteristics of natural chalcopyrite and beneficiated copper concentrate are given. The feasibility of applying the NMR method in a local field to the exploration of mineral deposits is analyzed.
A Localized Meshless Approach for Modeling Spatial-temporal Calcium Dynamics in Ventricular Myocytes
Yao, Guangming; Yu, Zeyun
2011-01-01
Spatial-temporal calcium dynamics due to calcium release, buffering and re-uptake plays a central role in studying excitation-contraction (E-C) coupling in both normal and diseased cardiac myocytes. In this paper, we employ a meshless method, namely the local radial basis function collocation method (LRBFCM), to model such calcium behavior by solving a nonlinear system of reaction-diffusion partial differential equations. In particular, a simplified structural unit containing a single transverse tubule (or t-tubule) and its surrounding half sarcomeres is investigated using the meshless method. Numerical results are compared to those generated by finite element methods, showing the capability and efficiency of the LRBFCM in modeling calcium dynamics in ventricular myocytes. The single t-tubule model is also extended to the whole-cell scale, with t-tubules excluded, to demonstrate the scalability of the proposed meshless method in handling very large domains. The experiments have shown that the LRBFCM is suitable for multi-scale modeling of calcium dynamics in ventricular myocytes with high accuracy and efficiency. PMID:22408720
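The building block of RBF collocation methods like the one above is interpolation on scattered nodes: expand the unknown in radial basis functions centred at the nodes and solve the resulting collocation system. A tiny 1-D sketch with multiquadric RBFs follows; the node set, shape parameter, and naive linear solver are our illustrative choices, not the LRBFCM implementation.

```python
import math

def rbf(r, c=1.0):
    """Multiquadric radial basis function."""
    return math.sqrt(r * r + c * c)

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(b)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

nodes = [0.0, 0.5, 1.0]
f = [v * v for v in nodes]                 # sample f(x) = x^2 at the nodes
A = [[rbf(abs(xi - xj)) for xj in nodes] for xi in nodes]
w = solve(A, f[:])                         # collocation weights

def interp(x):
    """Evaluate the RBF interpolant at x."""
    return sum(wi * rbf(abs(x - xi)) for wi, xi in zip(w, nodes))
```

In the actual method the same local expansions are differentiated to discretize the reaction-diffusion operators, but the interpolation step is where the "meshless" character comes from.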
A new method for matched field localization based on two-hydrophone
NASA Astrophysics Data System (ADS)
Li, Kun; Fang, Shi-liang
2015-03-01
Conventional matched field processing (MFP) uses large vertical arrays to locate an underwater acoustic target. However, the use of large vertical arrays increases equipment and computational costs and causes problems such as element failures and array tilting that degrade localization performance. In this paper, a matched field localization method using two hydrophones is proposed for underwater acoustic pulse signals with an unknown emitted signal waveform. Using the received signals of the hydrophones and the ocean channel impulse response, which can be calculated from an acoustic propagation model, the spectral matrix of the emitted signal for different source locations can be estimated by the method of frequency-domain least squares. The resulting spectral matrix of the emitted signal for every grid region is then multiplied by the ocean channel frequency response matrix to generate the spectral matrix of the replica signal. Finally, the source location can be estimated by comparing the difference between the spectral matrices of the received signal and the replica signal. Simulation results for broadband signals in a shallow water environment demonstrate the significant localization performance of the proposed method. In addition, the localization accuracy in five different cases is analyzed in the simulation trial, and the results show that the proposed method has a sharp peak and low sidelobes, overcoming the problem of high sidelobes in conventional MFP due to the limited number of elements.
A Method for Non-Rigid Face Alignment via Combining Local and Holistic Matching
Yang, Yang; Chen, Zhuo
2016-01-01
We propose a method for non-rigid face alignment which needs only a single template, such as using a person's smiling face to match their surprised face. First, in order to be robust to outliers caused by complex geometric deformations, a new local feature matching method called K Patch Pairs (K-PP) is proposed. Specifically, inspired by the state-of-the-art similarity measures used in template matching, K-PP finds the mutual K nearest neighbors between two images. A weight matrix is then presented to balance the similarity and the number of local matches. Second, we propose a modified Lucas-Kanade algorithm combined with a local matching constraint to solve the non-rigid face alignment, so that a holistic face representation and local features can be jointly modeled in the objective function. Our method thus combines the flexibility of local matching with the robustness of holistic fitting. Furthermore, we show that the optimization problem can be efficiently solved by the inverse compositional algorithm. Comparison results with conventional methods demonstrate our superiority in terms of both accuracy and robustness. PMID:27494319
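The mutual-K-nearest-neighbour idea behind K-PP can be sketched directly: a patch pair (i, j) is kept only if j is among i's K nearest patches and i is among j's. The toy descriptors and plain Euclidean distance below are our illustrative assumptions; the paper's patch features and weight matrix are not reproduced.

```python
def knn(idx, src, dst, k):
    """Indices of the k nearest descriptors in dst to src[idx] (squared
    Euclidean distance)."""
    order = sorted(range(len(dst)),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(src[idx], dst[j])))
    return set(order[:k])

def mutual_knn_pairs(A, B, k):
    """Keep only pairs (i, j) that are each other's k-nearest neighbours."""
    fwd = {i: knn(i, A, B, k) for i in range(len(A))}
    bwd = {j: knn(j, B, A, k) for j in range(len(B))}
    return [(i, j) for i in fwd for j in fwd[i] if i in bwd[j]]

# toy patch descriptors: B's third patch is an outlier with no mutual match
A = [(0.0, 0.0), (5.0, 5.0)]
B = [(0.0, 1.0), (5.0, 4.0), (10.0, 10.0)]
pairs = mutual_knn_pairs(A, B, k=1)
```

The mutuality requirement is what rejects outliers: the distant patch in B still has a nearest neighbour in A, but that neighbour does not point back.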
NASA Astrophysics Data System (ADS)
Shin, Seungwon; Yoon, Ikroh; Juric, Damir
2011-07-01
We present a new interface reconstruction technique, the Local Front Reconstruction Method (LFRM), for incompressible multiphase flows. The new method falls in the category of front tracking methods, but it shares the automatic topology handling characteristics of the previously proposed Level Contour Reconstruction Method (LCRM). The LFRM tracks the phase interface explicitly, as in front tracking, but there is no logical connectivity between interface elements, which greatly eases the algorithmic complexity. Topological changes such as interfacial merging or pinch-off are dealt with automatically and naturally, as in the LCRM. Here the method is described for both two- and three-dimensional flow geometries. The interfacial reconstruction technique in the LFRM differs from that in the LCRM formulation by forgoing the use of an Eulerian distance field function. Instead, the LFRM uses information from the original interface elements directly to generate the new interface in a mass-conservative way, thus showing significantly improved local mass conservation. Because the reconstruction procedure is carried out independently in each individual reconstruction cell after an initial localization process, an adaptive reconstruction procedure can be easily implemented to increase the accuracy while at the same time significantly decreasing the computational time required to perform the reconstruction. Several benchmark tests are performed to validate the improved accuracy and computational efficiency as compared to the LCRM. The results demonstrate the superior performance of the LFRM in maintaining detailed interfacial shapes and good local mass conservation, especially when using low-resolution Eulerian grids.
Improved Local Wavenumber Methods in the Interpretation of Potential Field Data
NASA Astrophysics Data System (ADS)
Ma, Guoqing
2013-04-01
We present two new potential-field inversion methods for estimating the depth and the nature (structural index) of a source, which use various combinations of different forms of local wavenumbers and information about the horizontal location to estimate the depth and the nature of a magnetic source individually. The improved local wavenumber methods use only the horizontal and vertical offsets of local wavenumbers to estimate the depth and the structural index of the source, so they yield more stable results than current methods that require the derivatives of local wavenumbers. Tests conducted with synthetic noise-free and noise-corrupted magnetic data show that the proposed methods can successfully estimate the depth and the nature of the geologic body. However, our methods are sensitive to high-wavenumber noise present in the data, and we reduced the noise effect by upward continuing the noise-corrupted magnetic data. The practical application of the new methods is tested on a real magnetic anomaly over a dike whose source parameters are known, and the inversion results are consistent with the true values.
NASA Astrophysics Data System (ADS)
Uhl, Dieter; Bruch, Angela A.; Traiser, Christopher; Klotz, Stefan
2006-11-01
We present a detailed palaeoclimate analysis of the Middle Miocene (uppermost Badenian-lowermost Sarmatian) Schrotzburg locality in S Germany, based on the fossil macro- and micro-flora, using four different methods for the estimation of palaeoclimate parameters: the coexistence approach (CA), leaf margin analysis (LMA), the Climate-Leaf Analysis Multivariate Program (CLAMP), as well as a recently developed multivariate leaf physiognomic approach based on a European calibration dataset (ELPA). Considering the results of all methods used, the following palaeoclimate estimates seem most likely: mean annual temperature (MAT) ~15-16 °C, coldest month mean temperature (CMMT) ~7 °C, warmest month mean temperature between 25 and 26 °C, and mean annual precipitation ~1,300 mm, although CMMT values may have been colder, as indicated by the disappearance of the crocodile Diplocynodon and the temperature thresholds derived from modern alligators. For most palaeoclimatic parameters, estimates derived by CLAMP differ significantly from those derived by most other methods. With respect to the consistency of the results obtained by CA, LMA and ELPA, it is suggested that for the Schrotzburg locality CLAMP is probably less reliable than the other methods. A possible explanation may be attributed to the correlation between leaf physiognomy and climate as represented by the CLAMP calibration dataset, which is largely based on extant floras from N America and E Asia and may not be suitable for application to the European Neogene. All physiognomic methods used here were affected by taphonomic biases; the number of taxa in particular had a great influence on the reliability of the palaeoclimate estimates. Both multivariate leaf physiognomic approaches are less influenced by such biases than the univariate LMA. In combination with previously published results from the European and Asian Neogene, our data suggest that during the Neogene in Eurasia CLAMP may produce temperature
Ab initio methods for nuclear properties - a computational physics approach
NASA Astrophysics Data System (ADS)
Maris, Pieter
2011-04-01
A microscopic theory for the structure and reactions of light nuclei poses formidable challenges for high-performance computing. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no-core full configuration (NCFC) approach is based on basis space expansion methods and uses Slater determinants of single-nucleon basis functions to express the nuclear wave function. In this approach, the quantum many-particle problem becomes a large sparse matrix eigenvalue problem. The eigenvalues of this matrix give us the binding energies, and the corresponding eigenvectors the nuclear wave functions. These wave functions can be employed to evaluate experimental quantities. In order to reach numerical convergence for fundamental problems of interest, the matrix dimension often exceeds 1 billion, and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. I discuss different strategies for distributing and solving this large sparse matrix on current multicore computer architectures, including methods to deal with the memory bottleneck. Several of these strategies have been implemented in MFDn, a parallel Fortran code for nuclear structure calculations. I will show scaling behavior and compare the performance of the pure MPI version with the hybrid MPI/OpenMP code on Cray XT4 and XT5 platforms. For large core counts (typically 5,000 and above), the hybrid version is more efficient than pure MPI. With this code, we have been able to predict properties of the unstable nucleus 14F, which have since been confirmed by experiments. I will also give an overview of other recent results for nuclei in the A = 6 to 16 range with 2- and 3-body interactions. Supported in part by US DOE Grant DE-FC02-09ER41582.
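The "large sparse matrix eigenvalue problem" at the heart of the NCFC approach can be shown in miniature with plain power iteration, which finds the dominant eigenpair of a matrix. This is only the underlying idea: production codes such as MFDn use Lanczos-type methods targeting the lowest eigenvalues on distributed sparse storage, and the 2x2 matrix below is a toy stand-in.

```python
def power_iteration(A, iters=500):
    """Dominant eigenvalue and eigenvector of a small dense matrix by
    repeated multiplication and renormalization (infinity norm)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric toy matrix; eigenvalues 3 and 1
lam, v = power_iteration(A)
```

The cost per iteration is one matrix-vector product, which is exactly the operation that must be distributed when the matrix has billions of rows.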
Partially Strong Transparency Conditions and a Singular Localization Method In Geometric Optics
NASA Astrophysics Data System (ADS)
Lu, Yong; Zhang, Zhifei
2016-03-01
This paper focuses on the stability analysis of WKB approximate solutions in geometric optics in the absence of strong transparency conditions, in the terminology of Joly, Métivier and Rauch. We introduce a compatibility condition and a singular localization method which allow us to prove the stability of WKB solutions over long time intervals. This compatibility condition is weaker than the strong transparency condition. The singular localization method allows us to carry out delicate analysis near resonances. As an application, we show the long-time approximation of Klein-Gordon equations by Schrödinger equations in the non-relativistic limit regime.
Vibrational excitations of arsine in the framework of a local unitary group approach
NASA Astrophysics Data System (ADS)
Sánchez-Castellanos, M.; Álvarez-Bajo, O.; Amezcua-Eccius, C. A.; Lemus, R.
2006-11-01
A description of vibrational excitations of pyramidal molecules in terms of the unitary group approach U(ν + 1) is presented. Based on a recent reformulation of this algebraic method, the Hamiltonian is first expressed in the space of coordinates and momenta and thereafter translated into an algebraic realization in terms of the generators of the dynamical group Us(4) × Ub(4), where s and b stand for the stretching and bending degrees of freedom, respectively. Fermi and number interactions are considered in the stretching-bending contribution of the Hamiltonian. This new approach provides in a natural form the connection between the spectroscopic parameters and force constants. An analysis of the vibrational excitations of arsine is presented.
Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.; Martin, P.
2008-12-01
Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, these methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but convergence is very slow as the solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
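The two-stage idea, a global search proposing a near-optimum that a local gradient method then polishes, can be sketched on a toy objective. Here a crude seeded random search stands in for the genetic algorithm, and the quadratic cost is an illustrative stand-in for the ground-water cost model, not the HDWD formulation.

```python
import random

def objective(x):
    """Illustrative stand-in cost with minimum at x = 3."""
    return (x - 3.0) ** 2

def global_search(lo, hi, samples=800, seed=0):
    """Stage 1: crude global exploration (proxy for the genetic algorithm)."""
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(samples)), key=objective)

def local_refine(x, lr=0.1, steps=200, h=1e-6):
    """Stage 2: gradient descent with a central-difference gradient."""
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x -= lr * grad
    return x

x0 = global_search(-10.0, 10.0)   # near-optimal seed
x_star = local_refine(x0)         # polished optimum
```

The division of labor matches the abstract: the global stage avoids the local-optimum trap, and the fast gradient stage supplies the final convergence that global schemes lack.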
Adaptive non-local means method for speckle reduction in ultrasound images
NASA Astrophysics Data System (ADS)
Ai, Ling; Ding, Mingyue; Zhang, Xuming
2016-03-01
Noise removal is a crucial step in enhancing the quality of ultrasound images; however, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method lies in that the decay parameter does not take a fixed value for the whole image but adapts itself to the variation of the local features in the ultrasound image. In the proposed method, a pre-filtered image is first obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient is computed and utilized to determine the decay parameter adaptively for each image pixel. The final restored image is produced by the ANLM method using the obtained decay parameters. Simulations on a synthetic image show that the proposed method delivers sufficient speckle reduction while preserving image details very well, and that it outperforms state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on a clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
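The adaptive-decay idea can be sketched in one dimension: NLM averages similar patches with weights exp(-d/h²), and here h is shrunk per sample wherever a pre-filtered signal shows a large gradient, so edges are smoothed less. The window sizes and the h(gradient) mapping are our illustrative choices, not the paper's.

```python
import math

def nlm_1d(sig, h, search=5, patch=1):
    """1-D non-local means with a per-sample decay parameter h[i].
    Borders are handled by clamping indices."""
    n = len(sig)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared patch distance between neighbourhoods of i and j
            d = sum((sig[min(n - 1, max(0, i + k))]
                     - sig[min(n - 1, max(0, j + k))]) ** 2
                    for k in range(-patch, patch + 1))
            w = math.exp(-d / (h[i] ** 2))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

def adaptive_h(pre, base=0.5):
    """Decay parameter per sample: smaller h where the pre-filtered
    signal has a large local gradient (i.e., near edges)."""
    n = len(pre)
    grads = [abs(pre[min(n - 1, i + 1)] - pre[max(0, i - 1)]) for i in range(n)]
    return [base / (1.0 + g) for g in grads]
```

On a flat signal every weight is 1 and the filter returns the mean, i.e. pure smoothing; near an edge the shrunken h makes dissimilar patches nearly weightless.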
A hybrid computing approach to accelerating multiple scattering theory based ab initio methods
NASA Astrophysics Data System (ADS)
Wang, Yang; Stocks, G. Malcolm
2014-03-01
The multiple scattering theory method, also known as the Korringa-Kohn-Rostoker (KKR) method, is considered an elegant approach to ab initio electronic structure calculations for solids. Its convenient access to the one-electron Green function has led to the development of the locally self-consistent multiple scattering (LSMS) method, a linear-scaling ab initio method that allows electronic structure calculations for complex structures requiring tens of thousands of atoms in the unit cell. It is one of the few applications that have demonstrated petascale computing capability. In this presentation, we discuss our recent efforts in developing a hybrid computing approach for accelerating full-potential electronic structure calculations. Specifically, in the framework of our existing LSMS code in FORTRAN 90/95, we exploit the many-core resources of GPGPU accelerators by implementing the compute-intensive functions (for the calculation of multiple scattering matrices and the single-site solutions) in CUDA, and move these computational tasks to the GPGPUs when they are available. We explain in detail our approach to the CUDA programming and the code structure, and show the speed-up of the new hybrid code by comparing its performance on CPU/GPGPU and on CPU only. The work was supported in part by the Center for Defect Physics, a DOE-BES Energy Frontier Research Center.
Ziomkiewicz, Iwona; Sporring, Jon; Pomorski, Thomas Günther; Schulz, Alexander
2015-09-01
Many membrane proteins are not evenly distributed over the plasma membrane, but gathered in domains assumed to have a particular lipid composition. Using single molecule localization microscopy (SMLM), we have immunolocalized a glycosylphosphatidylinositol (GPI)-anchored protein that labels nanodomains in a specialized plant cell type, and compared the suitability of three methods to estimate their size. As conventional measures, the full width at half maximum (FWHM) and the full diameter (FWMin) of domains were used; a boundary detection method for the domain area (DA) was applied in order to take irregular shapes into account. To compare the influence of the chosen measurement methods, we developed a MATLAB program that allows automated analysis of domain sizes from multiple SMLM images and provides the statistics of three key features of domains: FWHM and FWMin along their long and short axes, as well as the DA derived from the molecular density. Domains formed by the GPI-anchored protein approximate elliptical shapes. Direct and indirect immunolabeling resulted in a statistically significant difference in apparent domain size, reflecting the fact that the secondary antibody molecules extend the uncertainty along the nanodomain border. FWMin values along the long and short axes give good estimates of regular, geometrically centred domain shapes, while the DA value matches regular as well as irregular shapes best, as derived from computer-generated, irregular point clusters. PMID:26109552
Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey
NASA Astrophysics Data System (ADS)
Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.
2016-06-01
Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: different ways of extracting local histograms to capture spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms in three different color spaces, namely RGB, HSV and L*a*b*, and with three distance measures, Euclidean, quadratic and histogram intersection, is surveyed, to choose an appropriate method for future research.
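The retrieval pipeline the survey compares can be shown minimally for a single gray channel with the histogram intersection similarity; a "local" variant would simply compute one such histogram per image block. The bin count and toy pixel values are illustrative.

```python
def histogram(pixels, bins=4):
    """Normalized global histogram of intensity values in [0, 256)."""
    h = [0] * bins
    for v in pixels:
        h[v * bins // 256] += 1
    total = float(len(pixels))
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 means identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# query image versus a tiny two-image database
q = histogram([10, 20, 200, 250])
db = [histogram([12, 18, 210, 240]),    # similar intensity distribution
      histogram([100, 110, 120, 130])]  # different distribution
best = max(range(len(db)), key=lambda i: intersection(q, db[i]))
```

Swapping `intersection` for a Euclidean or quadratic-form distance (and RGB for HSV or L*a*b*) reproduces the comparison axes the survey evaluates.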
Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering
NASA Astrophysics Data System (ADS)
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2016-04-01
This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise condition. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed to account for spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, the mean of each cluster is estimated as a coherence-weighted mean to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method performs on par with or better than the non-local interferometric SAR (NL-InSAR) method.
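The three modifications read as: cluster by coherence, estimate moments per cluster, then apply the LMMSE (Wiener) shrinkage x_hat = mu + s/(s + sigma^2) * (y - mu). A minimal numerical sketch follows; the quantile-based clustering and the scalar (rather than patch-covariance) shrinkage are simplified stand-ins, not the paper's exact procedure.

```python
import numpy as np

def coherence_clusters(coh, n_clusters=3):
    # Group pixels by coherence magnitude (quantile binning as a
    # simple stand-in for the paper's adaptive clustering step).
    edges = np.quantile(coh, np.linspace(0, 1, n_clusters + 1))
    return np.clip(np.searchsorted(edges, coh, side="right") - 1,
                   0, n_clusters - 1)

def lmmse_filter(phase, coh, noise_var, n_clusters=3):
    """Per-cluster scalar LMMSE (Wiener) estimate under additive
    Gaussian noise: x_hat = mu + s/(s + sigma^2) * (y - mu), with mu
    a coherence-weighted mean and s the estimated signal variance."""
    labels = coherence_clusters(coh, n_clusters)
    out = np.empty_like(phase)
    for k in range(n_clusters):
        m = labels == k
        mu = np.average(phase[m], weights=coh[m])   # weighted cluster mean
        s = max(phase[m].var() - noise_var, 0.0)    # signal variance
        out[m] = mu + s / (s + noise_var) * (phase[m] - mu)
    return out

rng = np.random.default_rng(1)
coh = rng.uniform(0.2, 1.0, 500)
phase = 1.5 + np.sqrt(0.3) * rng.standard_normal(500)  # noise-dominated
filtered = lmmse_filter(phase, coh, noise_var=0.3)
```

When the data are noise-dominated, the estimated signal variance is near zero and each cluster collapses toward its weighted mean, which is the intended shrinkage behavior.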
FALCON: A method for flexible adaptation of local coordinates of nuclei.
König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H; Christiansen, Ove
2016-02-21
We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaptation of Local COordinates of Nuclei (FALCON), we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and semi-local interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms, and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether, as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin. PMID:26896977
New quantitative approaches for classifying and predicting local-scale habitats in estuaries
NASA Astrophysics Data System (ADS)
Valesini, Fiona J.; Hourston, Mathew; Wildsmith, Michelle D.; Coen, Natasha J.; Potter, Ian C.
2010-03-01
This study has developed quantitative approaches for firstly classifying local-scale nearshore habitats in an estuary and then predicting the habitat of any nearshore site in that system. Both approaches employ measurements for a suite of enduring environmental criteria that are biologically relevant and can be easily derived from readily available maps. While the approaches were developed for south-western Australian estuaries, with a focus here on the Swan and Peel-Harvey, they can easily be tailored to any system. Classification of the habitats in each of the above estuaries was achieved by subjecting a Manhattan distance matrix, constructed from measurements of a suite of enduring criteria recorded at numerous environmentally diverse sites, to hierarchical agglomerative clustering (CLUSTER) and a Similarity Profiles test (SIMPROF). Groups of sites within the resultant dendrogram that were shown by SIMPROF to not contain any significant internal differences, but differ significantly from all other groups in their enduring characteristics, were considered to represent habitat types. The enduring features of the 18 and 17 habitats identified among the 101 and 102 sites in the Swan and Peel-Harvey estuaries, respectively, are presented. The average measurements of the enduring characteristics at each habitat were then used in a novel application of the Linkage Tree (LINKTREE) and SIMPROF routines to produce a "decision tree" for predicting, on the basis of measurements for particular enduring variables, the habitat to which any further site in an estuary is best assigned. In both estuaries, the pattern of relative differences among habitats, as defined by their enduring characteristics, was significantly correlated with that defined by their non-enduring water physico-chemical characteristics recorded seasonally in the field. However, those correlations were substantially higher for the Swan, particularly when salinity was the only water physico-chemical variable
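The classification step above is hierarchical agglomerative clustering on a Manhattan distance matrix. A minimal, self-contained sketch of that idea (complete linkage chosen for concreteness; the actual CLUSTER/SIMPROF routines come from the PRIMER package and involve significance testing not shown here):

```python
import numpy as np

def manhattan(a, b):
    return np.abs(a - b).sum()

def agglomerative(points, n_clusters):
    """Plain bottom-up complete-linkage clustering on Manhattan
    distances; a minimal stand-in for the CLUSTER routine."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: farthest pair across the two clusters
                d = max(manhattan(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

# two well-separated site groups in a two-criterion space
pts = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
print(agglomerative(pts, 2))  # -> [[0, 1], [2, 3]]
```

In the study, the stopping point is not a fixed cluster count but the SIMPROF test, which keeps splitting only while groups show significant internal structure.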
Inhibition screening method of microsomal UGTs using the cocktail approach.
Gradinaru, Julieta; Romand, Stéphanie; Geiser, Laurent; Carrupt, Pierre-Alain; Spaggiari, Dany; Rudaz, Serge
2015-04-25
A rapid method for the simultaneous determination of the in vitro activity of the 10 major human liver UDP-glucuronosyltransferase (UGT) enzymes was developed based on the cocktail approach. Specific substrates were first selected for each UGT: etoposide for UGT1A1, chenodeoxycholic acid for UGT1A3, trifluoperazine for UGT1A4, serotonin for UGT1A6, isoferulic acid for UGT1A9, codeine for UGT2B4, azidothymidine for UGT2B7, levomedetomidine for UGT2B10, 4-hydroxy-3-methoxymethamphetamine for UGT2B15 and testosterone for UGT2B17. Optimal incubation conditions were then investigated, including time-based experiments on cocktail metabolism in pooled human liver microsomes (HLMs). A 45-min incubation period was found to be a favorable compromise for all the substrates in the cocktail. Ultra-high pressure liquid chromatography coupled to an electrospray ionization time-of-flight mass spectrometer was used to separate the 10 substrates and their UGT-specific glucuronides in less than 6 min. The ability of the cocktail to highlight the UGT inhibitory potential of xenobiotics was initially proven by using well-known UGT inhibitors (selective and nonselective) and then by relating some of the screening results obtained by using the cocktail approach with morphine glucuronidation (a substrate highly glucuronidated by UGT2B7). All the results were in agreement with both the literature and the expected effect on morphine glucuronidation. PMID:25684194
Evaluating nursing outcomes: a mixed-methods approach.
Lane-Tillerson, Crystal; Davis, Bertha L; Killion, Cheryl M; Baker, Spencer
2005-12-01
Being overweight is regarded as the most common nutritional disorder of children and adolescents in the United States. The escalating problem of being overweight or being obese in our society indicates the need for treatment strategies that encompass an all-inclusive approach. Moreover, these strategies need to be comprehensively evaluated for their effectiveness. Nurses are in an excellent position to ensure that this occurs. The purpose of this study was to determine whether using a mixed-methods approach was an efficacious way to provide a comprehensive evaluation of the behavior modification benefits of a weight loss/weight management nursing intervention in African-American adolescent girls (13-17 years of age). The overall effectiveness of the intervention was evaluated by analyzing pre- and post-program measures of weight, body mass index, cholesterol, blood pressure, self-esteem, depression, and body image (quantitative data); conducting focus groups with mothers of the participants; and administering open-ended, written questionnaires to the participants (qualitative data). Findings from the quantitative data indicated favorable outcomes in weight, blood pressure, cholesterol, body mass index, self-esteem, and body image, indicating that progress had been made over the course of the program. Furthermore, qualitative data indicated that mothers of the participants observed positive behavioral changes related to eating and exercise patterns and participants demonstrated perception of these changes as well. PMID:16570643
A novel approach for Milne's phase-amplitude method
NASA Astrophysics Data System (ADS)
Simbotin, I.; Shu, D.; Côté, R.
2016-05-01
We have uncovered a linear equation for the envelope function, fully equivalent to Milne's original non-linear equation, and have implemented a highly accurate and efficient numerical method for computing the envelope and the associated phase. Consequently, we obtain a high precision parametrization of the wavefunction within a very economical approach. The key ingredients are: (i) straightforward optimization for smoothness, and (ii) Chebyshev polynomials as the workhorse for solving integro-differential equations. The latter also give a built-in interpolation, and allow for developing numerical tools that are robust, accurate, and convenient. Partial support from the US Army Research Office (Grant No. W911NF-13-1-0213), and from NSF (Grant No. PHY-1415560).
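The "Chebyshev polynomials as the workhorse" ingredient can be illustrated with the standard Chebyshev collocation differentiation matrix (Trefethen's construction); the abstract does not give the envelope equation itself, so here we solve a generic two-point boundary value problem u'' = exp(x), u(-1) = u(1) = 0, whose exact solution is known, to show the spectral accuracy this family of methods delivers.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto
    points x_j = cos(j*pi/N)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal via the negative-sum trick
    return D, x

# solve u'' = exp(x), u(-1) = u(1) = 0, and compare with the exact solution
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]          # strip rows/cols to impose Dirichlet BCs
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(x[1:-1]))
exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
err = np.max(np.abs(u - exact))   # spectrally small already at N = 16
```

With only 17 grid points the error is near machine precision, which is the "economical" behavior the abstract refers to; the same machinery extends to the integro-differential setting.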
Evaluation of videodisc modules: a mixed method approach.
Parkhurst, P. E.; Lovell, K. L.; Sprafka, S. A.; Hodgins, M.
1991-01-01
The purpose of this study was to evaluate the design and implementation of 10 neuropathology interactive videodisc instructional (IVI) modules used by Michigan State University medical students in the College of Osteopathic Medicine and the College of Human Medicine. The evaluation strategy incorporated a mixed method approach using qualitative and quantitative data to examine levels of student acceptance for the modules; ways in which IVI modules accommodate different learner styles; and to what extent the modules facilitate the attainment of higher level learning objectives. Students rated the units highly for learning effectiveness; many students reported group interaction as beneficial; and students expressed a desire for more IVI in the curriculum. The paper concludes with recommendations for future use of interactive videodisc technology in the teaching/learning process. PMID:1807704
In silico local structure approach: a case study on outer membrane proteins.
Martin, Juliette; de Brevern, Alexandre G; Camproux, Anne-Claude
2008-04-01
The detection of Outer Membrane Proteins (OMP) in whole genomes is a question of current interest, and their sequence characteristics have thus been intensively studied. This class of protein displays a common beta-barrel architecture, formed by adjacent antiparallel strands. However, due to the lack of available structures, few structural studies have been made on this class of proteins. Here we propose a novel OMP local structure investigation, based on a structural alphabet approach, i.e., the decomposition of 3D structures using a library of four-residue protein fragments. The optimal decomposition of structures using a hidden Markov model results in a specific structural alphabet of 20 fragments, six of them dedicated to the decomposition of beta-strands. This optimal alphabet, called SA20-OMP, is analyzed in detail, in terms of local structures and transitions between fragments. It highlights a particular and strong organization of beta-strands as series of regular canonical structural fragments. The comparison with alphabets learned on globular structures indicates that the internal organization of OMP structures is more constrained than that of globular structures. The analysis of OMP structures using SA20-OMP reveals some recurrent structural patterns. The preferred location of fragments in the distinct regions of the membrane is investigated. The study of pairwise specificity of fragments reveals that some contacts between structural fragments in beta-sheets are clearly favored whereas others are avoided. This contact specificity is stronger in OMP than in globular structures. Moreover, SA20-OMP also captures sequential information. This can be integrated in a scoring function for structural model ranking with very promising results. PMID:17932925
Xiao, Jinbiao; Sun, Xiaohan
2012-09-10
A vector mode solver for bending waveguides by using a modified finite-difference (FD) method is developed in a local cylindrical coordinate system, where the perfectly matched layer absorbing boundary conditions are incorporated. Utilizing Taylor series expansion technique and continuity condition of the longitudinal field components, a standard matrix eigenvalue equation without the averaged index approximation approach for dealing with the discrete points neighboring the dielectric interfaces is obtained. Complex effective indexes and field distributions of leaky modes for a typical rib bending waveguide and a silicon wire bend are presented, and solutions accord well with those from the film mode matching method, which shows the validity and utility of the established method. PMID:23037277
NASA Astrophysics Data System (ADS)
Zhang, Hui; Cesnik, Carlos E. S.
2016-04-01
Local interaction simulation approach (LISA) is a highly parallelizable numerical scheme for guided wave simulation in structural health monitoring (SHM). This paper addresses the issue of simulating wave propagation in an unbounded domain through the implementation of non-reflective boundaries (NRB) in LISA. In this study, two different categories of NRB, i.e., the non-reflective boundary condition (NRBC) and the absorbing boundary layer (ABL), have been investigated in the parallelized LISA scheme. For the implementation of NRBC, a set of general LISA equations considering the effect of boundary stress is obtained first. As a simple example, the Lysmer and Kuhlemeyer (L-K) model is applied here to demonstrate the ease of NRBC implementation in LISA. As a representative ABL implementation, a LISA scheme incorporating absorbing layers with increasing damping (ALID) is also proposed, based on elasto-dynamic equations that include a damping term. Finally, an effective hybrid model combining the L-K and ALID methods in LISA is developed, and guidelines for implementing the hybrid model are presented. Case studies on a three-dimensional plate model compare the performance of the hybrid method to that of the L-K and ALID methods acting independently. The simulation results demonstrate that the best absorbing efficiency is achieved with the hybrid method.
A Multiscale Constraints Method Localization of 3D Facial Feature Points
Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin
2015-01-01
It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244
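The first stage above computes a relative angle histogram per vertex. The paper's exact definition is not reproduced in the abstract, so the sketch below is one plausible reading (all names and parameters are assumptions): for a vertex, histogram the angles between a reference axis (vertex to centroid) and the vectors to its k nearest neighbors.

```python
import numpy as np

def relative_angle_histogram(points, idx, k=8, bins=12):
    """Histogram of angles between the vector from vertex `idx` to the
    cloud centroid and the vectors to its k nearest neighbors (a
    hypothetical 'relative angle' descriptor, normalized to sum to 1)."""
    v = points[idx]
    d = np.linalg.norm(points - v, axis=1)
    nbrs = np.argsort(d)[1:k + 1]          # skip the vertex itself
    ref = np.mean(points, axis=0) - v      # vertex-to-centroid axis
    ref /= np.linalg.norm(ref)
    angles = []
    for n in nbrs:
        w = points[n] - v
        w /= np.linalg.norm(w)
        angles.append(np.arccos(np.clip(ref @ w, -1.0, 1.0)))
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / hist.sum()

rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 3))        # stand-in for mesh vertices
h = relative_angle_histogram(pts, idx=0)
```

Vertices whose histograms are similar would then be grouped by the clustering step, before the multiscale integral features refine the final locations.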
A Micro-delivery Approach for Studying Microvascular Responses to Localized Oxygen Delivery
Ghonaim, Nour W.; Lau, Leo W. M.; Goldman, Daniel; Ellis, Christopher G.; Yang, Jun
2011-01-01
In vivo video microscopy has been used to study blood flow regulation as a function of varying oxygen concentration in microcirculatory networks. However, previous studies have measured the collective response of stimulating large areas of the microvascular network at the tissue surface. Objective: We aim to limit the area being stimulated by controlling oxygen availability to highly localized regions of the microvascular bed within intact muscle. Design and Method: Gas of varying O2 levels was delivered to specific locations on the surface of the Extensor Digitorum Longus muscle of rat through a set of micro-outlets (100 μm diameter) patterned in ultrathin glass using state-of-the-art microfabrication techniques. O2 levels were oscillated and digitized video sequences were processed for changes in capillary hemodynamics and erythrocyte O2 saturation. Results and Conclusions: Oxygen saturations in capillaries positioned directly above the micro-outlets were closely associated with the controlled local O2 oscillations. Radial diffusion from the micro-outlet is limited to ~75 μm from the center, as predicted by computational modelling and as measured in vivo. These results delineate a key step in the design of a novel micro-delivery device for controlled oxygen delivery to the microvasculature to understand fundamental mechanisms of microvascular regulation of O2 supply. PMID:21914035
A Public Policy Approach to Local Models of HIV/AIDS Control in Brazil
de Assis, Andreia; Costa-Couto, Maria-Helena; Thoenig, Jean-Claude; Fleury, Sonia; de Camargo, Kenneth; Larouzé, Bernard
2009-01-01
Objectives. We investigated involvement and cooperation patterns of local Brazilian AIDS program actors and the consequences of these patterns for program implementation and sustainability. Methods. We performed a public policy analysis (documentary analysis, direct observation, semistructured interviews of health service and nongovernmental organization [NGO] actors) in 5 towns in 2 states, São Paulo and Pará. Results. Patterns suggested 3 models. In model 1, local government, NGOs, and primary health care services were involved in AIDS programs with satisfactory response to new epidemiological trends but a risk that HIV/AIDS would become low priority. In model 2, mainly because of NGO activism, HIV/AIDS remained an exceptional issue, with limited responses to new epidemiological trends and program sustainability undermined by political instability. In model 3, involvement of public agencies and NGOs was limited, with inadequate response to epidemiological trends and poor mobilization threatening program sustainability. Conclusions. Within a common national AIDS policy framework, the degree of involvement and cooperation between public and NGO actors deeply impacts population coverage and program sustainability. Specific processes are required to maintain actor mobilization without isolating AIDS programs. PMID:19372523
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong
2014-01-01
For building a new iris template, this paper proposes a strategy to fuse different portions of iris based on machine learning method to evaluate local quality of iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multitracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and corresponding weighted coefficients of different tracks. Finally, all tracks' information is fused according to the weights of different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that partial iris image cannot completely replace the entire iris image for iris recognition system in several ways. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of iris recognition system. PMID:24693243
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaptation. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
Analysing clinical reasoning characteristics using a combined methods approach
2013-01-01
Background: Despite a major research focus on clinical reasoning over the last several decades, a method of evaluating the clinical reasoning process that is both objective and comprehensive is yet to be developed. The aim of this study was to test whether a dual approach, using two measures of clinical reasoning, the Clinical Reasoning Problem (CRP) and the Script Concordance Test (SCT), provides a valid, reliable and targeted analysis of clinical reasoning characteristics to facilitate the development of diagnostic thinking in medical students. Methods: Three groups of participants, general practitioners, and third and fourth (final) year medical students completed 20 on-line clinical scenarios: 10 in CRP and 10 in SCT format. Scores for each format were analysed for reliability, correlation between the two formats and differences between subject groups. Results: Cronbach’s alpha coefficient ranged from 0.36 for SCT 1 to 0.61 for CRP 2. Statistically significant correlations were found between the mean f-score of the CRP 2 and the total SCT 2 score (0.69), and between the mean f-score for all CRPs and all mean SCT scores (0.57 and 0.47 respectively). The pass/fail rates of the SCT and the CRP f-score are in keeping with the findings from the correlation analysis (i.e. 31% of students (11/35) passed both, 26% failed both, and 43% (15/35) passed one but not the other test), and suggest that the two formats measure overlapping but not identical characteristics. One-way ANOVA showed consistent differences in scores between levels of expertise, with these differences being significant or approaching significance for the CRPs. Conclusion: SCTs and CRPs are overlapping and complementary measures of clinical reasoning. Whilst SCTs are more efficient to administer, the use of both measures provides a more comprehensive appraisal of clinical skills than either single measure alone, and as such could potentially facilitate the customised teaching of clinical reasoning for
An approach to the damping of local modes of oscillations resulting from large hydraulic transients
Dobrijevic, D.M.; Jankovic, M.V.
1999-09-01
A new method for damping local modes of oscillations under large disturbances is presented in this paper. A digital governor controller is used. The controller operates in real time to improve the generating unit transients through the guide vane position and the runner blade position. The developed digital governor controller, whose control signals are adjusted using on-line measurements, offers better damping of the generator oscillations under large disturbances than the conventional controller. Digital simulations of a hydroelectric power plant equipped with a low-head Kaplan turbine are performed, and comparisons between the digital governor control and the conventional governor control are presented. Simulation results show that the new controller offers better performance than the conventional controller when the system is subjected to large disturbances.
Ion microscopy: a new approach for subcellular localization of labelled molecules
Hindie, E.; Hallegot, P.; Chabala, J.M.; Thorne, N.A.; Coulomb, B.; Levi-Setti, R.; Galle, P.
1988-12-01
Secondary ion mass spectroscopy (SIMS) was used to obtain images representing the intracellular distribution of molecules labelled with carbon 14. Deoxyadenosine labelled with carbon 14 was added to a cultured human fibroblast cell medium, and the intracellular distribution of this molecule was studied using three different SIMS instruments: the CAMECA IMS 3F and SMI 300 ion microscopes and the UC-HRL scanning ion microprobe. Carbon 14 distribution images obtained by this method show that deoxyadenosine U-C14 is present in the cytoplasm as well as the nucleus, with a higher concentration in the nucleoli. Our study clearly demonstrates that ion microscopy is well suited for carbon 14 detection and localization at the subcellular level, permitting a wide variety of microanalytical tracer experiments.
Ultrastructural localization of intracellular calcium stores by a new cytochemical method.
Poenie, M; Epel, D
1987-09-01
We describe a new cytochemical method for ultrastructural localization of intracellular calcium stores. This method uses fluoride ions for in situ precipitation of intracellular calcium during fixation. Comparisons made using oxalate, antimonate, or fluoride showed that fluoride was clearly superior for intracellular calcium localization in eggs of the sea urchin Strongylocentrotus purpuratus. Whereas oxalate generally gave no intracellular precipitate and antimonate gave copious but random precipitate, three prominent calcium stores were detected using fluoride: the tubular endoplasmic reticulum, the cortical granules, and large, clear, acidic vesicles of unknown function. The mitochondria of these eggs generally showed no detectable calcium deposits. X-ray spectra confirmed the presence of calcium in the fluoride precipitates, although in some cases magnesium was also detected. Rat skeletal muscle and sea urchin sperm were used to test the reliability of the fluoride method for calcium localization. In rat skeletal muscle, most fluoride precipitate was confined to the sarcoplasmic reticulum. Using sea urchin sperm, which transport calcium into the mitochondria after exposure to egg jelly to induce the acrosome reaction, the expected result was also obtained. Before the acrosome reaction, sperm mitochondria contain no detectable calcium-containing precipitate. Within 4 min after induction of the acrosome reaction, sperm mitochondria displayed many foci of calcium-containing precipitate. The use of fluoride for intracellular calcium localization therefore appears to be a substantial improvement over previous cytochemical methods. PMID:3611737
Localized axial Green's function method for the convection-diffusion equations in arbitrary domains
NASA Astrophysics Data System (ADS)
Lee, Wanho; Kim, Do Wan
2014-10-01
A localized axial Green's function method (LAGM) is proposed for the convection-diffusion equation. The axial Green's function method (AGM) enables us to calculate the numerical solution of a multi-dimensional problem using only one-dimensional Green's functions for the axially split differential operators. The AGM has been developed not only for elliptic boundary value problems but also for steady Stokes flows; this paper, however, is concerned with the localization of the AGM. This localization is needed for practical purposes when computing the axial Green's function, specifically for the convection-diffusion equation on a line segment that we call the local axial line. Although our focus is mainly on convection-dominated cases in arbitrary domains, the method can solve other cases in a unified way. Numerical results show that, despite irregular types of discretization on an arbitrary domain, we can calculate the numerical solutions using the LAGM without loss of accuracy even in cases of large convection. In particular, it is also shown that randomly distributed axial lines can be used in our LAGM and that complicated domains are not a burden.
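The core reduction above is from a multi-dimensional problem to one-dimensional convection-diffusion problems on individual axial lines. As a minimal stand-in for that one-dimensional step (a direct finite-difference solve rather than the paper's Green's function machinery; equation and names chosen for illustration), consider -u'' + c u' = 1 on (0, 1) with u(0) = u(1) = 0:

```python
import numpy as np

def solve_axial_line(c, n=200):
    """Central finite-difference solve of -u'' + c*u' = 1 on (0, 1)
    with u(0) = u(1) = 0, i.e. a 1-D convection-diffusion problem
    posed on a single (local) axial line."""
    h = 1.0 / (n + 1)
    main = np.full(n, 2.0 / h**2)                      # -u'' stencil center
    lower = np.full(n - 1, -1.0 / h**2 - c / (2 * h))  # u_{i-1} coefficient
    upper = np.full(n - 1, -1.0 / h**2 + c / (2 * h))  # u_{i+1} coefficient
    A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
    return np.linalg.solve(A, np.ones(n))

u0 = solve_axial_line(0.0)   # pure diffusion: exact solution x(1-x)/2
uc = solve_axial_line(5.0)   # with convection: profile skewed downstream
```

Plain central differencing like this degrades for strongly convection-dominated cases (large c relative to the mesh), which is exactly the regime where the Green's function treatment of the local axial line pays off.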
Eulerian-Lagrangian localized adjoint methods for reactive transport in groundwater
Ewing, R.E.; Wang, Hong
1996-12-31
In this paper, we present Eulerian-Lagrangian localized adjoint methods (ELLAM) to solve convection-diffusion-reaction equations governing contaminant transport in groundwater flowing through an adsorbing porous medium. These ELLAM schemes can treat various combinations of boundary conditions and conserve mass. Numerical results are presented to demonstrate the strong potential of ELLAM schemes.
Local Mesh Refinement in the Space-Time CE/SE Method
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wu, Yuhui; Wang, Xiao-Yen; Yang, Vigor
2000-01-01
A local mesh refinement procedure for the CE/SE method which does not use an iterative procedure in the treatments of grid-to-grid communications is described. It is shown that a refinement ratio higher than ten can be applied successfully across a single coarse grid/fine grid interface.
Wei, Chao; Luo, Senlin; Ma, Xincheng; Ren, Hao; Zhang, Ji; Pan, Limin
2016-01-01
Topic models and neural networks can discover meaningful low-dimensional latent representations of text corpora; as such, they have become a key technology of document representation. However, such models presume all documents are non-discriminatory, resulting in latent representation dependent upon all other documents and an inability to provide discriminative document representation. To address this problem, we propose a semi-supervised manifold-inspired autoencoder to extract meaningful latent representations of documents, taking the local perspective that the latent representation of nearby documents should be correlative. We first determine the discriminative neighbors set with Euclidean distance in observation spaces. Then, the autoencoder is trained by joint minimization of the Bernoulli cross-entropy error between input and output and the sum of the square error between neighbors of input and output. The results of two widely used corpora show that our method yields at least a 15% improvement in document clustering and a nearly 7% improvement in classification tasks compared to comparative methods. The evidence demonstrates that our method can readily capture more discriminative latent representation of new documents. Moreover, some meaningful combinations of words can be efficiently discovered by activating features that promote the comprehensibility of latent representation. PMID:26784692
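The training objective described above combines a Bernoulli cross-entropy reconstruction term with a penalty tying the representations of discriminative neighbors together. A minimal sketch follows, under stated assumptions: a one-layer sigmoid autoencoder, a simple squared penalty on neighboring reconstructions, and hypothetical names throughout (this is not the authors' implementation).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_loss(X, W, b, W2, b2, neighbors, lam=0.1):
    """Joint objective: Bernoulli cross-entropy between input and
    reconstruction, plus lam times the mean squared difference
    between each document's reconstruction and its neighbor's."""
    H = sigmoid(X @ W + b)       # encoder: latent representation
    Y = sigmoid(H @ W2 + b2)     # decoder: reconstruction
    eps = 1e-9
    ce = -np.mean(X * np.log(Y + eps) + (1 - X) * np.log(1 - Y + eps))
    nb = np.mean((Y - Y[neighbors]) ** 2)   # neighbor-tying penalty
    return ce + lam * nb

rng = np.random.default_rng(0)
X = (rng.uniform(size=(20, 30)) < 0.3).astype(float)  # toy binary docs
W = 0.1 * rng.standard_normal((30, 5)); b = np.zeros(5)
W2 = 0.1 * rng.standard_normal((5, 30)); b2 = np.zeros(30)
neighbors = rng.permutation(20)   # stand-in for Euclidean-nearest docs
loss = joint_loss(X, W, b, W2, b2, neighbors)
```

Training would minimize this scalar with gradient descent over (W, b, W2, b2); in the paper the neighbor set comes from Euclidean distance in the observation space, with labels guiding which neighbors count as discriminative.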
Local moment approach as a quantum impurity solver for the Hubbard model
NASA Astrophysics Data System (ADS)
Barman, Himadri
2016-07-01
The local moment approach (LMA) has presented itself as a powerful semianalytical quantum impurity solver (QIS) in the context of the dynamical mean-field theory (DMFT) for the periodic Anderson model, and it correctly captures the low-energy Kondo scale for the single impurity model, in excellent agreement with Bethe ansatz and numerical renormalization group (NRG) results. However, the most common correlated lattice model, the Hubbard model, has not been explored well within the LMA+DMFT framework beyond the insulating phase. In this work, we complete the filling-interaction phase diagram of the single-band Hubbard model at zero temperature within this framework. Our formalism is generic to any particle filling and can be extended to finite temperature. We contrast our results with another QIS, namely iterated perturbation theory (IPT), and show that the second spectral moment sum rule is satisfied increasingly well in the LMA as the Hubbard interaction strength grows stronger, whereas it severely breaks down after the Mott transition in IPT. For the metallic case, the Fermi liquid (FL) scaling agreement with the NRG spectral density supports the fact that the FL scale emerges from the inherent Kondo physics of the impurity model. We also show that, in the metallic phase, the FL scaling of the spectral density leads to universality which extends to an infinite frequency range at infinite correlation strength (strong coupling). At large interaction strength, the off-half-filling spectral density forms a pseudogap near the Fermi level, and a filling-controlled Mott transition occurs as one approaches half-filling. As a response property, we finally study the zero-temperature optical conductivity and find universal features such as an absorption peak position governed by the FL scale and a doping-independent crossing point, often dubbed the isosbestic point in experiments.
Biodiversity Monitoring at the Tonle Sap Lake of Cambodia: A Comparative Assessment of Local Methods
NASA Astrophysics Data System (ADS)
Seak, Sophat; Schmidt-Vogt, Dietrich; Thapa, Gopal B.
2012-10-01
This paper assesses local biodiversity monitoring methods practiced in the Tonle Sap Lake of Cambodia. For the assessment we used the following criteria: methodological rigor, perceived cost, ease of use (user friendliness), compatibility with existing activities, and effectiveness of intervention. Constraints and opportunities for execution of the methods were also considered. Information was collected by use of: (1) key informant interview, (2) focus group discussion, and (3) researcher's observation. The monitoring methods for fish, birds, reptiles, mammals and vegetation practiced in the research area have their unique characteristics of generating data on biodiversity and biological resources. Most of the methods, however, serve the purpose of monitoring biological resources rather than biodiversity. There is potential that the information gained through local monitoring methods can provide input for long-term management and strategic planning. In order to realize this potential, the local monitoring methods should be better integrated with each other, adjusted to existing norms and regulations, and institutionalized within community-based organization structures.
Intraoral approach for reduction malarplasty: a simple method.
Lee, Jin-Gew; Park, Young-Wook
2003-01-01
Young Korean women with prominent zygoma may experience stress in daily life because the Oriental physiognomy often associates prominent zygoma with bad luck. Moreover, prominent zygoma in a wide Oriental face has the effect of making a person appear older and stubborn. Zygomatic reduction is often necessary to relieve stress from self-consciousness about facial appearance and to obtain younger and softer features. As such, most zygomatic procedures are cosmetic; therefore, an entirely intraoral approach with no skin incision is desirable. The current operative method of zygomatic reduction consists of two steps. The zygomatic body and arch are exposed through a mucoperiosteal incision from the maxillary canine to the first molar area. The first step is to grind and file the zygomatic body. The second step is made on the zygomatic arch. Using an oscillating saw, a partial-thickness osteotomy is made just posterior to the orbital rim, and a full-thickness osteotomy is made just anterior to the articular tubercle of the zygomatic arch. Light pressure on the posterior part of the arch produces a greenstick fracture of the anterior osteotomy site and a complete fracture of the posterior osteotomy site, resulting in inward repositioning of the zygomatic arch. This method of zygomatic reduction is simple, easy, effective, and leaves no conspicuous scars on the face. PMID:12496618
Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning
2012-01-01
In light of the problems of low recognition efficiency, high false alarm rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, the original monitoring signals are processed by wavelet transform to extract single-mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed, and the characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate the initial recognition results into final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced to determine the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method can effectively improve the accuracy of leak point localization and reduce the undetected rate as well as the false alarm rate. PMID:22368464
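As a hedged sketch of the final localization step only (the wavelet and SVM stages are omitted, and the paper's exact weighting scheme is not reproduced), each pair of sensors along a 1-D pipeline yields a time-difference-of-arrival position estimate, and the bracketing estimates are averaged:

```python
def locate_leak(sensors, arrivals, v):
    """Average the pairwise TDOA position estimates along a 1-D pipeline.
    Sensors are assumed sorted by position. For a leak at x between
    sensors xi < x < xj, with acoustic speed v:
        t_i - t_j = (2*x - xi - xj) / v
    so each bracketing pair yields x = (xi + xj)/2 + v*(t_i - t_j)/2."""
    estimates = []
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            xi, xj = sensors[i], sensors[j]
            x = 0.5 * (xi + xj) + 0.5 * v * (arrivals[i] - arrivals[j])
            # keep only estimates strictly inside the pair's span;
            # otherwise the leak is not between these two sensors
            margin = 1e-6 * (xj - xi)
            if xi + margin < x < xj - margin:
                estimates.append(x)
    return sum(estimates) / len(estimates)
```

A simple average is used here; the paper's weighted average is a refinement of the same idea.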
A local approach to reduce industrial uranium wound contamination in rats.
Houpert, P; Chazel, V; Paquet, F
2004-02-01
The aim of this work is to develop a new approach to partially decontaminate wounds after industrial uranium contamination, during the interval of time between contamination and transfer of the patient to the infirmary. A wound dressing and a paste mixed or not with uranium-chelating ligands, ethane-1-hydroxy-1,1-bisphosphonate (EHBP) and carballylic amido bis phosphonic acid (CAPBP), were tested in vitro on muscles and in vivo on rats after deposit of uranium oxide compounds. The dressing and the paste, composed of carboxymethylcellulose-based hydrocolloids known to be highly absorbent, were applied on simulated wounds a few minutes after the contamination. The incorporation of chelating ligands did not improve the efficacy of the dressing or paste, and the best results were obtained with the dressing. In vivo, after 1 h of contact with the wound, the dressing absorbed about 30% and 60% of a UO4 compound deposited intra- and intermuscularly, respectively. After intramuscular deposit, the efficacy of the dressing was not reduced if the contact time decreased from 1 h to 15 min. Therefore, this wound dressing could be a practical option to treat uranium-contaminated wounds, but its efficacy depends on the localization of the uranium deposit. PMID:15052287
Baryon states with open beauty in the extended local hidden gauge approach
NASA Astrophysics Data System (ADS)
Liang, W. H.; Xiao, C. W.; Oset, E.
2014-03-01
In this paper, we examine the interaction of B̄N, B̄Δ, B̄*N, and B̄*Δ states, together with their coupled channels, by using a mapping from the light meson sector. The assumption that the heavy quarks act as spectators at the quark level automatically leads us to the results of the heavy quark spin symmetry for pion exchange and reproduces the results of the Weinberg-Tomozawa term, coming from light vector exchanges in the extended local hidden gauge approach. With this dynamics we look for states dynamically generated from the interaction and find two states with nearly zero width, which we associate with the Λb(5912) and Λb(5920) states. The states couple mostly to B̄*N, which are degenerate with the Weinberg-Tomozawa interaction. The difference of masses between these two states, with J = 1/2 and 3/2, respectively, is due to pion exchange connecting these states to intermediate B̄N states. In addition to these two Λb states, we find three more states with I = 0, one of them nearly degenerate in two states of J = 1/2, 3/2. Furthermore, we also find eight more states in I = 1, two of them degenerate in J = 1/2, 3/2, and another two degenerate in J = 1/2, 3/2, 5/2.
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.; Catalyurek, Umit V.; Kurc, Tahsin; Sadayappan, Ponnuswamy; Saltz, Joel H.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
Acoustic flight tests of rotorcraft noise-abatement approaches using local differential GPS guidance
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.; Hindson, William S.; Mueller, Arnold W.
1995-01-01
This paper presents the test design, instrumentation set-up, data acquisition, and the results of an acoustic flight experiment to study how noise due to blade-vortex interaction (BVI) may be alleviated. The flight experiment was conducted using the NASA/Army Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) research helicopter. A Local Differential Global Positioning System (LDGPS) was used for precision navigation and cockpit display guidance. A laser-based rotor state measurement system on board the aircraft was used to measure the main rotor tip-path-plane angle-of-attack. Tests were performed at Crows Landing Airfield in northern California with an array of microphones similar to that used in the standard ICAO/FAA noise certification test. The methodology used in the design of a RASCAL-specific, multi-segment, decelerating approach profile for BVI noise abatement is described, and the flight data pertaining to the flight technical errors and the acoustic data for assessing the noise reduction effectiveness are reported.
Hidden beauty baryon states in the local hidden gauge approach with heavy quark spin symmetry
NASA Astrophysics Data System (ADS)
Xiao, C. W.; Oset, E.
2013-11-01
Using a coupled-channel unitary approach, combining the heavy quark spin symmetry and the dynamics of the local hidden gauge, we investigate the meson-baryon interaction with hidden beauty and obtain several new states of N around 11 GeV. We consider the basis of states η_b N, ϒN, BΛ_b, BΣ_b, B*Λ_b, B*Σ_b, B*Σ*_b and find four basic bound states which correspond to BΣ_b, BΣ*_b, B*Σ_b and B*Σ*_b, decaying mostly into η_b N and ϒN and with a binding energy of about 50-130 MeV with respect to the thresholds of the corresponding channel. All of them have isospin I = 1/2, and we find no bound states or resonances in I = 3/2. The BΣ_b state appears in J = 1/2, the BΣ*_b in J = 3/2, the B*Σ_b appears nearly degenerate in J = 1/2, 3/2 and the B*Σ*_b appears nearly degenerate in J = 1/2, 3/2, 5/2. These states have widths from 2-110 MeV, with conservative estimates of the uncertainties, except for the one with J = 5/2, which has zero width since it cannot decay into any of the states of the chosen basis. We make generous estimates of the uncertainties and find that within very large margins these states remain bound.
Waves on Thin Plates: A New (Energy Based) Method on Localization
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut
2016-04-01
Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method applicable to thin plates which is based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass/plexiglass plate with dimensions of 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit of a steel, glass or polyamide ball (of various sizes) on the plate from a height of 2-3 cm, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new technique, which is very versatile, works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. It is possible to obtain decent resolution (3 cm mean error) using very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by using imaginary sources outside the plate boundaries. The proposed method can easily be extended for applications in three-dimensional environments, to monitor industrial activities (e.g. borehole drilling/production activities) or natural brittle systems (e.g. earthquakes, volcanoes, avalanches).
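The idea of inverted source amplitude comparison can be sketched as follows, assuming an idealized power-law energy decay E ∝ r^-n with no reflections (an assumption for illustration, not the paper's full Lamb-wave model): back-project each sensor's measured energy to an implied source amplitude and pick the grid point where the implied amplitudes agree best.

```python
import math

def locate_source(sensor_xy, energies, nx=81, ny=41, lx=0.8, ly=0.4, n=2.0):
    """Grid search over an lx-by-ly plate: at each candidate point,
    back-project each sensor's energy to an implied source amplitude
    a_i = E_i * r_i**n; at the true source these a_i should all agree,
    so pick the point that minimizes their normalized spread."""
    best, best_xy = float("inf"), None
    for ix in range(nx):
        for iy in range(ny):
            x, y = ix * lx / (nx - 1), iy * ly / (ny - 1)
            a = [E * (math.hypot(x - sx, y - sy) ** n)
                 for (sx, sy), E in zip(sensor_xy, energies)]
            m = sum(a) / len(a)
            spread = sum((v - m) ** 2 for v in a) / (m * m)
            if spread < best:
                best, best_xy = spread, (x, y)
    return best_xy
```

The plate dimensions match the 80 cm x 40 cm setup; the decay exponent n and grid resolution are illustrative choices.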
[Methodical approaches to usage of complex anthropometric methods in clinical practice].
Bukavneva, N S; Pozdniakov, A L; Nikitiuk, D B
2007-01-01
A new methodical approach to complex anthropometric study in clinical practice has been proposed for evaluation of nutritional state, diagnostics, and effectiveness of dietotherapy in patients with alimentary-dependent pathology. The technique of measuring the body's volumetric dimensions, measuring adipose folds by means of a caliper, and measuring extremity diameters has been described, which allows more precise data to be obtained during patient examinations. Formulas which allow calculation of the amounts of bone, muscular and adipose mass have been provided. PMID:18219935
ERIC Educational Resources Information Center
Araboglou, Argy
1993-01-01
Asserts that students have little knowledge about the operation of local government. Discusses a three-day interdisciplinary lesson about water management and local government for the elementary grades. Includes descriptions of laboratory exercises, homework assignments, and class discussions. (CFR)
Distortion Correction in EPI Using an Extended PSF Method with a Reversed Phase Gradient Approach
In, Myung-Ho; Posnansky, Oleg; Beall, Erik B.; Lowe, Mark J.; Speck, Oliver
2015-01-01
In echo-planar imaging (EPI), such as commonly used for functional MRI (fMRI) and diffusion-tensor imaging (DTI), compressed distortion is a more difficult challenge than local stretching as spatial information can be lost in strongly compressed areas. In addition, the effects are more severe at ultra-high field (UHF) such as 7T due to increased field inhomogeneity. To resolve this problem, two EPIs with opposite phase-encoding (PE) polarity were acquired and combined after distortion correction. For distortion correction, a point spread function (PSF) mapping method was chosen due to its high correction accuracy and extended to perform distortion correction of both EPIs with opposite PE polarity thus reducing the PSF reference scan time. Because the amount of spatial information differs between the opposite PE datasets, the method was further extended to incorporate a weighted combination of the two distortion-corrected images to maximize the spatial information content of a final corrected image. The correction accuracy of the proposed method was evaluated in distortion-corrected data using both forward and reverse phase-encoded PSF reference data and compared with the reversed gradient approaches suggested previously. Further we demonstrate that the extended PSF method with an improved weighted combination can recover local distortions and spatial information loss and be applied successfully not only to spin-echo EPI, but also to gradient-echo EPIs acquired with both PE directions to perform geometrically accurate image reconstruction. PMID:25707006
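One plausible way to realize such a weighted combination of the two distortion-corrected images (a sketch under assumptions, not the paper's exact weighting) is to weight each voxel by the local Jacobian of its polarity's distortion field: a small Jacobian marks compression, and hence loss of spatial information, so the opposite-polarity image should dominate there.

```python
def combine_pe_pair(img_fwd, img_rev, jac_fwd, jac_rev, eps=1e-6):
    """Voxel-wise weighted combination of two distortion-corrected EPIs
    acquired with opposite phase-encoding polarity. Weights are
    proportional to each polarity's local Jacobian, so the polarity
    that compressed the signal (small Jacobian) contributes less."""
    out = []
    for f, r, jf, jr in zip(img_fwd, img_rev, jac_fwd, jac_rev):
        wf = jf / (jf + jr + eps)  # eps guards against a zero denominator
        out.append(wf * f + (1.0 - wf) * r)
    return out
```

The function names and the flat-list voxel representation are hypothetical simplifications; real EPI data would be 3-D arrays.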
ESTELA: a method for evaluating the source and travel time of the wave energy reaching a local area
NASA Astrophysics Data System (ADS)
Pérez, Jorge; Méndez, Fernando J.; Menéndez, Melisa; Losada, Inigo J.
2014-08-01
The description of wave climate at a local scale is of paramount importance for offshore and coastal engineering applications. Conditions influencing wave characteristics at a specific location cannot, however, be fully understood by studying only local information. It is necessary to take into account the dynamics of the ocean surface over a large `upstream' wave generation area. The goal of this work is to provide a methodology to easily characterize the area of influence of any particular ocean location worldwide. Moreover, the developed method is able to characterize the wave energy and travel time in that area. The method is based on a global scale analysis using both geographically and physically based criteria. The geographic criteria rely on the assumption that deep water waves travel along great circle paths. This limits the area of influence by neglecting energy that cannot reach a target point, as its path is blocked by land. The individual spectral partitions from a global wave reanalysis are used to reconstruct the spectral information and apply the physically based criteria. The criteria are based on the selection of the fraction of energy that travels towards the target point for each analysed grid point. The method has been tested on several locations worldwide. Results provide maps that inform about the relative importance of different oceanic areas to the local wave climate at any target point. This information cannot be inferred from local parameters and agrees with information from other approaches. The methodology may be useful in a number of applications, such as statistical downscaling, storm tracking and grid definition in numerical modelling.
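As an illustrative sketch of the geographic ingredient described above (not the ESTELA code itself), the great-circle distance from a generation area to the target point, combined with the deep-water group speed c_g = gT/(4π) for swell of period T, gives the travel time of the arriving wave energy:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Haversine great-circle distance in km; deep-water wave energy is
    assumed to travel along such great-circle paths."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2.0 * R * math.asin(math.sqrt(a))

def travel_time_days(distance_km, period_s, g=9.81):
    """Deep-water swell energy travels at the group speed c_g = g*T/(4*pi)."""
    cg = g * period_s / (4.0 * math.pi)          # m/s
    return distance_km * 1000.0 / cg / 86400.0   # days
```

For example, 12 s swell crossing a quarter of the globe (about 10,000 km) arrives after roughly 12 days, consistent with the order of magnitude of trans-oceanic swell propagation.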
Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations
Jo, Y.; Yun, S.; Cho, N. Z.
2013-07-01
In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results. (authors)
Pathak, Anuradha; Bajwa, Navroop Kaur; Kalaskar, Ritesh
2015-01-01
ABSTRACT This paper reports a case of pediatric localized gingival recession (LGR) in the mandibular anterior region which was treated using a new innovative surgical approach, i.e. a combination of frenectomy and vestibular extension. These interceptive surgeries not only gained sufficient width of attached gingiva but also lowered the attachment of the labial frenum. How to cite this article: Jingarwar M, Pathak A, Bajwa NK, Kalaskar R. Vestibular Extension along with Frenectomy in Management of Localized Gingival Recession in Pediatric Patient: A New Innovative Surgical Approach. Int J Clin Pediatr Dent 2015;8(3):224-226. PMID:26604542
Approaching complexity by stochastic methods: From biological systems to turbulence
NASA Astrophysics Data System (ADS)
Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2011-09-01
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
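The reconstruction step (i) can be illustrated with a short sketch: simulate an Ornstein-Uhlenbeck process (a Langevin equation with linear drift and additive noise) and recover its drift and diffusion functions from the first two conditional moments of the increments, i.e. the first two Kramers-Moyal coefficients. The process parameters and bin count below are arbitrary illustrative choices, not taken from the review.

```python
import math
import random

def simulate_ou(n, dt, theta=1.0, sigma=0.5, seed=1):
    """Euler-Maruyama simulation of the Langevin equation
    dx = -theta*x dt + sigma dW (an Ornstein-Uhlenbeck process)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def kramers_moyal(xs, dt, nbins=20):
    """Estimate drift D1(x) = <dx|x>/dt and diffusion D2(x) = <dx^2|x>/(2 dt)
    from binned conditional moments of the increments."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / nbins
    s1, s2, cnt = [0.0] * nbins, [0.0] * nbins, [0] * nbins
    for x0, x1 in zip(xs, xs[1:]):
        b = min(int((x0 - lo) / width), nbins - 1)
        dx = x1 - x0
        s1[b] += dx
        s2[b] += dx * dx
        cnt[b] += 1
    centers = [lo + (b + 0.5) * width for b in range(nbins)]
    d1 = [s1[b] / cnt[b] / dt if cnt[b] else None for b in range(nbins)]
    d2 = [s2[b] / cnt[b] / (2 * dt) if cnt[b] else None for b in range(nbins)]
    return centers, d1, d2
```

For the simulated process, the recovered drift should be close to -theta*x and the diffusion close to sigma^2/2 in well-populated bins, which is the consistency check used when reconstructing Fokker-Planck equations from data.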
NASA Astrophysics Data System (ADS)
Xu, Ling; Cheng, Xuan; Dai, Chao-Qing
2015-12-01
Although the mapping method based on the Riccati equation was proposed to obtain variable separation solutions many years ago, two important problems have not been studied: i) the equivalence of variable separation solutions obtained by means of the mapping method based on the Riccati equation with the radical sign combined ansatz; and ii) the lack of physical meaning of some localized structures constructed from variable separation solutions. In this paper, we re-study the (2+1)-dimensional Boiti-Leon-Pempinelli equation via the mapping method based on the Riccati equation and prove that nine types of variable separation solutions are actually equivalent to each other. Moreover, we also re-study localized structures constructed from variable separation solutions. Results indicate that some localized structures reported in the literature lack physical relevance due to divergent and unphysical behavior of the initial field. Therefore, when constructing localized structures for the potential field, the initial field must be chosen carefully to avoid unphysical or even divergent structures.
NASA Astrophysics Data System (ADS)
Hallez, Hans; Vanrumste, Bart; Van Hese, Peter; D'Asseler, Yves; Lemahieu, Ignace; Van de Walle, Rik
2005-08-01
Many implementations of electroencephalogram (EEG) dipole source localization neglect the anisotropic conductivities inherent to brain tissues, such as skull and white matter anisotropy. An examination is made of the dipole localization errors in EEG source analysis due to not incorporating the anisotropic conductivity properties of the skull and white matter. First, simulations were performed in a 5-shell spherical head model using the analytical formula. Test dipoles were placed in three orthogonal planes in the spherical head model. Neglecting the skull anisotropy results in a dipole localization error of, on average, 13.73 mm with a maximum of 24.51 mm. For white matter anisotropy these values are 11.21 mm and 26.3 mm, respectively. Next, a finite difference method (FDM), presented by Saleheen and Kwong (1997 IEEE Trans. Biomed. Eng. 44 800-9), is used to incorporate the anisotropy of the skull and white matter. The FDM method has been validated for EEG dipole source localization in head models with all compartments isotropic as well as in a head model with white matter anisotropy. In a head model with skull anisotropy, the numerical method could only be validated if the 3D lattice was chosen very fine (grid size <= 2 mm).
The diffuse-scattering method for investigating locally ordered binary solid solutions
Epperson, J. E.; Anderson, J. P.; Chen, H. (Materials Science and Engineering Dept.)
1994-01-01
Diffuse-scattering investigations comprise a series of maturing methods for detailed characterization of the local-order structure and atomic displacements of binary alloy systems. The distribution of coherent diffuse scattering is determined by the local atomic ordering, and analytical techniques are available for extracting the relevant structural information. An extension of such structural investigations, for locally ordered alloys at equilibrium, allows one to obtain pairwise interaction energies. Having experimental pairwise interaction energies for the various coordination shells offers one the potential for more realistic kinetic Ising modeling of alloy systems as they relax toward equilibrium. Although the modeling of atomic displacements in conjunction with more conventional studies of chemical ordering is in its infancy, the method appears to offer considerable promise for revealing additional information about the strain fields in locally ordered and clustered alloys. The diffuse-scattering methods for structural characterization and for the recovery of interaction energies are reviewed, and some preliminary results are used to demonstrate the potential of the kinetic Ising modeling technique to follow the evolution of ordering or phase separation in an alloy system.
ERIC Educational Resources Information Center
Wise, Dena; Sneed, Christopher; Velandia, Margarita; Berry, Ann; Rhea, Alice; Fairhurst, Ann
2013-01-01
The Local Table project compared results from parallel surveys of consumers and restaurateurs regarding local food purchasing and use. Results were also compared with producers' perception of, capacity for and participation in direct marketing through local venues, on-farm outlets, and restaurants. The surveys found consumers' and…
NASA Astrophysics Data System (ADS)
Sato, Takeshi; Nakai, Hiromi
2009-12-01
A new method to calculate the atom-atom dispersion coefficients in a molecule is proposed for the use in density functional theory with dispersion (DFT-D) correction. The method is based on the local response approximation due to Dobson and Dinte [Phys. Rev. Lett. 76, 1780 (1996)], with modified dielectric model recently proposed by Vydrov and van Voorhis [J. Chem. Phys. 130, 104105 (2009)]. The local response model is used to calculate the distributed multipole polarizabilities of atoms in a molecule, from which the dispersion coefficients are obtained by an explicit frequency integral of the Casimir-Polder type. Thus obtained atomic polarizabilities are also used in the damping function for the short-range singularity. Unlike empirical DFT-D methods, the local response dispersion (LRD) method is able to calculate the dispersion energy from the ground-state electron density only. It is applicable to any geometry, free from physical constants such as van der Waals radii or atomic polarizabilities, and computationally very efficient. The LRD method combined with the long-range corrected DFT functional (LC-BOP) is applied to calculations of S22 weakly bound complex set [Phys. Chem. Chem. Phys. 8, 1985 (2006)]. Binding energies obtained by the LC-BOP+LRD agree remarkably well with ab initio references.
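The frequency integral of the Casimir-Polder type mentioned above can be illustrated with a toy single-pole (London) model of the dynamic polarizability at imaginary frequency; the α0 and ω0 values below are hypothetical, and the quadrature result can be checked against the closed-form London formula C6 = (3/2) α0A α0B ωA ωB / (ωA + ωB).

```python
import math

def alpha_iw(alpha0, omega0, w):
    """Single-pole (London) model of the dynamic polarizability at
    imaginary frequency: alpha(i*w) = alpha0 / (1 + (w/omega0)**2)."""
    return alpha0 / (1.0 + (w / omega0) ** 2)

def c6_casimir_polder(a0A, wA, a0B, wB, n=200000, wmax=50.0):
    """Casimir-Polder integral for the dispersion coefficient:
    C6 = (3/pi) * Integral_0^inf alphaA(i*w) * alphaB(i*w) dw,
    evaluated by the trapezoid rule on [0, wmax]."""
    h = wmax / n
    s = 0.5 * (alpha_iw(a0A, wA, 0.0) * alpha_iw(a0B, wB, 0.0)
               + alpha_iw(a0A, wA, wmax) * alpha_iw(a0B, wB, wmax))
    for k in range(1, n):
        w = k * h
        s += alpha_iw(a0A, wA, w) * alpha_iw(a0B, wB, w)
    return 3.0 / math.pi * s * h
```

In the LRD method the polarizabilities come from the ground-state density via the local response model rather than from a fitted single pole; the quadrature structure of the C6 evaluation is the same.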
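The Casimir-Polder frequency integral at the heart of the dispersion coefficients can be illustrated with a one-pole model polarizability evaluated at imaginary frequency. The single-pole form and its parameters are illustrative stand-ins, not the distributed multipole polarizabilities of the actual LRD method:

```python
import math

def alpha_iso(alpha0, omega0, omega):
    """Single-pole model polarizability at imaginary frequency i*omega (a.u.).

    alpha0 is the static polarizability and omega0 a characteristic frequency;
    both are illustrative parameters.
    """
    return alpha0 / (1.0 + (omega / omega0) ** 2)

def c6_casimir_polder(a0_a, w0_a, a0_b, w0_b, n=2000, wmax=100.0):
    """C6 dispersion coefficient between atoms A and B via the
    Casimir-Polder integral, C6 = (3/pi) * int_0^inf alpha_A(iw) alpha_B(iw) dw,
    evaluated with the composite trapezoid rule on [0, wmax].
    """
    h = wmax / n
    s = 0.0
    for k in range(n + 1):
        w = k * h
        f = alpha_iso(a0_a, w0_a, w) * alpha_iso(a0_b, w0_b, w)
        s += f if 0 < k < n else 0.5 * f  # half-weight endpoints
    return 3.0 / math.pi * s * h
```

For two identical single-pole atoms the integral reduces to the London formula C6 = (3/4) alpha0^2 omega0, which the quadrature reproduces to good accuracy.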
Effects of the decellularization method on the local stiffness of acellular lungs.
Melo, Esther; Garreta, Elena; Luque, Tomas; Cortiella, Joaquin; Nichols, Joan; Navajas, Daniel; Farré, Ramon
2014-05-01
Lung bioengineering, a novel approach to obtain organs potentially available for transplantation, is based on decellularizing donor lungs and seeding natural scaffolds with stem cells. Various physicochemical protocols have been used to decellularize lungs, and their performance has been evaluated in terms of efficient decellularization and matrix preservation. No data are available, however, on the effect of different decellularization procedures on the local stiffness of the acellular lung. This information is important since stem cells directly sense the rigidity of the local site they are engrafting to during recellularization, and it has been shown that substrate stiffness modulates cell fate into different phenotypes. The aim of this study was to assess the effects of the decellularization procedure on the inhomogeneous local stiffness of the acellular lung on five different sites: alveolar septa, alveolar junctions, pleura, and vessels' tunica intima and tunica adventitia. Local matrix stiffness was measured by computing Young's modulus with atomic force microscopy after decellularizing the lungs of 36 healthy rats (Sprague-Dawley, male, 250-300 g) with four different protocols with/without perfusion through the lung circulatory system and using two different detergents (sodium dodecyl sulfate [SDS] and 3-[(3-cholamidopropyl) dimethylammonio]-1-propanesulfonate [CHAPS]). The local stiffness of the acellular lung matrix significantly depended on the site within the matrix (p<0.001), ranging from ∼ 15 kPa at the alveolar septum to ∼ 60 kPa at the tunica intima. Acellular lung stiffness (p=0.003) depended significantly, albeit modestly, on the decellularization process. Whereas perfusion did not induce any significant differences in stiffness, the use of CHAPS resulted in a ∼ 35% reduction compared with SDS, the influence of the detergent being more important in the tunica intima. In conclusion, lung matrix stiffness is considerably inhomogeneous, and
2011-01-01
Background Costly efforts have been invested to control and prevent cardiovascular diseases (CVD) and their risk factors but the ideal solutions for low resource settings remain unclear. This paper aims at summarising our approaches to implementing a programme on hypertension management in a rural commune of Vietnam. Methods In a rural commune, a programme has been implemented since 2006 to manage hypertensive people at the commune health station and to deliver health education on CVD risk factors to the entire community. An initial cross-sectional survey was used to screen for hypertensives who might enter the management programme. During 17 months of implementation, other people with hypertension were also followed up and treated. Data were collected from all individual medical records, including demographic factors, behavioural CVD risk factors, blood pressure levels, and number of check-ups. These data were analysed to identify factors relating to adherence to the management programme. Results Both top-down and bottom-up approaches were applied to implement a hypertension management programme. The programme was able to run independently at the commune health station after 17 months. During the implementation phase, 497 people were followed up with an overall regular follow-up of 65.6% and a dropout of 14.3%. Severity of hypertension and effectiveness of treatment were the main factors influencing the decision of people to adhere to the management programme, while being female, having several behavioural CVD risk factors or a history of chronic disease were the predictors for deviating from the programme. Conclusion Our model showed the feasibility, applicability and future potential of a community-based model of comprehensive hypertension care in a low resource context using both top-down and bottom-up approaches to engage all involved partners. This success also highlighted the important roles of both local authorities and a cardiac care network, led by an
Bernhardt, Sylvain; Nicolau, Stéphane A; Agnus, Vincent; Soler, Luc; Doignon, Christophe; Marescaux, Jacques
2016-05-01
The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach. PMID:26925804
NASA Astrophysics Data System (ADS)
Revil, A.; Barnier, G.; Karaoulis, M.; Sava, P.; Jardani, A.; Kulessa, B.
2014-02-01
The seismoelectric method is based on the interpretation of the electrical field associated with the conversion of mechanical to electromagnetic energy during the propagation of seismic waves in heterogeneous porous media. We propose an extension of a poroacoustic model that takes into account fluid flow and the effect of saturation. This model is coupled with an electrokinetic model that accounts for the effect of saturation and agrees with available experimental data in sands and carbonate rocks. We also develop new scaling laws for the permeability, the streaming potential coupling coefficient, and the capillary entry pressure of porous media. The theory is developed for frequencies much below the critical frequency (>10 kHz) at which inertial effects start to dominate in the Navier-Stokes equation. The equations used to compute the propagation of the P waves and the seismoelectric effect under unsaturated conditions are solved with finite elements using triangular meshing. We demonstrate the usefulness of a recently developed technique, seismoelectric beamforming, to localize saturation fronts by focusing seismic waves and looking at the resulting seismoelectric conversions. This method is applied to a cross-hole problem, showing how a saturation front characterized by a drop in electrical conductivity and compressibility is responsible for seismoelectric conversions. These conversions can be used, in turn, to determine the position of the front over time.
Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on
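The core of SCM, choosing nonnegative donor weights that sum to one so as to best reproduce the treated unit's pre-intervention outcomes, can be sketched for three donor units with a brute-force search over the weight simplex. Real applications solve a constrained quadratic program over many donors and predictor variables; the data here are made up:

```python
def synthetic_control_weights(y_pre, donors_pre, step=0.01):
    """Grid-search simplex weights for exactly 3 donor units.

    y_pre: the treated unit's pre-intervention outcome series.
    donors_pre: a tuple/list of 3 donor outcome series of the same length.
    Returns the weight triple (nonnegative, summing to 1) minimizing the
    pre-period sum of squared errors.  A toy stand-in for the constrained
    quadratic program solved in real SCM software.
    """
    best_w, best_sse = None, float("inf")
    n = int(round(1.0 / step))
    for a in range(n + 1):
        for b in range(n + 1 - a):
            w = (a * step, b * step, 1.0 - (a + b) * step)
            sse = sum((y - (w[0] * d0 + w[1] * d1 + w[2] * d2)) ** 2
                      for y, d0, d1, d2 in zip(y_pre, *donors_pre))
            if sse < best_sse:
                best_w, best_sse = w, sse
    return best_w
```

The counterfactual outcome after the intervention is then the same weighted combination of the donors' post-intervention series.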
A robust method for heart sounds localization using lung sounds entropy.
Yadollahi, Azadeh; Moussavi, Zahra M K
2006-03-01
Heart sounds are the main unavoidable interference in lung sound recording and analysis. Hence, several techniques have been developed to reduce or cancel heart sounds (HS) from lung sound records. The first step in most HS cancellation techniques is to detect the segments that include HS. This paper proposes a novel method for HS localization using the entropy of the lung sounds. We investigated both Shannon and Renyi entropies, and the results of the method using Shannon entropy were superior. Another HS localization method, based on the multiresolution product of lung sound wavelet coefficients and adopted from a previous study, was also implemented for comparison. The methods were tested on data from 6 healthy subjects recorded at low (7.5 ml/s/kg) and medium (15 ml/s/kg) flow rates. The error of the entropy-based method using Shannon entropy was found to be 0.1 +/- 0.4% and 1.0 +/- 0.7% at low and medium flow rates, respectively, which is significantly lower than that of the multiresolution product method and those of other methods reported in previous studies. The proposed method is fully automated and detects HS-included segments in a completely unsupervised manner. PMID:16532776
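A minimal sketch of the entropy-based localization idea: compute the Shannon entropy of the amplitude distribution over short overlapping windows of the lung-sound signal, then flag windows whose entropy deviates markedly from the baseline as candidate HS segments. The window length, hop size, and bin count below are illustrative choices, not the paper's settings:

```python
import math

def shannon_entropy(window, bins=8):
    """Shannon entropy (bits) of the amplitude histogram of one signal window."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0  # constant window carries no amplitude information
    counts = [0] * bins
    for x in window:
        k = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[k] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def entropy_track(signal, win=64, hop=32):
    """Entropy of overlapping windows along the signal.

    Candidate heart-sound segments are the windows whose entropy deviates
    from the lung-sound baseline; the thresholding step is left out here.
    """
    return [shannon_entropy(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, hop)]
```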
NASA Astrophysics Data System (ADS)
Oh, T.
2014-12-01
Typical studies on natural resources from a social science perspective tend to choose one type of resource (water, for example) and ask what factors contribute to the sustainable use or wasteful exploitation of that resource. However, climate change and economic development, which are putting increased pressure on local resources and presenting communities with greater trade-offs and potential conflicts, force us to consider the trade-offs between options for using a particular resource. A transdisciplinary approach that accurately captures the advantages and disadvantages of various possible resource uses is therefore particularly important in complex social-ecological systems, where concerns about inequality in resource use and access have become unavoidable. Resource management and policy certainly require a sound scientific understanding of the complex interconnections between nature and society. In contrast to typical international discussions, however, I discuss Japan not as an "advanced" case where various dilemmas have been successfully addressed by the government through the optimal use of technology, but rather as a nation with an emerging trend grounded in an awareness of the connections between local resources and the environment. Furthermore, from a historical viewpoint, the nexus of local resources is not a brand-new idea in the experience of environmental governance in Japan. Local environmental movements have emphasized the interconnection of local resources and have succeeded in prompting governmental action and policymaking. For this reason, local movements and local knowledge for resource governance warrant attention. This study focuses on historical cases of water resource management, including groundwater, and considers the contexts and conditions needed to address local resource problems holistically, paying particular attention to interactions between science and society. I
NASA Astrophysics Data System (ADS)
Sladek, J.; Sladek, V.; Zhang, Ch.
2008-02-01
A meshless local Petrov-Galerkin (MLPG) formulation is presented for the analysis of shear-deformable shallow shells with orthotropic material properties varying continuously through the shell thickness. Shear deformation of the shells is described by the Reissner theory. Analyses of shells under both static and dynamic loads are given. For the transient elastodynamic case, the Laplace transform is used to eliminate the time dependence of the field variables. A weak formulation with a unit test function transforms the set of governing equations into local integral equations on local subdomains in the plane domain of the shell. The meshless approximation based on the Moving Least-Squares (MLS) method is employed for the implementation.
Zhang, Yachao; Yang, Yang; Jiang, Hong
2013-12-12
The 3d-4f exchange interaction plays an important role in many lanthanide-based molecular magnetic materials such as single-molecule magnets and magnetic refrigerants. In this work, we study the 3d-4f magnetic exchange interactions in a series of Cu(II)-Gd(III) (3d(9)-4f(7)) dinuclear complexes based on the numerical atomic basis, norm-conserving pseudopotential method and the density functional theory plus Hubbard U correction approach (DFT+U). We obtain an improved description of the 4f electrons by including the semicore 5s5p states in the valence part of the Gd pseudopotential. The Hubbard U correction is employed to treat the strongly correlated Cu-3d and Gd-4f electrons, which significantly improves the agreement of the predicted exchange constants, J, with experiment, indicating the importance of an accurate description of the local Coulomb correlation. The high efficiency of the DFT+U approach enables us to perform calculations with molecular crystals, which in general improves the agreement between theory and experiment, achieving a mean absolute error smaller than 2 cm(-1). In addition, through analyzing the physical effects of U, we identify two magnetic exchange pathways. One is ferromagnetic and involves an interaction between the Cu-3d, O-2p (bridge ligand), and majority-spin Gd-5d orbitals. The other is antiferromagnetic, involves the Cu-3d, O-2p, and empty minority-spin Gd-4f orbitals, and is suppressed by the planar Cu-O-O-Gd structure. This study demonstrates the accuracy of the DFT+U method for evaluating the 3d-4f exchange interactions, provides a better understanding of the exchange mechanism in the Cu(II)-Gd(III) complexes, and paves the way for exploiting the magnetic properties of 3d-4f compounds containing lanthanides other than Gd. PMID:24274078
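As a reminder of how exchange constants of this kind are typically extracted: under an isotropic Heisenberg model, the simplest unprojected broken-symmetry estimate compares ferromagnetic and antiferromagnetic total energies. Conventions for the Hamiltonian prefactor vary between papers, so the form below is only one common choice, not necessarily the one used in this study:

$$
\hat H = -J\,\hat{\mathbf{S}}_{\mathrm{Cu}}\cdot\hat{\mathbf{S}}_{\mathrm{Gd}},
\qquad
J \approx \frac{E_{\mathrm{AF}} - E_{\mathrm{F}}}{2\,S_{\mathrm{Cu}}\,S_{\mathrm{Gd}}}
= \frac{E_{\mathrm{AF}} - E_{\mathrm{F}}}{2\cdot\tfrac12\cdot\tfrac72},
$$

with $S_{\mathrm{Cu}}=\tfrac12$ and $S_{\mathrm{Gd}}=\tfrac72$ for the 3d(9)-4f(7) pair, so that $J>0$ corresponds to ferromagnetic coupling.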
NASA Astrophysics Data System (ADS)
Hosseini, Vahid Reza; Shivanian, Elyas; Chen, Wen
2016-05-01
The purpose of the current investigation is to determine a numerical solution of the time-fractional diffusion-wave equation with damping for the Caputo fractional derivative of order α (1 < α ≤ 2). A meshless local radial point interpolation (MLRPI) scheme based on the Galerkin weak form is analyzed. The MLRPI approach is chosen because it does not require any background integration cells; instead, integrations are implemented over local quadrature domains, which are further simplified to regular, simple shapes to reduce the complexity of the computation. Unconditional stability and convergence of order O(τ^(6-2α)) are proved, where τ is the time step. Several numerical experiments are also presented to verify the theoretical analysis.
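For reference, the Caputo derivative of order α with 1 < α ≤ 2 that appears in such problems is defined via the second time derivative, and a model diffusion-wave equation with damping can be stated as follows (the damping coefficient γ and source term f are generic placeholders, not taken from the paper):

$$
{}^{C}D_t^{\alpha}u(x,t)
= \frac{1}{\Gamma(2-\alpha)}\int_0^t \frac{\partial^2 u(x,s)}{\partial s^2}\,(t-s)^{1-\alpha}\,\mathrm{d}s,
\qquad 1 < \alpha \le 2,
$$

$$
{}^{C}D_t^{\alpha}u + \gamma\,\frac{\partial u}{\partial t} = \nabla^2 u + f(x,t).
$$

For α = 2 the equation reduces to the damped wave equation, and as α → 1 it approaches the diffusion limit, which is why the intermediate range is called diffusion-wave.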
Tattoli, F.; Casavola, C.; Pierron, F.; Rotinat, R.; Pappalettere, C.
2011-01-17
One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of the material parameters governing the elasto-plastic behaviour of the fused and heat affected zones, as well as the base material, for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and has been shown to be useful and much less time-consuming than classical finite element model updating approaches applied to similar problems. The paper presents results and discusses the problem of selecting the weld zones for the identification.
Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert
2016-07-01
In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required. PMID:27494627
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
Evaluating a physician leadership development program - a mixed methods approach.
Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo
2016-05-16
Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yield results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit applying the evaluation strategy outlined in this study. PMID:27119393
Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G
2015-10-01
Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
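A minimal sketch of the clustering step on binary coded interview data, where each row is one participant's 0/1 code profile. Plain Lloyd's k-means is used because squared Euclidean distance on binary vectors equals the Hamming distance; initializing centers with the first k distinct rows is a simplification of the random restarts used in practice:

```python
def kmeans_binary(data, k, iters=20):
    """Lloyd's k-means on 0/1 code vectors (one row per participant).

    Returns (labels, centers); centers become per-cluster code prevalences.
    Assumes the data contain at least k distinct code profiles.
    """
    # Deterministic init: the first k distinct code profiles.
    centers = []
    for row in data:
        if list(row) not in centers:
            centers.append(list(row))
        if len(centers) == k:
            break
    labels = [0] * len(data)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean (= Hamming) distance.
        for i, row in enumerate(data):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(row, centers[c])))
        # Update step: each center becomes the mean code profile of its members.
        for c in range(k):
            members = [row for row, lab in zip(data, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers
```

The resulting cluster profiles (fraction of members carrying each code) are what an analyst would interpret alongside the qualitative material.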
Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner
MacDonell, Margaret M.; Haroun, Lynne A.; Teuschler, Linda K.; Rice, Glenn E.; Hertzberg, Richard C.; Butler, James P.; Chang, Young-Soo; Clark, Shanna L.; Johns, Alan P.; Perry, Camarie S.; Garcia, Shannon S.; Jacobi, John H.; Scofield, Marcienne A.
2013-01-01
The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities. PMID:23762048
High Explosive Verification and Validation: Systematic and Methodical Approach
NASA Astrophysics Data System (ADS)
Scovel, Christina; Menikoff, Ralph
2011-06-01
Verification and validation of high explosive (HE) models does not fit the standard mold for several reasons. First, there are no non-trivial test problems with analytic solutions. Second, an HE model depends on a burn rate and the equations of state (EOS) of both the reactants and the products. Third, there is a wide range of detonation phenomena, from initiation under various stimuli to propagation of curved detonation fronts with non-rigid confining materials. Fourth, in contrast to a shock wave in a non-reactive material, the reaction-zone width is physically significant and affects the behavior of a detonation wave. Because of these issues, a systematic and methodical approach to HE V & V is needed. Our plan is to build a test suite from the ground up. We have started with the cylinder test and have run simulations with several EOS models and burn models. We have compared with data and cross-compared the different runs to check the sensitivity to model parameters. A related issue for V & V is what experimental data are available for calibrating and testing models. For this purpose we have started a Web-based high explosive database (HED). The current status of HED will be discussed.
Experimental Validation of Normalized Uniform Load Surface Curvature Method for Damage Localization
Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo
2015-01-01
In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity irrespective of the damage location. In this study, damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under an ambient excitation. Two damage scenarios were considered for the single damage case as well as the multiple damages case by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests were performed. It was found that the damage locations could be identified successfully without any false-positive or false-negative detections using the proposed method. For comparison, the damage detection performances were compared with those of two other well-known methods based on the modal flexibility matrix, namely, the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for investigating the damage locations of simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise. PMID:26501286
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
NASA Astrophysics Data System (ADS)
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
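The selection criterion described above, flagging candidate elements by the locally largest density gradient, can be illustrated with a small sketch. The 1D density profile and the refine/coarsen thresholds below are illustrative assumptions, not the paper's actual discretization.

```python
import numpy as np

# Hypothetical 1D density field with a thin liquid-vapor interface at x = 0.5
# (an illustrative stand-in for the NSK density, not the paper's test case).
x = np.linspace(0.0, 1.0, 41)                  # element centers
rho = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.02))  # sharp interface profile

# Refinement indicator: magnitude of the local density gradient.
grad = np.abs(np.gradient(rho, x))

# Flag candidates relative to the largest local gradient; the 0.5 and 0.05
# thresholds are illustrative choices, not taken from the paper.
refine = grad > 0.5 * grad.max()
coarsen = grad < 0.05 * grad.max()
```

Only elements near the interface are flagged for refinement, which is the point of the criterion: resolution is concentrated where the density varies most rapidly.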
Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches
Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D.; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel
2016-01-01
Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for
Autonomous pallet localization and picking for industrial forklifts: a robust range and look method
NASA Astrophysics Data System (ADS)
Baglivo, L.; Biasi, N.; Biral, F.; Bellomo, N.; Bertolazzi, E.; Da Lio, M.; De Cecco, M.
2011-08-01
A combined double-sensor architecture, laser and camera, and a new algorithm named RLPF are presented as a solution to the problem of identifying and localizing a pallet whose position and angle are a priori known only with large uncertainty. Solving this task for autonomous robot forklifts is of great value to the logistics industry. The state of the art is described to show how our approach overcomes the limitations of using either laser ranging or vision alone. An extensive experimental campaign and uncertainty analysis are presented. For the docking task, a new dynamic nonlinear path-planning method that takes vehicle dynamics into account is proposed.
NASA Astrophysics Data System (ADS)
Khemani, Vedika; Pollmann, Frank; Sondhi, S. L.
2016-06-01
The eigenstates of many-body localized (MBL) Hamiltonians exhibit low entanglement. We adapt the highly successful density-matrix renormalization group method, which is usually used to find modestly entangled ground states of local Hamiltonians, to find individual highly excited eigenstates of MBL Hamiltonians. The adaptation builds on the distinctive spatial structure of such eigenstates. We benchmark our method against the well-studied random field Heisenberg model in one dimension. At moderate to large disorder, the method successfully obtains excited eigenstates with high accuracy, thereby enabling a study of MBL systems at much larger system sizes than those accessible to exact-diagonalization methods.
Sonic-box method employing local Mach number for oscillating wings with thickness
NASA Technical Reports Server (NTRS)
Ruo, S. Y.
1978-01-01
A computer program was developed to account approximately for the effects of finite wing thickness in the transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic-box program for planar wings, which was previously extended to include the effects of the swept trailing edge and the thickness of the wing. The nonuniform flow caused by finite thickness is accounted for by applying the local linearization concept. The thickness effect, expressed in terms of the local Mach number, is included in the basic solution to replace the coordinate transformation method used in the earlier work. Calculations were made for a delta wing and a rectangular wing performing plunge and pitch oscillations, and the results were compared with those obtained from other methods. An input guide and a complete listing of the computer code are presented.
Novel Methods of Intraoperative Localization and Margin Assessment of Pulmonary Nodules.
Keating, Jane; Singhal, Sunil
2016-01-01
Lung cancer screening has led to frequent diagnosis of solitary pulmonary nodules, many of which require surgical biopsy for diagnosis and intervention. Subcentimeter and central nodules are particularly difficult to visualize or palpate during surgery, thus nodule localization can be a difficult problem for the thoracic surgeon. Although minimally invasive techniques including transthoracic computed tomography and bronchoscopic-guided biopsy may establish a diagnosis, these methods do not help locate nodules during surgery and can lead to inadequate tissue sampling. Therefore, surgical biopsy is often required for diagnosis and management of solitary pulmonary nodules. Additionally, after an excision, intraoperative margin assessment is important to prevent local recurrence. This is important for bronchial margins following lobectomy or parenchymal margins following sublobar resection. First, we examine methods of preoperative lesion marking, including wire placement, dye marking, ultrasound, fluoroscopy, and molecular imaging. Second, we describe the current state of the art in intraoperative margin assessment techniques. PMID:27568150
A reliable acoustic path: Physical properties and a source localization method
NASA Astrophysics Data System (ADS)
Duan, Rui; Yang, Kun-De; Ma, Yuan-Liang; Lei, Bo
2012-12-01
The physical properties of a reliable acoustic path (RAP) are analysed and subsequently a weighted-subspace-fitting matched field (WSF-MF) method for passive localization is presented by exploiting the properties of the RAP environment. The RAP is an important acoustic duct in the deep ocean, which occurs when the receiver is placed near the bottom where the sound velocity exceeds the maximum sound velocity in the vicinity of the surface. It is found that in the RAP environment the transmission loss is rather low and no blind zone of surveillance exists in a medium range. The ray theory is used to explain these phenomena. Furthermore, the analysis of the arrival structures shows that the source localization method based on arrival angle is feasible in this environment. However, the conventional methods suffer from the complicated and inaccurate estimation of the arrival angle. In this paper, a straightforward WSF-MF method is derived to exploit the information about the arrival angles indirectly. The method is to minimize the distance between the signal subspace and the spanned space by the array manifold in a finite range-depth space rather than the arrival-angle space. Simulations are performed to demonstrate the features of the method, and the results are explained by the arrival structures in the RAP environment.
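The subspace-fitting idea, matching the signal subspace against a manifold of candidate replicas rather than estimating arrival angles explicitly, can be sketched in simplified form. The example below fits over a plane-wave angle grid rather than the paper's range-depth manifold, uses an unweighted fit, and invents the array geometry and SNR.

```python
import numpy as np

rng = np.random.default_rng(5)
M, snapshots = 8, 200                    # hypothetical array size and snapshots
true_angle = 20.0                        # deg (invented source position)
angles = np.linspace(-60.0, 60.0, 121)   # candidate grid, 1 deg steps

def steering(theta_deg):
    """Unit-norm plane-wave replica for a half-wavelength-spaced line array."""
    phase = np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(M)
    return np.exp(1j * phase) / np.sqrt(M)

# Simulated snapshots: one source at true_angle plus white noise.
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
noise = 0.1 * (rng.normal(size=(M, snapshots))
               + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(steering(true_angle), s) + noise

# Signal subspace from the sample covariance (one dominant eigenvector).
R = X @ X.conj().T / snapshots
_, U = np.linalg.eigh(R)                 # eigenvalues in ascending order
u_sig = U[:, -1]

# Subspace fitting: pick the replica closest to the signal subspace.
scores = np.array([np.abs(steering(a).conj() @ u_sig) for a in angles])
est = angles[scores.argmax()]
```

In the paper's WSF-MF formulation the replicas are full matched-field vectors computed for a range-depth grid in the RAP environment, and the fit is weighted; the structure of the search is the same.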
The local properties of ocean surface waves by the phase-time method
NASA Technical Reports Server (NTRS)
Huang, Norden E.; Long, Steven R.; Tung, Chi-Chao; Donelan, Mark A.; Yuan, Yeli; Lai, Ronald J.
1992-01-01
A new approach using phase information to view and study the properties of frequency modulation, wave group structures, and wave breaking is presented. The method is applied to ocean wave time series data and a new type of wave group (containing the large 'rogue' waves) is identified. The method also has the capability of broad applications in the analysis of time series data in general.
Veselov, E I
2011-01-01
The article deals with specifying a systemic approach to the ecological safety of radiation-hazardous objects. The authors present the stages of work and an algorithm for decisions on preserving the reliability of storage for radiation-hazardous waste. The findings are that ecological safety can be provided through three approaches: complete removal of the radiation-hazardous waste, removal of the more dangerous waste from the present buildings, and increasing the reliability of prolonged localization of the radiation-hazardous waste at its initial place. The systemic approach presented could be implemented at various radiation-hazardous facilities. PMID:21774123
Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems
NASA Astrophysics Data System (ADS)
Cai, K.; Wonham, W. M.
2009-03-01
A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First, a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicle (AGV) example is presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.
Method to repair localized amplitude defects in a EUV lithography mask blank
Stearns, Daniel G.; Sweeney, Donald W.; Mirkarimi, Paul B.; Chapman, Henry N.
2005-11-22
A method and apparatus are provided for the repair of an amplitude defect in a multilayer coating. A significant number of layers underneath the amplitude defect are undamaged. The repair technique restores the local reflectivity of the coating by physically removing the defect, leaving a wide, shallow crater that exposes the underlying intact layers. The particle, pit, or scratch is first removed; the remaining damaged region is then etched away without disturbing the intact underlying layers.
A method for determining the local magnetic induction near the cut edge of the ferromagnetic strip
NASA Astrophysics Data System (ADS)
Gmyrek, Z.
2016-05-01
The paper deals with the problem of precise determination of the local magnetic induction. The author proposes a new way of making the measurements using the classical needle probe method. The proposed algorithm, combined with the proposed approximation of the ΔU voltage drop, contributes to a significant increase in the accuracy of the determination of the magnetic induction distribution in the zone near the cut edge.
Adaptive meshless local maximum-entropy finite element method for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Wu, C. T.; Young, D. L.; Hong, H. K.
2014-01-01
In this paper, a meshless local maximum-entropy finite element method (LME-FEM) is proposed to solve the 1D Poisson equation and steady-state convection-diffusion problems at various Peclet numbers in both 1D and 2D. By using the local maximum-entropy (LME) approximation scheme to construct the element shape functions in the finite element method (FEM) formulation, additional nodes can be introduced within an element without any mesh refinement to increase the accuracy of the numerical approximation of the unknown function; this procedure is similar to conventional p-refinement, but it does not increase the element connectivity and thus avoids an ill-conditioned matrix. The resulting LME-FEM preserves several significant characteristics of conventional FEM, such as the Kronecker-delta property on element vertices, partition of unity of the shape functions, and exact reproduction of constant and linear functions. Furthermore, owing to the essential properties of the LME approximation scheme, nodes can be introduced in an arbitrary way while the continuity of the shape function along element edges is maintained. No transition element is needed to connect elements of different orders. The property of arbitrary local refinement makes LME-FEM a numerical method that can adaptively solve various problems for which troublesome local mesh refinement is generally necessary to obtain reasonable solutions. Several numerical examples with dramatically varying solutions are presented to test the capability of the current method. The numerical results show that LME-FEM can obtain much better and more stable solutions than conventional FEM with linear elements.
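The LME shape functions themselves can be computed with a short Newton iteration on a dual variable. The sketch below is a minimal 1D version (the node set and the locality parameter `beta` are illustrative choices, not the paper's) and exhibits the partition-of-unity and linear-reproduction properties mentioned in the abstract.

```python
import numpy as np

def lme_shape(x, nodes, beta=4.0, tol=1e-12):
    """1D local max-entropy shape functions at point x (minimal sketch)."""
    dx = nodes - x
    lam = 0.0
    for _ in range(100):              # Newton iteration on the dual variable
        w = np.exp(-beta * dx**2 + lam * dx)
        w /= w.sum()                  # partition of unity by construction
        r = w @ dx                    # first-moment constraint residual
        if abs(r) < tol:
            break
        var = w @ dx**2 - r**2        # dr/dlam (a variance, > 0)
        lam -= r / var
    return w

nodes = np.linspace(0.0, 1.0, 6)      # illustrative node set
x = 0.37
N = lme_shape(x, nodes)
```

Driving the first-moment residual to zero is exactly what makes the scheme reproduce linear functions: sum(N_a * x_a) equals the evaluation point x.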
Digital Sequences and a Time Reversal-Based Impact Region Imaging and Localization Method
Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Qian, Weifeng
2013-01-01
To reduce the time and cost of damage inspection, on-line impact monitoring of aircraft composite structures is needed. A digital monitor based on an array of piezoelectric transducers (PZTs) is developed to record the region of impacts on-line. It is small in size, lightweight and has low power consumption, but there are two problems with the impact alarm region localization method of the digital monitor at the current stage. The first is that the accuracy rate of the impact alarm region localization is low, especially on complex composite structures. The second is that the area of the impact alarm region is large when a large-scale structure is monitored and the number of PZTs is limited, which increases the time and cost of damage inspections. To solve these two problems, an impact alarm region imaging and localization method based on digital sequences and time reversal is proposed. In this method, the frequency band of the impact response signals is first estimated based on the digital sequences. Then, characteristic signals of the impact response signals are constructed by sinusoidal modulation signals. Finally, the phase synthesis time reversal impact imaging method is adopted to obtain the impact region image. Based on the image, an error ellipse is generated to give the final impact alarm region. A validation experiment is implemented on a complex composite wing box of a real aircraft. The validation results show that the accuracy rate of impact alarm region localization is approximately 100%. The area of the impact alarm region can be reduced, and the number of PZTs needed to cover the same impact monitoring region is reduced by more than half. PMID:24084123
Linear-scaling explicitly correlated treatment of solids: Periodic local MP2-F12 method
Usvyat, Denis
2013-11-21
Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.
Feasibility of A-mode ultrasound attenuation as a monitoring method of local hyperthermia treatment.
Manaf, Noraida Abd; Aziz, Maizatul Nadwa Che; Ridzuan, Dzulfadhli Saffuan; Mohamad Salim, Maheza Irna; Wahab, Asnida Abd; Lai, Khin Wee; Hum, Yan Chai
2016-06-01
Recently, there has been increasing interest in the use of local hyperthermia treatment for a variety of clinical applications. The desired therapeutic outcome in local hyperthermia treatment is achieved by raising the local temperature to surpass the tissue coagulation threshold, resulting in tissue necrosis. In oncology, local hyperthermia is used as an effective way to destroy cancerous tissues and is said to have the potential to replace conventional treatment regimes like surgery, chemotherapy or radiotherapy. However, the inability to closely monitor temperature elevations from hyperthermia treatment in real time with high accuracy continues to limit its clinical applicability. Local hyperthermia treatment requires a real-time monitoring system to observe the progression of the destroyed tissue during and after the treatment. Ultrasound is one of the modalities that have great potential for local hyperthermia monitoring, as it is non-ionizing, convenient and has relatively simple signal processing requirements compared to magnetic resonance imaging and computed tomography. In a two-dimensional ultrasound imaging system, changes in tissue microstructure during local hyperthermia treatment are observed in terms of pixel value analysis extracted from the ultrasound image itself. Although 2D ultrasound has been shown to be the most widely used system for monitoring hyperthermia in the ultrasound imaging family, 1D ultrasound could offer real-time monitoring and enables quantitative measurement to be conducted faster and with a simpler measurement instrument. Therefore, this paper proposes a new local hyperthermia monitoring method that is based on one-dimensional ultrasound. Specifically, the study investigates the effect of ultrasound attenuation in normal and pathological breast tissue when the temperature in tissue is varied between 37 and 65 °C during local hyperthermia treatment. Besides that, the total protein content measurement was also
Miskell, Georgia; Salmond, Jennifer; Longley, Ian; Dirks, Kim N
2015-08-01
Differences in urban design features may affect emission and dispersion patterns of air pollution at local-scales within cities. However, the complexity of urban forms, interdependence of variables, and temporal and spatial variability of processes make it difficult to quantify determinants of local-scale air pollution. This paper uses a combination of dense measurements and a novel approach to land-use regression (LUR) modeling to identify key controls on concentrations of ambient nitrogen dioxide (NO2) at a local-scale within a central business district (CBD). Sixty-two locations were measured over 44 days in Auckland, New Zealand at high density (study area 0.15 km(2)). A local-scale LUR model was developed, with seven variables identified as determinants based on standard model criteria. A novel method for improving standard LUR design was developed using two independent data sets (at local and "city" scales) to generate improved accuracy in predictions and greater confidence in results. This revised multiscale LUR model identified three urban design variables (intersection, proximity to a bus stop, and street width) as having the more significant determination on local-scale air quality, and had improved adaptability between data sets. PMID:26151151
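A land-use regression is, at its core, an ordinary least-squares fit of measured concentrations on land-use predictors. The sketch below uses synthetic data for the three urban design variables named above; the coefficients, site values, and noise level are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 62  # number of measurement sites, matching the study design

# Hypothetical predictors (invented, not the study's data): intersection
# count, distance to nearest bus stop (m), and street width (m).
intersections = rng.poisson(4, n).astype(float)
bus_dist = rng.uniform(10.0, 300.0, n)
street_width = rng.uniform(8.0, 30.0, n)

# Synthetic NO2 response, qualitatively consistent with the findings above:
# more intersections raise NO2; distance from bus stops and wider streets lower it.
no2 = (20.0 + 1.5 * intersections - 0.02 * bus_dist
       - 0.3 * street_width + rng.normal(0.0, 2.0, n))

# Ordinary least squares: the core of a land-use regression model.
A = np.column_stack([np.ones(n), intersections, bus_dist, street_width])
beta, *_ = np.linalg.lstsq(A, no2, rcond=None)
pred = A @ beta
r2 = 1.0 - ((no2 - pred) ** 2).sum() / ((no2 - no2.mean()) ** 2).sum()
```

The paper's multiscale refinement validates such a fit against a second, independent data set at city scale; the regression machinery itself is unchanged.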
Source localization of turboshaft engine broadband noise using a three-sensor coherence method
NASA Astrophysics Data System (ADS)
Blacodon, Daniel; Lewy, Serge
2015-03-01
Turboshaft engines can become the main source of helicopter noise at takeoff. Inlet radiation mainly comes from the compressor tones, but aft radiation is more intricate: turbine tones usually lie above the audible frequency range and do not contribute to the weighted sound levels; the jet is secondary and radiates low noise levels. A broadband component is the most annoying, but its sources are not well known (it is called internal or core noise). The present study was made within the framework of the European project TEENI (Turboshaft Engine Exhaust Noise Identification). Its main objective was to localize the broadband sources in order to better reduce them. Several diagnostic techniques were implemented by the various TEENI partners. As regards ONERA, a first attempt at separating sources was made in the past with Turbomeca using a three-signal coherence method (TSM) to reject non-acoustic background noise. The main difficulty when using TSM is the assessment of the frequency range where the results are valid. This drawback has been circumvented in the TSM implemented in TEENI. Measurements were made on a highly instrumented Ardiden turboshaft engine in the Turbomeca open-air test bench. Two engine powers (approach and takeoff) were selected to apply TSM. Two internal pressure probes were located in various cross-sections, either behind the combustion chamber (CC), the high-pressure turbine (HPT), the free-turbine first stage (TL), or in four nozzle sections. The third transducer was a far-field microphone located around the maximum of radiation, at 120° from the intake centerline. The key result is that coherence increases from CC to HPT and TL, then decreases in the nozzle up to the exit. Pressure fluctuations from HPT and TL are very coherent with the far-field acoustic spectra up to 700 Hz. They are thus the main acoustic source and can be attributed to indirect combustion noise (accuracy decreases above 700 Hz because coherence is lower, but far-field sound spectra
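The three-signal idea, combining cross-spectra among two reference probes and a microphone so that each sensor's independent noise averages out, can be sketched with synthetic signals. The source model, noise levels, and block-averaging parameters below are illustrative assumptions, not the TEENI measurement setup.

```python
import numpy as np

rng = np.random.default_rng(7)
n_seg, n_blocks = 1024, 64
N = n_seg * n_blocks

# Synthetic signals: a common broadband (combustion-like) source seen by two
# internal probes and a far-field microphone, each with independent noise.
source = rng.normal(size=N)
p1 = source + 0.8 * rng.normal(size=N)
p2 = source + 0.8 * rng.normal(size=N)
mic = source + 0.5 * rng.normal(size=N)

def cross_spectrum(a, b):
    """Block-averaged cross-spectrum (a crude Welch-style estimate)."""
    acc = np.zeros(n_seg // 2 + 1, dtype=complex)
    for k in range(n_blocks):
        A = np.fft.rfft(a[k * n_seg:(k + 1) * n_seg])
        B = np.fft.rfft(b[k * n_seg:(k + 1) * n_seg])
        acc += np.conj(A) * B
    return acc / n_blocks

# Three-signal estimate: |S13 S23 / S12| retains only the power coherent
# across all three sensors, rejecting each sensor's local noise.
G_coh = np.abs(cross_spectrum(p1, mic) * cross_spectrum(p2, mic)
               / cross_spectrum(p1, p2))
coherent_fraction = np.median(G_coh / np.abs(cross_spectrum(mic, mic)))
```

With the noise levels chosen here, the coherent fraction at the microphone is 1/1.25 = 0.8 by construction, and the three-signal estimate recovers it; an ordinary two-signal coherence would be biased by the probes' own hydrodynamic noise.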
Geodetic methods for detecting volcanic unrest: a theoretical approach
NASA Astrophysics Data System (ADS)
Fernández, José; Carrasco, José M.; Rundle, John B.; Araña, Vicente
In this paper we study the application of different geodetic techniques to volcanic activity monitoring, using theoretical analysis. This methodology is very useful for obtaining an idea of the most appropriate (and efficient) monitoring method, mainly when there are no records of geodetic changes previous to volcanic activity. The analysis takes into account the crustal structure of the area, its geology, and its known volcanic activity to estimate the deformation and gravity changes that might precede eruptions. The deformation model used includes the existing gravity field and vertical changes in the crustal properties. Both factors can have a considerable effect on computed deformation and gravity changes. Topography should be considered when there is a steep slope (greater than 10°). The case study of Teide stratovolcano (Tenerife, Canary Islands), for which no deformation or gravity changes are available, is used as a test. To avoid considering topography, we worked at the lowest level of Las Cañadas and examined a smaller area than the whole caldera or island. Therefore, the results are only a first approach to the most adequate geodetic monitoring system. The methodology can also be applied to active areas where volcanic risk is not associated with a stratovolcano but instead with monogenetic scattered centers, especially when sites must be chosen in terms of detection efficiency or existing facilities. The Canary Islands provide a good example of this type of active volcanic areas, and we apply our model to the island of Lanzarote to evaluate the efficiency of the monitoring system installed at the existing geodynamic station. On this island topography is not important. The results of our study show clearly that the most appropriate geodetic volcano monitoring system is not the same for all different volcanic zones and types, and the particular properties of each volcano/zone must be taken into account when designing each system.
AN ANALYTICAL APPROACH TO RESEARCH ON INSTRUCTIONAL METHODS.
ERIC Educational Resources Information Center
GAGE, N.L.
The approach used at Stanford University to research on teaching was discussed, and the author explained the concepts of "technical skills," "microteaching," and "microcriteria" that were the basis of the development of this approach to research and to Stanford's secondary-teacher education program. The author presented a basic distinction between…
Hard X-ray nanoimaging method using local diffraction from metal wire
Takano, Hidekazu; Konishi, Shigeki; Shimomura, Sho; Azuma, Hiroaki; Tsusaka, Yoshiyuki; Kagoshima, Yasushi
2014-01-13
A simple hard X-ray imaging method achieving a high spatial resolution is proposed. Images are obtained by scanning a metal wire through the wave field to be measured and rotating the sample to collect data for back projection calculations; the local diffraction occurring at the edges of the metal wire operates as a narrow line probe. In-line holograms of a test sample were obtained with a spatial resolution of better than 100 nm. The potential high spatial resolution of this method is shown by calculations using diffraction theory.
NASA Astrophysics Data System (ADS)
Yang, Xiao-Jun; Srivastava, H. M.; He, Ji-Huan; Baleanu, Dumitru
2013-10-01
In this Letter, we propose to use the Cantor-type cylindrical-coordinate method in order to investigate a family of local fractional differential operators on Cantor sets. Some testing examples are given to illustrate the capability of the proposed method for the heat-conduction equation on a Cantor set and the damped wave equation in fractal strings. It is seen to be a powerful tool to convert differential equations on Cantor sets from Cantorian-coordinate systems to Cantor-type cylindrical-coordinate systems.
A numerical method of tracing a vortical axis along local topological axis line
NASA Astrophysics Data System (ADS)
Nakayama, Katsuyuki; Hasegawa, Hideki
2016-06-01
A new numerical method is presented to trace or identify a vortical axis in a flow, based on Galilean-invariant flow topology. We focus on the local flow topology specified by the eigenvalues and eigenvectors of the velocity gradient tensor, and extract the axis component from its flow trajectory. The eigen-vortical-axis line is defined from the eigenvector of the real eigenvalue of the velocity gradient tensor where the tensor has a pair of conjugate complex eigenvalues. This numerical method integrates the eigen-vortical-axis line and traces a vortical axis in terms of the invariant flow topology, which makes it possible to investigate the features of the topology-based vortical axis.
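The local topology test described in this abstract can be sketched with a small eigen-decomposition routine; the following NumPy illustration is only a sketch of the idea (the function name, numerical tolerance, and test tensor are our own assumptions, not taken from the paper):

```python
import numpy as np

def local_vortical_axis(grad_u):
    """Return the eigen-vortical-axis direction for a 3x3 velocity
    gradient tensor, or None if the local topology is not swirling.

    Where grad_u has one real eigenvalue and a complex-conjugate pair,
    the eigenvector of the real eigenvalue gives the direction of the
    local swirling axis."""
    eigvals, eigvecs = np.linalg.eig(grad_u)
    # Indices of (numerically) real eigenvalues.
    real_idx = np.where(np.abs(eigvals.imag) < 1e-12)[0]
    if len(real_idx) != 1:
        return None  # no complex-conjugate pair -> no swirling topology
    axis = eigvecs[:, real_idx[0]].real
    return axis / np.linalg.norm(axis)

# Solid-body rotation about z combined with weak in-plane stretching:
# eigenvalues are 0.3 (real) and 0.1 +/- 2i, so the axis should be +/- z.
grad_u = np.array([[0.1, -2.0, 0.0],
                   [2.0,  0.1, 0.0],
                   [0.0,  0.0, 0.3]])
axis = local_vortical_axis(grad_u)
```

A pure strain tensor such as `np.diag([1.0, 2.0, 3.0])` has three real eigenvalues, so the routine returns `None` there, consistent with the requirement of a conjugate complex pair.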
Stress corrosion cracking of zirconium cladding tubes: I. Proximate local SCC testing method
NASA Astrophysics Data System (ADS)
Rozhnov, A. B.; Belov, V. A.; Nikulin, S. A.; Khanzhin, V. G.
2010-10-01
The stress corrosion cracking (SCC) methods of testing zirconium cladding tubes are analyzed. A proximate method is proposed for estimating the SCC of fuel claddings in an iodine-containing environment with a limited contact zone between the metal and the corrosive medium and simultaneous measurement of acoustic emission (AE) from forming corrosion defects. Criteria for estimating the SCC resistance of the tubes are proposed based on the measured AE and the corrosion damage of the tube material. The results of local SCC tests of cladding tubes made of E110 and E635 zirconium alloys are presented.
A Meshfree Method based on the Local Partition of Unity for Cohesive Cracks
NASA Astrophysics Data System (ADS)
Rabczuk, Timon; Zi, Goangseup
2007-05-01
We present a meshfree method based on the local partition of unity for cohesive cracks. The cracks are described by a jump in the displacement field for particles whose domain of influence is cut by the crack. Particles with a partially cut domain of influence are enriched with branch functions. Crack propagation is governed by the material stability condition. Due to its smoothness and higher-order continuity, the method is very accurate, as demonstrated for several quasi-static and dynamic crack propagation examples.
NASA Technical Reports Server (NTRS)
Boldman, D. R.; Schmidt, J. F.; Ehlers, R. C.
1972-01-01
An empirical modification of an existing integral energy turbulent boundary layer method is proposed in order to improve the estimates of local heat transfer in converging-diverging nozzles and consequently, provide better assessments of the total or integrated heat transfer. The method involves the use of a modified momentum-heat analogy which includes an acceleration term comprising the nozzle geometry and free stream velocity. The original and modified theories are applied to heat transfer data from previous studies which used heated air in 30 deg - 15 deg, 45 deg - 15 deg, and 60 deg - 15 deg water-cooled nozzles.
Neudorf, Cory; Fuller, Daniel; Cushon, Jennifer; Glew, Riley; Turner, Hollie; Ugolini, Cristina
2015-01-01
Background: We present the health inequalities analytic approach used by the Saskatoon Health Region to examine health equity. Our aim was to develop a method that will enable health regions to prioritize action on health inequalities. Methods: Data from admissions to hospital, physician billing, reportable diseases, vital statistics and childhood immunizations in the city of Saskatoon were analyzed for the years ranging from 1995 to 2011. Data were aggregated to the dissemination area level. The Pampalon deprivation index was used as the measure of socioeconomic status. We calculated annual rates per 1000 people for each outcome. Rate ratios, rate differences, area-level concentration curves and area-level concentration coefficients quantified inequality. An Inequalities Prioritization Matrix was developed to prioritize action for the outcomes showing the greatest inequality. The outcomes measured were cancer, intentional self-harm, chronic obstructive pulmonary disease, mental illness, heart disease, diabetes, injury, stroke, chlamydia, tuberculosis, gonorrhea, hepatitis C, high birth weight, low birth weight, teen abortion, teen pregnancy, infant mortality and all-cause mortality. Results: According to the Inequalities Prioritization Matrix, injuries and chronic obstructive pulmonary disease were the first and second priorities, respectively, that needed to be addressed related to inequalities in admissions to hospital. For physician billing, mental disorders and diabetes were high-priority areas. Differences in teen pregnancy and all-cause mortality were the most unequal in the vital statistics data. For communicable diseases, hepatitis C was the highest priority. Interpretation: Our findings show that health inequalities exist at the local level and that a method can be developed to prioritize action on these inequalities. Policies should consider health inequalities and adopt population-based and targeted actions to reduce inequalities. PMID:27022600
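The rate-ratio and rate-difference measures that feed such a prioritization matrix can be sketched as follows; the quintile split, per-1000 scaling, and synthetic numbers are illustrative assumptions, not the study's actual procedure or data:

```python
import numpy as np

def inequality_measures(events, population, deprivation):
    """Per-quintile rates, rate ratio, and rate difference for an
    outcome across small areas (e.g. dissemination areas) ranked by a
    deprivation index. A generic sketch of the inequality summary
    measures, not the Region's exact analysis."""
    events = np.asarray(events, float)
    population = np.asarray(population, float)
    order = np.argsort(deprivation)  # least -> most deprived
    # Split the ordered areas into five equal-sized groups (quintiles).
    q_events = [g.sum() for g in np.array_split(events[order], 5)]
    q_pop = [g.sum() for g in np.array_split(population[order], 5)]
    rates = 1000.0 * np.array(q_events) / np.array(q_pop)  # per 1000
    # Ratio and difference between most- and least-deprived quintiles.
    return rates, rates[-1] / rates[0], rates[-1] - rates[0]

# Ten areas with deprivation rising with index; event counts rise with
# deprivation, so the most-deprived quintile shows the highest rate.
pop = np.full(10, 1000)
ev = np.array([5, 5, 6, 6, 8, 8, 10, 10, 14, 14])
rates, ratio, diff = inequality_measures(ev, pop, np.arange(10))
```

With these synthetic counts the most-deprived quintile has 14 events per 1000 versus 5 per 1000 in the least deprived, giving a rate ratio of 2.8 and a rate difference of 9 per 1000.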
State-Based Curriculum-Making: Approaches to Local Curriculum Work in Norway and Finland
ERIC Educational Resources Information Center
Mølstad, Christina Elde
2015-01-01
This article investigates how state authorities in Norway and Finland design national curriculum to provide different policy conditions for local curriculum work in municipalities and schools. The topic is explored by comparing how national authorities in Norway and Finland create a scope for local curriculum. The data consist of interviews with…
A locally stabilized immersed boundary method for the compressible Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Brehm, C.; Hader, C.; Fasel, H. F.
2015-08-01
A higher-order immersed boundary method for solving the compressible Navier-Stokes equations is presented. The distinguishing feature of this new immersed boundary method is that the coefficients of the irregular finite-difference stencils in the vicinity of the immersed boundary are optimized to obtain improved numerical stability. This basic idea was introduced in a previous publication by the authors for the advection step in the projection method used to solve the incompressible Navier-Stokes equations. This paper extends the original approach to the compressible Navier-Stokes equations considering flux vector splitting schemes and viscous wall boundary conditions at the immersed geometry. In addition to the stencil optimization procedure for the convective terms, this paper discusses other key aspects of the method, such as imposing flux boundary conditions at the immersed boundary and the discretization of the viscous flux in the vicinity of the boundary. Extensive linear stability investigations of the immersed scheme confirm that a linearly stable method is obtained. The method of manufactured solutions is used to validate the expected higher-order accuracy and to study the error convergence properties of this new method. Steady and unsteady, 2D and 3D canonical test cases are used for validation of the immersed boundary approach. Finally, the method is employed to simulate the laminar to turbulent transition process of a hypersonic Mach 6 boundary layer flow over a porous wall and subsonic boundary layer flow over a three-dimensional spherical roughness element.
Zander, Katrin; Stolz, Hanna; Hamm, Ulrich
2013-03-01
Ethical consumerism is a growing trend worldwide. Ethical consumers' expectations are increasing and neither the Fairtrade nor the organic farming concept covers all the ethical concerns of consumers. Against this background the aim of this research is to elicit consumers' preferences regarding organic food with additional ethical attributes and their relevance at the market place. A mixed methods research approach was applied by combining an Information Display Matrix, Focus Group Discussions and Choice Experiments in five European countries. According to the results of the Information Display Matrix, 'higher animal welfare', 'local production' and 'fair producer prices' were preferred in all countries. These three attributes were discussed with Focus Groups in depth, using rather emotive ways of labelling. While the ranking of the attributes was the same, the emotive way of communicating these attributes was, for the most part, disliked by participants. The same attributes were then used in Choice Experiments, but with completely revised communication arguments. According to the results of the Focus Groups, the arguments were presented in a factual manner, using short and concise statements. In this research step, consumers in all countries except Austria gave priority to 'local production'. 'Higher animal welfare' and 'fair producer prices' turned out to be relevant for buying decisions only in Germany and Switzerland. According to our results, there is substantial potential for product differentiation in the organic sector through making use of production standards that exceed existing minimum regulations. The combination of different research methods in a mixed methods approach proved to be very helpful. The results of earlier research steps provided the basis from which to learn - findings could be applied in subsequent steps, and used to adjust and deepen the research design. PMID:23207189
From Object Fields to Local Variables: A Practical Approach to Field-Sensitive Analysis
NASA Astrophysics Data System (ADS)
Albert, Elvira; Arenas, Puri; Genaim, Samir; Puebla, German; Ramírez Deantes, Diana Vanessa
Static analysis which takes into account the value of data stored in the heap is typically considered complex and computationally intractable in practice. Thus, most static analyzers do not keep track of object fields (or fields for short), i.e., they are field-insensitive. In this paper, we propose locality conditions for soundly converting fields into local variables. This way, field-insensitive analysis over the transformed program can infer information on the original fields. Our notion of locality is context-sensitive and can be applied both to numeric and reference fields. We propose then a polyvariant transformation which actually converts object fields meeting the locality condition into variables and which is able to generate multiple versions of code when this leads to increasing the amount of fields which satisfy the locality conditions. We have implemented our analysis within a termination analyzer for Java bytecode.
Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun
2015-01-01
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203
Physics-based approach to chemical source localization using mobile robotic swarms
NASA Astrophysics Data System (ADS)
Zarzhitsky, Dimitri
2008-07-01
Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically inspired CPT methods: chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
Haarmann, Frank; Koch, Katrin; Jeglič, Peter; Pecher, Oliver; Rosner, Helge; Grin, Yuri
2011-06-27
The results of the investigation of MGa(2) with M = Ca, Sr, Ba and of MGa(4) with M = Na, Ca, Sr, Ba by a combined application of NMR spectroscopy and quantum mechanical calculations are comprehensively evaluated. The electric-field gradient (EFG) was identified as the most reliable measure to study intermetallic compounds, since it is accessible with high precision by quantum mechanical calculations and, for nuclear spin I>1/2, by NMR spectroscopy. The EFG values obtained by NMR spectroscopy and quantum mechanical calculations agree very well for both series of investigated compounds. A deconvolution of the calculated EFGs into their contributions reveals its sensitivity to the local environment of the atoms. The EFGs of the investigated di- and tetragallides are dominated by the population of the p(x)-, p(y)-, and p(z)-like states of the Ga atoms. A general combined approach for the investigation of disordered intermetallic compounds by application of diffraction methods, NMR spectroscopy, and quantum mechanical calculations is suggested. This scheme can also be applied to other classes of crystalline disordered inorganic materials. PMID:21590820
The analysis of underclad cracks in large-scale tests using the local approach to cleavage fracture
Moinereau, D.; Rousselier, G.
1997-12-01
Electricite de France has conducted a large program including experiments on large-size cladded specimens and their interpretations to evaluate different methods of fracture analysis used in French safety studies regarding the risk of fast fracture in reactor pressure vessels. Four specimens made of ferritic steel A508 C13 with stainless steel cladding, containing small artificial underclad defects, have been tested in four-point bending. Experiments have been conducted at very low temperature, and crack instability by cleavage fracture without crack arrest was obtained in base metal. The tests have been interpreted using the local approach to cleavage fracture (Weibull model) by two-dimensional finite element computations. The Weibull model parameters have been determined using axisymmetrical notched specimens. The probability of failure has been evaluated in each test using finite element analyses with varying mesh sizes. The results show an important effect of the size of the elements at the crack tip on the calculated probability of failure. Those effects are confirmed using the model of a CT specimen. Also discussed is the shallow flaw effect with the Weibull model.
Scale-adaptive tensor algebra for local many-body methods of electronic structure theory
Liakh, Dmitry I
2014-01-01
While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).
Adaptive local complexity controlled data hiding method considering the human visual sensitivity
NASA Astrophysics Data System (ADS)
Ho, L.-H.; Lai, S.-L.; Chung, Y.-K.
2012-12-01
This paper proposes a human visual system based data hiding method that takes into account the local complexity in images. Since human vision is known to be more sensitive to changes in smooth areas than in complex areas, we embed less data into blocks with low complexity and more data into blocks with rich texture. We use modified diamond encoding (MDE) as the embedding technique, and employ a sophisticated pixel pair adjustment process to maintain the complexity consistency of blocks before and after embedding the data bits. Since the proposed method is robust to LSB-based steganalysis, it is more secure than other existing methods that use LSB replacement as their embedding technique. The experimental results reveal that the proposed method not only offers better embedding performance, but is also secure under attack by LSB-based steganalysis tools.
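The complexity-driven capacity allocation can be illustrated with a simple per-block rule; here the block standard deviation stands in for the paper's texture measure, and the thresholds and bit depths are invented for illustration (the MDE embedding step itself is not reproduced):

```python
import numpy as np

def block_capacities(img, block=8, thresholds=(5.0, 20.0), bits=(1, 2, 3)):
    """Assign an embedding capacity (bits per pixel) to each block of a
    grayscale image based on its local complexity.

    Complexity here is the block's standard deviation -- a simple
    stand-in for a texture measure: smooth blocks (low sigma) receive
    the smallest payload, textured blocks the largest. Thresholds and
    bit depths are illustrative values, not from the paper."""
    h, w = img.shape
    caps = np.zeros((h // block, w // block), dtype=int)
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            sigma = patch.std()
            if sigma < thresholds[0]:
                caps[i, j] = bits[0]
            elif sigma < thresholds[1]:
                caps[i, j] = bits[1]
            else:
                caps[i, j] = bits[2]
    return caps

# A flat (smooth) block next to a noisy (textured) block: the smooth
# block should get the minimum capacity, the noisy block the maximum.
rng = np.random.default_rng(0)
smooth = np.full((8, 8), 128.0)
noisy = rng.uniform(0, 255, (8, 8))
caps = block_capacities(np.hstack([smooth, noisy]))
```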
NASA Astrophysics Data System (ADS)
Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang
2016-05-01
In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared to the traditional sliding window (SW) technique, which suffers from the empirical predetermination of a fixed maximum window size and the outlier sensitivity of the least-squares (LS) linear regression method, the BSW based singularity mapping approach can automatically determine the optimal size of the largest window for each estimated position and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The use of the RLR method in the BSW approach smooths the distribution of singularity index values, with few, if any, of the highly fluctuating noise-like values that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomalies and known tin polymetallic deposits. The target areas within high tin geochemical anomalies probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly areas that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.
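The core log-log regression behind singularity estimation can be sketched as below, with a Theil-Sen median-of-slopes fit standing in for the paper's robust linear regression (RLR); the window sizes, the alpha = 1.7 test value, and the outlier are synthetic:

```python
import numpy as np

def theil_sen_slope(x, y):
    """Median of all pairwise slopes -- a robust alternative to least
    squares that is insensitive to a few outlying window averages."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.median(slopes)

def singularity_exponent(window_sizes, mean_conc, dim=2):
    """Estimate the local singularity exponent alpha from element
    concentrations C(r) averaged over windows of half-size r, using
    C(r) ~ r**(alpha - dim), i.e. log C = (alpha - dim) log r + const."""
    slope = theil_sen_slope(np.log(window_sizes), np.log(mean_conc))
    return slope + dim

# Synthetic window averages following alpha = 1.7 exactly, plus one
# contaminated window; the robust fit still recovers alpha.
r = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
c = r ** (1.7 - 2.0)
c[3] *= 5.0  # an outlier window average
alpha = singularity_exponent(r, c)
```

A least-squares fit on the same data would be pulled noticeably away from 1.7 by the single contaminated window, which is the motivation for a robust estimator in this setting.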
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi
2014-03-01
This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in CT images and then estimates the prior of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector to localize the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and backgrounds using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually input nine principal types of inner organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic
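The back-projection with a per-grid-point minimum can be sketched as follows; the amplitude-distance relation, station geometry, and grid are illustrative assumptions (the paper's empirical model is region-specific):

```python
import numpy as np

def min_pseudo_magnitude(stations, peak_vel, grid_x, grid_y, k=1.66):
    """Back-project peak ground velocities onto a grid of candidate
    epicenters and return the minimum pseudo-magnitude field.

    The relation M = log10(v) + k*log10(d) is a generic local-magnitude
    form standing in for the paper's empirical amplitude-distance model.
    Taking the per-grid-point MINIMUM over stations (instead of an
    L2/L1 misfit) makes the estimate insensitive to outlier stations;
    the maximum of the field marks the most likely epicenter."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    min_mag = np.full(gx.shape, np.inf)
    for (sx, sy), v in zip(stations, peak_vel):
        d = np.hypot(gx - sx, gy - sy) + 1e-6  # avoid log(0)
        pseudo = np.log10(v) + k * np.log10(d)
        min_mag = np.minimum(min_mag, pseudo)
    return min_mag

# Synthetic test: an event at (5, 5) with magnitude 3 recorded by a
# small network; the amplitudes follow the same relation exactly.
stations = [(0, 0), (0, 10), (10, 0), (10, 10), (2, 5)]
src, mag = np.array([5.0, 5.0]), 3.0
vel = [10 ** (mag - 1.66 * np.log10(np.hypot(*(np.array(s) - src))))
       for s in stations]
xs = ys = np.linspace(0, 10, 101)
field = min_pseudo_magnitude(stations, vel, xs, ys)
i, j = np.unravel_index(np.argmax(field), field.shape)
```

In this noise-free sketch the minimum pseudo-magnitude field peaks at the grid point nearest the true epicenter, mirroring the behavior described in the abstract; a locally disturbed station would only lower the field away from the source rather than shift the maximum.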
NASA Astrophysics Data System (ADS)
Wu, Jian-Ying; Cervera, Miguel
2015-09-01
This work investigates systematically traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the later the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions
Żurek-Biesiada, Dominika; Szczurek, Aleksander T; Prakash, Kirti; Mohana, Giriram K; Lee, Hyun-Keun; Roignant, Jean-Yves; Birk, Udo J; Dobrucki, Jurek W; Cremer, Christoph
2016-05-01
Higher order chromatin structure is not only required to compact and spatially arrange long chromatids within a nucleus, but also has important functional roles, including control of gene expression and DNA processing. However, studies of chromatin nanostructures cannot be performed using conventional widefield and confocal microscopy because of the limited optical resolution. Various methods of superresolution microscopy have been described to overcome this difficulty, such as structured illumination and single molecule localization microscopy. We report here that the standard DNA dye Vybrant(®) DyeCycle™ Violet can be used to provide single molecule localization microscopy (SMLM) images of DNA in nuclei of fixed mammalian cells. This SMLM method enabled optical isolation and localization of large numbers of DNA-bound molecules, usually in excess of 10(6) signals in one cell nucleus. The technique yielded high-quality images of nuclear DNA density, revealing subdiffraction chromatin structures of a size on the order of 100 nm; the interchromatin compartment was visualized at unprecedented optical resolution. The approach offers several advantages over previously described high resolution DNA imaging methods, including high specificity, the ability to record images using single-wavelength excitation, and a higher density of single molecule signals than reported in previous SMLM studies. The method is compatible with DNA/multicolor SMLM imaging, which employs simple staining methods suited also for conventional optical microscopy. PMID:26341267
NASA Technical Reports Server (NTRS)
Tian, Jialin; Madaras, Eric I.
2009-01-01
The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
Valuation of IT Courses--A Contingent Valuation Method Approach
ERIC Educational Resources Information Center
Liao, Chao-ning; Chiang, LiChun
2008-01-01
To help the civil servants in both central and local governments in Taiwan operating administrative works smoothly under a new digitalized system launched in 1994, a series of courses related to information technology were offered free to them annually by the central governments. However, due to the budget deficit in recent years, the government…
Sustainable Development Index in Hong Kong: Approach, Method and Findings
ERIC Educational Resources Information Center
Tso, Geoffrey K. F.; Yau, Kelvin K. W.; Yang, C. Y.
2011-01-01
Sustainable development is a priority area of research in many countries and regions nowadays. This paper illustrates how a multi-stakeholders engagement process can be applied to identify and prioritize the local community's concerns and issues regarding sustainable development in Hong Kong. Ten priority areas covering a wide range of community's…
NASA Astrophysics Data System (ADS)
DePrince, A. Eugene; Mazziotti, David A.
2010-01-01
The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.
Identification method of satellite local components based on combined feature metrics
NASA Astrophysics Data System (ADS)
Zhi, Xi-yang; Hou, Qing-yu; Zhang, Wei; Sun, Xuan
2014-11-01
In order to meet the requirements of identification of satellite local targets, a new method based on combined feature metrics is proposed. Firstly, the geometric features of satellite local targets, including the body, solar panel and antenna, are analyzed respectively, and the clusters for each component are constructed based on combined feature metrics from mathematical morphology. The corresponding fractal clustering criteria are then given. A cluster model is established, which determines the component classification according to a weighted combination of the fractal geometric features. On this basis, targets in a satellite image can be recognized by computing the matching probabilities between the identified targets and the clustered ones, which are weighted combinations of the component fractal feature metrics defined in the model. Moreover, the weights are iteratively selected through particle swarm optimization to improve recognition accuracy. Finally, the performance of the identification algorithm is analyzed and verified. Experimental results indicate that the algorithm is able to identify the satellite body, solar panel and antenna accurately, with an identification probability of up to 95%, and has high computing efficiency. The proposed method can be applied to identify on-orbit satellite local targets and has potential applications in spatial target detection and identification.
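The weighted combination of component feature metrics described above can be illustrated with a minimal sketch. This is not the authors' code: the feature vectors, cluster centers, and weights below are hypothetical stand-ins, and the paper's actual fractal/morphological metrics and PSO-tuned weights are not reproduced here.

```python
import numpy as np

def matching_score(features, cluster_center, weights):
    """Weighted similarity between a target's feature vector and a
    component cluster center; higher means a better match."""
    features = np.asarray(features, float)
    cluster_center = np.asarray(cluster_center, float)
    weights = np.asarray(weights, float)
    # Per-feature similarity in (0, 1]: 1 when the metric matches exactly.
    sim = 1.0 / (1.0 + np.abs(features - cluster_center))
    return float(np.dot(weights, sim) / weights.sum())

def classify(features, clusters, weights):
    """Assign the component class whose cluster center scores highest."""
    scores = {name: matching_score(features, c, weights)
              for name, c in clusters.items()}
    return max(scores, key=scores.get), scores

# Hypothetical per-class feature vectors (e.g. fractal dimension,
# elongation, branching measure) for the three satellite components.
clusters = {
    "body":        [1.9, 0.80, 0.3],
    "solar_panel": [1.2, 0.95, 0.1],
    "antenna":     [1.6, 0.40, 0.7],
}
weights = [0.5, 0.3, 0.2]   # in the paper these are tuned by PSO

label, scores = classify([1.25, 0.9, 0.12], clusters, weights)
print(label)
```

With these illustrative numbers the target's features sit closest to the solar-panel cluster, so that class wins the weighted score.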
A theory for protein dynamics: Global anisotropy and a normal mode approach to local complexity
NASA Astrophysics Data System (ADS)
Copperman, Jeremy; Romano, Pablo; Guenza, Marina
2014-03-01
We propose a novel Langevin equation description for the dynamics of biological macromolecules by projecting the solvent and all atomic degrees of freedom onto a set of coarse-grained sites at the single residue level. We utilize a multi-scale approach where molecular dynamics simulations are performed to obtain equilibrium structural correlations input to a modified Rouse-Zimm description which can be solved analytically. The normal mode solution provides a minimal basis set to account for important properties of biological polymers such as the anisotropic global structure, and internal motion on a complex free-energy surface. This multi-scale modeling method predicts the dynamics of both global rotational diffusion and constrained internal motion from the picosecond to the nanosecond regime, and is quantitative when compared to both simulation trajectories and NMR relaxation times. Utilizing non-equilibrium sampling techniques and an explicit treatment of the free-energy barriers in the mode coordinates, the model is extended to include biologically important fluctuations in the microsecond regime, such as bubble and fork formation in nucleic acids, and protein domain motion. This work was supported by the NSF under the Graduate STEM Fellows in K-12 Education (GK-12) program, grant DGE-0742540, and NSF grant DMR-0804145, with computational support from XSEDE and ACISS.
Mueller, Jenna L; Fu, Henry L; Mito, Jeffrey K; Whitley, Melodi J; Chitalia, Rhea; Erkanli, Alaattin; Dodd, Leslie; Cardona, Diana M; Geradts, Joseph; Willett, Rebecca M; Kirsch, David G; Ramanujam, Nimmi
2015-11-15
The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform to yield quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%. The utility of this approach was tested by imaging the in vivo tumor cavities from 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%. For comparison, if pathology was used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence. PMID:25994353
Ragazzi, M; Rada, E C
2012-10-01
In the sector of municipal solid waste management, the debate on the performance of conventional and novel thermochemical technologies is still relevant. When a plant must be constructed, decision makers often select a technology prior to analyzing the local environmental impact of the available options, as this type of study is generally developed after the design of the plant has been carried out. Additionally, the literature lacks comparative analyses of the contributions to local air pollution from different technologies. The present study offers a multi-step approach, based on pollutant emission factors and atmospheric dilution coefficients, for a local comparative analysis. With this approach it is possible to check whether some assumptions regarding the advantages of the novel thermochemical technologies, in terms of local direct impact on air quality, can be applied to municipal solid waste treatment. The selected processes concern combustion, gasification and pyrolysis, alone or in combination. The pollutants considered are both carcinogenic and non-carcinogenic. A case study is presented concerning the location of a plant in an alpine region and its contribution to the local air pollution. Results show that differences among technologies are less than expected. The performance of each technology is discussed in detail. PMID:22795304
A hierarchy of local coupled cluster singles and doubles response methods for ionization potentials
NASA Astrophysics Data System (ADS)
Wälz, Gero; Usvyat, Denis; Korona, Tatiana; Schütz, Martin
2016-02-01
We present a hierarchy of local coupled cluster (CC) linear response (LR) methods to calculate ionization potentials (IPs), i.e., excited states with one electron annihilated relative to a ground state reference. The time-dependent perturbation operator V(t), as well as the operators related to the first-order (with respect to V(t)) amplitudes and multipliers, thus are not number conserving and have half-integer particle rank m. Apart from calculating IPs of neutral molecules, the method offers also the possibility to study ground and excited states of neutral radicals as ionized states of closed-shell anions. It turns out that for comparable accuracy IPs require a higher-order treatment than excitation energies; an IP-CC LR method corresponding to CC2 LR or the algebraic diagrammatic construction scheme through second order performs rather poorly. We therefore systematically extended the order with respect to the fluctuation potential of the IP-CC2 LR Jacobian up to IP-CCSD LR, keeping the excitation space of the first-order (with respect to V(t)) cluster operator restricted to the m = 1/2 ⊕ 3/2 subspace and the accuracy of the zero-order (ground-state) amplitudes at the level of CC2 or MP2. For the more expensive diagrams beyond the IP-CC2 LR Jacobian, we employ local approximations. The implemented methods are capable of treating large molecular systems with a hundred atoms or more.
NASA Astrophysics Data System (ADS)
Huang, Bin; Wang, Ji; Du, Jianke; Guo, Yan; Ma, Tingfeng; Yi, Lijun
2016-06-01
The extended Kantorovich method is employed to study the local stress concentrations in the vicinity of free edges in symmetrically layered composite laminates subjected to uniaxial tensile load, using polynomial stress functions. The stress fields are initially assumed by means of the Lekhnitskii stress functions under the plane strain state. Applying the principle of complementary virtual work, coupled ordinary differential equations are obtained whose solutions follow from a generalized eigenvalue problem. An iterative procedure is then established to achieve convergent stress distributions. It should be noted that the stress-function-based extended Kantorovich method satisfies both the traction-free and free-edge stress boundary conditions during the iterative process. The stress components near the free edges and in the interior regions are calculated and compared with results obtained by the finite element method (FEM). The convergent stresses agree well with three-dimensional (3D) FEM results. For generality, various layup configurations are considered in the numerical analysis. The results show that the proposed polynomial-stress-function-based extended Kantorovich method is accurate in predicting the local stresses in composite laminates and computationally much more efficient than 3D FEM.
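The core solution step the abstract describes — the coupled ODEs reduce to a generalized eigenvalue problem A v = λ B v — can be sketched in a few lines. The 2×2 matrices below are hypothetical stand-ins, not derived from an actual laminate; the point is only the reduction to a standard eigenproblem.

```python
import numpy as np

def generalized_eig(A, B):
    """Solve A v = lambda B v by reducing to the standard problem
    (B^-1 A) v = lambda v; eigenpairs are returned sorted by real part."""
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)
    return vals[order], vecs[:, order]

# Hypothetical "stiffness-like" and "mass-like" matrices.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

vals, vecs = generalized_eig(A, B)

# Verify the defining relation A v = lambda B v for every pair.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * (B @ v))
print(np.round(vals.real, 4))
```

For these matrices the characteristic polynomial is 2λ² − 10λ + 11 = 0, so the two eigenvalues are (10 ± √12)/4. In a production setting one would typically call a dedicated generalized-eigenvalue routine instead of forming B⁻¹A explicitly.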
Damage localization in a glass fiber reinforced composite plate via the surface interpolation method
NASA Astrophysics Data System (ADS)
Limongelli, M. P.; Carvelli, V.
2015-07-01
This work deals with the application to composite plates of the surface interpolation method (SIM) for damage localization. The procedure, a generalization to the two-dimensional case of the previously published Interpolation Damage Detection Method (IDDM), locates reductions of stiffness in two-dimensional structures such as plates. The method is based on the damage sensitivity of the accuracy with which a spline function fits the operational displacement shapes, and relies on the so-called Gibbs phenomenon for splines. This phenomenon occurs when a spline function interpolates a discontinuous function and consists of sharp oscillations and overshoots (values higher than those of the function to be interpolated) near a discontinuity. The operational deformed shapes are recovered from frequency response functions (FRFs) measured at different locations of the structure during vibration. The accuracy of the spline interpolation is measured by an error function defined as the difference between the measured and interpolated operational deformed shapes. At a certain location, a statistically meaningful increase of the interpolation error with respect to a reference configuration points out a localized variation of the operational shapes, thus revealing the existence of damage. The accuracy of the surface interpolation method is experimentally assessed by impact hammer tests on progressively damaged glass fiber/vinylester composite plates and by finite element numerical modelling.
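The interpolation-error idea behind SIM/IDDM can be illustrated with a toy one-dimensional sketch. This is an assumption-laden simplification, not the paper's 2D implementation: a local leave-one-out cubic polynomial stands in for the spline fit, and the "damaged" shape is a smooth mode shape with an artificial kink at one sensor.

```python
import numpy as np

def interp_error(x, shape):
    """Leave-one-out interpolation error at each interior sensor: predict
    the value at sensor i from a cubic through its four neighbors, with
    sensor i itself excluded from the fit."""
    err = np.zeros_like(shape)
    for i in range(2, len(x) - 2):
        idx = [i - 2, i - 1, i + 1, i + 2]          # neighbors only
        coef = np.polyfit(x[idx], shape[idx], 3)    # exact cubic through 4 points
        err[i] = abs(shape[i] - np.polyval(coef, x[i]))
    return err

x = np.linspace(0.0, 1.0, 21)
reference = np.sin(np.pi * x)      # smooth "undamaged" operational shape
damaged = reference.copy()
damaged[10] += 0.05                # local kink mimicking a stiffness loss

# Damage index: increase of interpolation error over the reference state.
index = interp_error(x, damaged) - interp_error(x, reference)
print(int(np.argmax(index)))
```

The error increase peaks at the perturbed sensor (index 10 here), which is exactly how the method flags the damage location once the increase is statistically meaningful.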
Two methods for the study of vortex patch evolution on locally refined grids
Minion, M.L.
1994-05-01
Two numerical methods for the solution of the two-dimensional Euler equations for incompressible flow on locally refined grids are presented. The first is a second order projection method adapted from the method of Bell, Colella, and Glaz. The second method is based on the vorticity-stream function form of the Euler equations and is designed to be free-stream preserving and conservative. Second order accuracy of both methods in time and space is established, and they are shown to agree on problems with a localized vorticity distribution. The filamentation of a perturbed patch of circular vorticity and the merger of two smooth vortex patches are studied. It is speculated that for nearly stable patches of vorticity, an arbitrarily small amount of viscosity is sufficient to effectively eliminate vortex filaments from the evolving patch and that the filamentation process affects the evolution of such patches very little. Solutions of the vortex merger problem show that filamentation is responsible for the creation of large gradients in the vorticity which, in the presence of an arbitrarily small viscosity, will lead to vortex merger. It is speculated that a small viscosity in this problem does not substantially affect the transition of the flow to a statistical equilibrium solution. The main contributions of this thesis concern the formulation and implementation of a projection for refined grids. A careful analysis of the adjointness relation between gradient and divergence operators for a refined grid MAC projection is presented, and a uniformly accurate, approximately stable projection is developed. An efficient multigrid method which exactly solves the projection is developed, and a method for casting certain approximate projections as MAC projections on refined grids is presented.
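The gradient–divergence adjointness the thesis analyzes for the MAC projection can be checked numerically on a single uniform grid. The sketch below is an assumed minimal setting (one uniform cell-centered grid, homogeneous normal velocity on the boundary), not the refined-grid operators of the thesis, but it exhibits the discrete relation ⟨Gp, u⟩ = −⟨p, Du⟩ that a projection requires.

```python
import numpy as np

def grad(p, h):
    """MAC gradient: cell-centered p to x/y face values; boundary faces
    carry zero, consistent with u.n = 0 on the domain boundary."""
    n = p.shape[0]
    gx = np.zeros((n + 1, n))
    gy = np.zeros((n, n + 1))
    gx[1:-1, :] = (p[1:, :] - p[:-1, :]) / h
    gy[:, 1:-1] = (p[:, 1:] - p[:, :-1]) / h
    return gx, gy

def div(u, v, h):
    """MAC divergence: face velocities to cell centers."""
    return (u[1:, :] - u[:-1, :]) / h + (v[:, 1:] - v[:, :-1]) / h

n, h = 8, 1.0 / 8
rng = np.random.default_rng(0)
p = rng.standard_normal((n, n))
u = rng.standard_normal((n + 1, n)); u[0, :] = u[-1, :] = 0.0   # u.n = 0
v = rng.standard_normal((n, n + 1)); v[:, 0] = v[:, -1] = 0.0

gx, gy = grad(p, h)
lhs = (np.sum(gx * u) + np.sum(gy * v)) * h * h   # <G p, u>
rhs = -np.sum(p * div(u, v, h)) * h * h           # -<p, D u>
assert abs(lhs - rhs) < 1e-10                     # discrete adjointness, to roundoff
print("adjoint check passed")
```

On a locally refined grid the same identity must hold across coarse–fine interfaces, which is exactly what makes the careful adjointness analysis in the thesis nontrivial.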
Formation of Silicon-Gold Eutectic Bond Using Localized Heating Method
NASA Astrophysics Data System (ADS)
Lin, Liwei; Cheng, Yu-Ting; Najafi, Khalil
1998-11-01
A new bonding technique is proposed by using localized heating to supply the bonding energy. Heating is achieved by applying a dc current through micromachined heaters made of gold, which serves as both the heating and the bonding material. At the interface of silicon and gold, a eutectic bond forms in about 5 minutes. Assembly of two substrates in microfabrication processes can be achieved by using this method. In this paper the following important results are obtained: 1) Gold diffuses into silicon to form a strong eutectic bond by means of localized heating. 2) The bonding strength reaches the fracture toughness of the bulk silicon. 3) This bonding technique greatly simplifies device fabrication and assembly processes.
Causal-Path Local Time-Stepping in the discontinuous Galerkin method for Maxwell's equations
NASA Astrophysics Data System (ADS)
Angulo, L. D.; Alvarez, J.; Teixeira, F. L.; Pantoja, M. F.; Garcia, S. G.
2014-01-01
We introduce a novel local time-stepping technique for marching-in-time algorithms. The technique is denoted as Causal-Path Local Time-Stepping (CPLTS) and it is applied to two time integration techniques: fourth-order low-storage explicit Runge-Kutta (LSERK4) and second-order Leap-Frog (LF2). The CPLTS method is applied to evolve Maxwell's curl equations using a Discontinuous Galerkin (DG) scheme for the spatial discretization. Numerical results for LF2 and LSERK4 are compared with analytical solutions and with Montseny's LF2 technique. The results show that the CPLTS technique improves the dispersive and dissipative properties of the LF2-LTS scheme.
NASA Astrophysics Data System (ADS)
Li, W.
2015-08-01
China has over 271 million villages, fewer than ten years ago, when there were 363 million. New rural construction has indeed done some good for common villages, but it has also destroyed hundreds of thousands of traditional villages of great cultural, scientific and artistic value. In addition, traditional villages cannot meet the increasing need for more convenient and comfortable living conditions, and growing populations make construction in traditional villages hard to control. Against this background, traditional village protection must be established. This article puts forward a protection approach that makes use of landscape localization to pursue sustainable development and vernacular landscape protection. Tangyin Town was a famous trade center in history and has left much cultural heritage, especially historical buildings. Tangyin is taken as a case study to apply the localization method, which could guide other similar villages toward the same goals.
Noonan, Kathleen; Miller, Dorothy; Sell, Katherine; Rubin, David
2013-11-01
Through their purchasing power, government agencies can play a critical role in leveraging markets to create healthier foods. In the United States, state and local governments are implementing creative approaches to procuring healthier foods, moving beyond the traditional regulatory relationship between government and vendors. They are forging new partnerships between government, non-profits, and researchers to increase healthier purchasing. On the basis of case examples, this article proposes a pathway by which state and local government agencies can use the procurement cycle to improve healthy eating. PMID:23803713
Gradient and curvature from the photometric-stereo method, including local confidence estimation
Woodham, R.J.
1994-11-01
The photometric-stereo method is one technique for three-dimensional shape determination that has been implemented in a variety of experimental settings and that has produced consistently good results. The idea is to use intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. The resulting radiometric constraint makes it possible to obtain local estimates of both surface orientation and surface curvature without requiring either global smoothness assumptions or prior image segmentation. Photometric stereo is moved one step closer to practical possibility by a description of an experimental setting in which surface gradient estimation is achieved on full-frame video data at near-video-frame rates (i.e., 15 Hz). The implementation uses commercially available hardware. Reflectance is modeled empirically with measurements obtained from a calibration sphere. Estimation of the gradient (p, q) requires only simple table lookup. Curvature estimation additionally uses the reflectance map R(p, q). The required lookup table and reflectance maps are derived during calibration. Because reflectance is modeled empirically, no prior physical model of the reflectance characteristics of the objects to be analyzed is assumed. At the same time, if a good physical model is available, it can be retrofitted to the method for implementation purposes. Photometric stereo is subject to error in the presence of cast shadows and interreflection. No purely local technique can succeed because these phenomena are inherently nonlocal. Nevertheless, it is demonstrated that one can exploit the redundancy in three-light-source photometric stereo to detect locally, in most cases, the presence of cast shadows and interreflection. Detection is facilitated by the explicit inclusion of a local confidence estimate in the lookup table used for gradient estimation.
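The abstract's implementation recovers the gradient from an empirically calibrated lookup table. As a simpler stand-in (not the paper's method), the classical Lambertian three-light formulation shows the core idea: three intensities at one pixel determine an albedo-scaled normal through a 3×3 linear solve, from which the gradient (p, q) follows. Light directions, albedo, and the gradient sign convention below are assumptions.

```python
import numpy as np

# Three assumed light directions, one per image, normalized to unit length.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

def gradient_from_intensities(I, L):
    """Recover n = rho * N from I = L @ n (Lambertian model), then the
    surface gradient (p, q) under the convention N ~ (-p, -q, 1)."""
    n = np.linalg.solve(L, I)            # exact 3x3 solve for three lights
    rho = float(np.linalg.norm(n))       # albedo
    N = n / rho                          # unit surface normal
    p, q = -N[0] / N[2], -N[1] / N[2]
    return p, q, rho

# Synthesize intensities for a known normal, then recover it.
true_N = np.array([0.3, -0.2, 1.0])
true_N /= np.linalg.norm(true_N)
I = 0.8 * (L @ true_N)                   # albedo 0.8, Lambertian shading
p, q, rho = gradient_from_intensities(I, L)
print(round(p, 3), round(q, 3), round(rho, 3))
```

The paper's lookup-table approach replaces this analytic model with calibrated measurements, which is what lets it handle non-Lambertian reflectance; the redundancy exploited for shadow/interreflection detection comes from having three equations for a two-parameter gradient.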