Sample records for distance calculation method

  1. [A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].

    PubMed

    Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng

    2015-12-01

A distance metric is an important issue in spectroscopic survey data processing: it defines how the distance between two different spectra is calculated. Classification, clustering, parameter measurement, and outlier mining of spectral data are all built on such a metric, so the choice of distance measure directly affects their performance. With the development of large-scale stellar spectral sky surveys, defining a more efficient distance metric for stellar spectra has become an important issue in spectral data processing. Addressing this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance measure named the Residual Distribution Distance is proposed. Unlike traditional distance metrics for stellar spectra, this method first normalizes the two spectra to the same scale, then calculates the residual at each common wavelength, and uses the standard deviation of the residual spectrum as the distance. The measure can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and related tasks. This paper takes stellar subcategory classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the gap between different types of spectra more effectively than other methods, and can be applied in other related applications.
This paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method. The results show that the distance is affected by the SNR: the smaller the SNR, the greater its impact on the distance. When the SNR is larger than 10, it has little effect on classification performance.
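The core of the method can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the scaling step here is a least-squares fit of one spectrum onto the other, which the paper may implement differently, and both spectra are assumed to share one wavelength grid.

```python
import numpy as np

def residual_distribution_distance(spec_a, spec_b):
    """Sketch of a residual-distribution distance between two spectra.

    Both spectra are assumed to be sampled on the same wavelength grid.
    The scaling here is a least-squares fit of spec_b onto spec_a; the
    paper's exact normalization may differ.
    """
    a = np.asarray(spec_a, dtype=float)
    b = np.asarray(spec_b, dtype=float)
    # Scale b so that k * b best matches a in the least-squares sense.
    k = np.dot(a, b) / np.dot(b, b)
    residual = a - k * b
    # The standard deviation of the residual spectrum is the distance.
    return float(np.std(residual))
```

With this definition, two spectra that differ only by an overall scale factor have distance zero, which matches the intent of normalizing before comparing.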

  2. A Calculation Method of Electric Distance and Subarea Division Application Based on Transmission Impedance

    NASA Astrophysics Data System (ADS)

    Fang, G. J.; Bao, H.

    2017-12-01

The widely used method of calculating electric distances is the sensitivity method. The sensitivity matrix is the result of linearization and rests on the hypothesis that active and reactive power are decoupled, so it is inaccurate. In addition, it takes the ratio of two partial derivatives as the relationship between two dependent variables, which has no physical meaning. This paper presents a new method for calculating electrical distance, the transmission impedance method. It forms power supply paths based on power flow tracing, then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is S rather than Q: Q itself has no direction, and the grid delivers complex power, so S contains more electrical information than Q. By describing the power transmission relationship of each branch and drawing block diagrams in both the forward and reverse directions, it can be seen that the numerators of the feedback parts of the two block diagrams are the transmission impedances. To ensure the distance is a scalar, the absolute value of the transmission impedance is defined as the electrical distance. Dividing the network according to these electrical distances and comparing with the results of the sensitivity method shows that the transmission impedance method adapts better to dynamic changes of the system and reaches a reasonable subarea division scheme.

  3. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

Background Computational genomics of Alzheimer's disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation on AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (both standard and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features of the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or ACO methods on AD microarray data. Conclusion Compared with the Pearson distance and the Euclidean distance, the squared Euclidean distance generated the best-quality gene order computed by the GA and ACO methods. PMID:23369541
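For reference, the three distance formulas compared in the study have standard forms, sketched below. The Pearson distance is written here as 1 − r, a common convention that the paper may vary:

```python
import numpy as np

def euclidean(x, y):
    """Euclidean distance between two expression profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def squared_euclidean(x, y):
    """Squared Euclidean distance: penalizes large deviations more."""
    return euclidean(x, y) ** 2

def pearson_distance(x, y):
    """1 - Pearson correlation: 0 for perfectly correlated profiles."""
    r = np.corrcoef(x, y)[0, 1]
    return float(1.0 - r)
```

Note that the Pearson distance ignores scale and offset (two proportional profiles have distance 0), while the Euclidean variants do not, which is one reason the choice of formula changes the computed gene order.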

  4. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, Melvin D.

    1994-01-01

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
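The calibration idea reduces to simple arithmetic: two readings taken a precisely known vertical distance apart fix the pressure-to-depth scale. The function names and units below are illustrative assumptions, not taken from the patent.

```python
def calibration_constant(p1, p2, spacer_length):
    """Calibration constant relating a pressure reading to fluid depth.

    p1, p2: transducer readings taken a precisely known vertical
    distance (spacer_length) apart in the same fluid. The interface
    and units are illustrative; the patent does not prescribe them.
    """
    return spacer_length / (p2 - p1)

def depth_from_pressure(p, p_surface, c):
    """Depth below the fluid surface from a single later reading."""
    return c * (p - p_surface)
```

For example, if lowering the transducer by a 1.0 m spacer raises the reading from 10 to 20 units, the constant is 0.1 m/unit, and any subsequent single reading converts directly to depth.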

  5. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, M.D.

    1994-01-11

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.

  6. Optical Distance Measurement Device And Method Thereof

    DOEpatents

    Bowers, Mark W.

    2004-06-15

    A system and method of efficiently obtaining distance measurements of a target by scanning the target. An optical beam is provided by a light source and modulated by a frequency source. The modulated optical beam is transmitted to an acousto-optical deflector capable of changing the angle of the optical beam in a predetermined manner to produce an output for scanning the target. In operation, reflected or diffused light from the target may be received by a detector and transmitted to a controller configured to calculate the distance to the target as well as the measurement uncertainty in calculating the distance to the target.

  7. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To avoid the heavy computational burden that calculating DistCM brings to some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measures that can only capture global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
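A minimal sketch of the model-to-cloud distance and the similarity ratio, assuming a brute-force nearest-neighbour search and uniform weights by default; the paper's sampling and weighting scheme is more elaborate.

```python
import numpy as np

def dist_model_to_cloud(model_points, cloud, weights=None):
    """Weighted mean of nearest-neighbour distances from points sampled
    on the model surface to the point cloud (a sketch of DistMC)."""
    model_points = np.asarray(model_points, float)
    cloud = np.asarray(cloud, float)
    # Pairwise distances, then the nearest cloud point for each sample.
    d = np.sqrt(((model_points[:, None, :] - cloud[None, :, :]) ** 2).sum(-1))
    nearest = d.min(axis=1)
    if weights is None:
        weights = np.ones(len(model_points))
    weights = np.asarray(weights, float)
    return float((weights * nearest).sum() / weights.sum())

def sim_mc(weighted_surface_area, dist_mc):
    """Similarity as the ratio of weighted surface area to DistMC."""
    return weighted_surface_area / dist_mc
```

The weights argument is where a distance-weighted strategy for partial similarity would plug in: samples in the region of interest get larger weights, so unmatched far-away geometry contributes less.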

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J; Gu, X; Lu, W

Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on a majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE estimates the DE error from the surface distance differences between the candidate and the estimated ground truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated against the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV expansion voxel distance, lower MADs than those of OS were observed at the closer distances (1 to 8). The DE results showed that segmentation with WS produced more accurate results than OS: the mean DE errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy over OS.
WS provides a dosimetrically important label selection strategy for multi-atlas segmentation. CPRIT grant RP150485.

  9. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key processing algorithms for the point cloud data, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 is selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.

  10. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The optimum source location is obtained by using multiple direction finders (DFs) on a spherical surface. An improvement of this method, which treats the source-DF distances as constant, is presented. It is pointed out that using signal strength as a weight factor is not ideal, because of the inexact inverse relation between signal strength and distance and the inaccuracy of the signal amplitude. An iterative calculation method is presented that instead uses the distance from the source to each DF as the weight factor. This improved method has higher accuracy and requires only a little more calculation time. Computer simulations for a 4-DF system are presented to show the improvement in location accuracy achieved by the iterative method.

  11. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

Monte Carlo methods are used to evaluate data-processing methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the path length generated at emission. Here, the authors propose a new method that omits calculating the exit point of the photon from each voxel and the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that of the voxel from which the photon was emitted, the authors calculate the location of the entry point into that voxel, and the path length is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show calculation-time ratios of 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, confirming the effectiveness of the algorithm.

  12. High Performance Automatic Character Skinning Based on Projection Distance

    NASA Astrophysics Data System (ADS)

    Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui

    2018-03-01

Skeleton-driven deformation methods have been commonly used in character deformation. Painting skin weights for character deformation is a long-winded task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and its corresponding skeleton. The method first groups each mesh vertex of the 3D human model to a skeleton bone by the minimum distance from the vertex to each bone. Secondly, it calculates each vertex's weights to the adjacent bones from the distances of the vertex's projection point to the bone joints. Our method's output can be applied not only to any kind of skeleton-driven deformation but also to motion-capture-driven (mocap-driven) deformation. Experimental results show that our method has strong generality and robustness as well as high performance.

  13. Generalising Ward's Method for Use with Manhattan Distances.

    PubMed

    Strauss, Trudie; von Maltitz, Michael Johan

    2017-01-01

The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan, distance. We argue that this generalisation of Ward's linkage method is theoretically sound and provide an example where it outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages, and use a statistical language signature based on relative bigram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean distance and the Manhattan distance. Results obtained with the different distance metrics are compared to show that Ward's algorithm's characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
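The language-signature step can be illustrated as follows. This is a sketch under simplifying assumptions (lowercase ASCII alphabet, no word-boundary handling), not the authors' preprocessing; the generalised Ward linkage itself is beyond this fragment.

```python
from itertools import product
from collections import Counter

def bigram_signature(text, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Relative bigram frequencies as a fixed-length vector.

    A sketch of the language-signature idea; the paper's exact
    preprocessing (casing, diacritics, word boundaries) may differ.
    """
    text = "".join(ch for ch in text.lower() if ch in alphabet)
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(counts.values()) or 1
    return [counts[a + b] / total for a, b in product(alphabet, repeat=2)]

def manhattan(u, v):
    """l1 (Manhattan) distance between two signature vectors."""
    return sum(abs(x - y) for x, y in zip(u, v))
```

Signatures computed this way for each language text give the pairwise distance matrix that the clustering step consumes.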

  14. Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method

    NASA Astrophysics Data System (ADS)

    Yuanyue, Yang; Huimin, Li

    2018-02-01

Large investment, long routes, and numerous change orders are the main causes of cost overruns in long-distance water diversion projects. Building on existing research, this paper constructs a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of every risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method can evaluate risks accurately and with high reliability. Case calculation and verification show that it can provide valid cost-overrun decision-making information to construction companies.

  15. Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.

    PubMed

    Steel, Ruth Irene

    2015-01-01

    Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
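The weighting step of the MTP method amounts to a proportional average. The interface below (a list of path-length and group-size pairs) is an illustrative assumption, not the protocol from the paper.

```python
def mtp_distance(paths):
    """Daily travel distance from multiple travel paths (MTP sketch).

    paths: list of (path_length, n_individuals) pairs. Each path's
    length is weighted by the proportion of the group that used it,
    and the weighted lengths are summed.
    """
    total_individuals = sum(n for _, n in paths)
    return sum(length * n / total_individuals for length, n in paths)
```

When the whole group uses one path this reduces to that path's length, and when subgroups split over paths of different lengths the estimate falls between them rather than collapsing to a possibly unused central path.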

  16. Three-dimensional modeling and animation of two carpal bones: a technique.

    PubMed

    Green, Jason K; Werner, Frederick W; Wang, Haoyu; Weiner, Marsha M; Sacks, Jonathan M; Short, Walter H

    2004-05-01

The objectives of this study were to (a) create 3D reconstructions of two carpal bones from single CT data sets and animate these bones with experimental in vitro motion data collected during dynamic loading of the wrist joint, (b) develop a technique to calculate the minimum interbone distance between the two carpal bones, and (c) validate the interbone distance calculation process. This method utilized commercial software to create the animations and an in-house program to interface with three-dimensional CAD software to calculate the minimum distance between the irregular geometries of the bones. This interbone minimum distance provides quantitative information regarding the motion of the bones studied and may help to understand and quantify the effects of ligamentous injury.

  17. Anaerobic work calculated in cycling time trials of different length.

    PubMed

    Mulder, Roy C; Noordhof, Dionne A; Malterer, Katherine R; Foster, Carl; de Koning, Jos J

    2015-03-01

Previous research showed that gross efficiency (GE) declines during exercise and therefore influences the expenditure of anaerobic and aerobic resources. The purpose of this study was to calculate the anaerobic work produced during cycling time trials of different lengths, with and without a GE correction. Anaerobic work was calculated in 18 trained competitive cyclists during 4 time trials (500, 1000, 2000, and 4000 m). Two additional time trials (1000 and 4000 m), stopped at 50% of the corresponding "full" time trial, were performed to study the rate of decline in GE. Correcting for a declining GE during time-trial exercise resulted in a significant (P<.001) increase in anaerobically attributable work of 30%, with a 95% confidence interval of [25%, 36%]. A significant interaction effect between calculation method (constant GE, declining GE) and distance (500, 1000, 2000, 4000 m) was found (P<.001). Further analysis revealed that the constant-GE method differed from the declining-GE method at all distances, and that anaerobic work calculated assuming a constant GE did not yield equal values across time-trial distances (P<.001), whereas correcting for a declining GE resulted in a constant value for anaerobically attributable work (P=.18). Anaerobic work calculated during short time trials (<4000 m) with a correction for declining GE is increased by 30% [25%, 36%] and may represent anaerobic energy contributions during high-intensity exercise better than anaerobic work calculated assuming a constant GE.

  18. Walking tree heuristics for biological string alignment, gene location, and phylogenies

    NASA Astrophysics Data System (ADS)

    Cull, P.; Holloway, J. L.; Cavener, J. D.

    1999-03-01

Basic biological information is stored in strings of nucleic acids (DNA, RNA) or amino acids (proteins). Teasing out the meaning of these strings is a central problem of modern biology. Matching and aligning strings brings out their shared characteristics. Although string matching is well understood in the edit-distance model, biological strings with transpositions and inversions violate this model's assumptions. We propose a family of heuristics called walking trees to align biologically reasonable strings. Both edit-distance and walking tree methods can locate specific genes within a large string when the genes' sequences are given. When we attempt to match whole strings, the walking tree matches most genes, while the edit-distance method fails. We also give examples in which the walking tree matches substrings even if they have been moved or inverted; the edit-distance method was not designed to handle these problems. We include an example in which the walking tree "discovered" a gene. Calculating scores for whole-genome matches gives a method for approximating evolutionary distance. We show two evolutionary trees for the picornaviruses computed by the walking tree heuristic; both show great similarity to previously constructed trees. The point of this demonstration is that WHOLE genomes can be matched and distances calculated. The first tree was created on a Sequent parallel computer and demonstrates that the walking tree heuristic can be efficiently parallelized. The second tree was created using a network of workstations and demonstrates that there is sufficient parallelism in the phylogenetic tree calculation for the sequential walking tree to be used effectively on a network.

  19. Precise Distances for Main-belt Asteroids in Only Two Nights

    NASA Astrophysics Data System (ADS)

    Heinze, Aren N.; Metchev, Stanimir

    2015-10-01

    We present a method for calculating precise distances to asteroids using only two nights of data from a single location—far too little for an orbit—by exploiting the angular reflex motion of the asteroids due to Earth’s axial rotation. We refer to this as the rotational reflex velocity method. While the concept is simple and well-known, it has not been previously exploited for surveys of main belt asteroids (MBAs). We offer a mathematical development, estimates of the errors of the approximation, and a demonstration using a sample of 197 asteroids observed for two nights with a small, 0.9-m telescope. This demonstration used digital tracking to enhance detection sensitivity for faint asteroids, but our distance determination works with any detection method. Forty-eight asteroids in our sample had known orbits prior to our observations, and for these we demonstrate a mean fractional error of only 1.6% between the distances we calculate and those given in ephemerides from the Minor Planet Center. In contrast to our two-night results, distance determination by fitting approximate orbits requires observations spanning 7-10 nights. Once an asteroid’s distance is known, its absolute magnitude and size (given a statistically estimated albedo) may immediately be calculated. Our method will therefore greatly enhance the efficiency with which 4m and larger telescopes can probe the size distribution of small (e.g., 100 m) MBAs. This distribution remains poorly known, yet encodes information about the collisional evolution of the asteroid belt—and hence the history of the Solar System.
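The geometric core of the method can be sketched as a one-line relation: the distance is the observer's rotation-induced velocity component divided by the angular reflex rate it induces. The sketch below ignores the asteroid's own motion and the projection geometry the paper treats carefully, so it is a didactic simplification only, with illustrative units.

```python
import math

AU_KM = 149597870.7  # kilometres per astronomical unit

def reflex_distance(v_perp_km_s, reflex_rate_arcsec_per_hour):
    """Distance from reflex angular rate (simplified sketch).

    v_perp_km_s: observer's velocity component perpendicular to the
    line of sight, due to Earth's axial rotation (km/s).
    reflex_rate_arcsec_per_hour: the apparent angular rate this motion
    induces on the asteroid. Returns distance in AU. Real use must
    first subtract the asteroid's own on-sky motion.
    """
    # arcsec/hour -> radians/second
    rate_rad_s = reflex_rate_arcsec_per_hour / 3600.0 * math.pi / (180 * 3600)
    # small-angle parallax: distance = transverse velocity / angular rate
    return v_perp_km_s / rate_rad_s / AU_KM
```

At Earth's equatorial rotation speed (~0.46 km/s), a reflex rate of a couple of arcseconds per hour corresponds to roughly 1 AU, which is the scale of main-belt geocentric distances near opposition.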

  20. Evanescent field characteristics of eccentric core optical fiber for distributed sensing.

    PubMed

    Liu, Jianxia; Yuan, Libo

    2014-03-01

    Fundamental core-mode cutoff and evanescent field are considered for an eccentric core optical fiber (ECOF). A method has been proposed to calculate the core-mode cutoff by solving the eigenvalue equations of an ECOF. Using conformal mapping, the asymmetric geometrical structure can be transformed into a simple, easily solved axisymmetric optical fiber with three layers. The variation of the fundamental core-mode cut-off frequency (V(c)) is also calculated with different eccentric distances, wavelengths, core radii, and coating refractive indices. The fractional power of evanescent fields for ECOF is also calculated with the eccentric distances and coating refractive indices. These calculations are necessary to design the structural parameters of an ECOF for long-distance, single-mode distributed evanescent field absorption sensors.

  1. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…

  2. Measuring Distance of Fuzzy Numbers by Trapezoidal Fuzzy Numbers

    NASA Astrophysics Data System (ADS)

    Hajjari, Tayebeh

    2010-11-01

Fuzzy numbers, and more generally linguistic values, are approximate assessments given by experts and accepted by decision-makers when obtaining a more accurate value is impossible or unnecessary. The distance between two fuzzy numbers plays an important role in linguistic decision-making, and it is reasonable to define a fuzzy distance between fuzzy objects. To achieve this aim, the researcher presents a new distance measure for fuzzy numbers by means of an improved centroid distance method. The metric properties are also studied. The advantage is that the calculation of the proposed method is far simpler than in previous approaches.
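One simple centroid-based distance, computed numerically, is sketched below; the paper's improved centroid distance method presumably refines this, so treat the code as a baseline illustration only.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Membership of the trapezoidal fuzzy number (a, b, c, d).

    Assumes a < b <= c < d: ramps up on [a, b], is 1 on [b, c],
    and ramps down on [c, d].
    """
    x = np.asarray(x, float)
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def centroid(a, b, c, d, n=10001):
    """Centre of gravity of the membership function (numerical)."""
    x = np.linspace(a, d, n)
    mu = trapezoid_membership(x, a, b, c, d)
    return float((x * mu).sum() / mu.sum())

def centroid_distance(t1, t2):
    """Distance between two trapezoidal fuzzy numbers as the absolute
    difference of their centroids (one basic centroid-based distance)."""
    return abs(centroid(*t1) - centroid(*t2))
```

For symmetric trapezoids the centroid is simply the midpoint, so shifting a fuzzy number by one unit yields a centroid distance of one, which matches the crisp intuition.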

  3. Authenticating concealed private data while maintaining concealment

    DOEpatents

    Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM

    2007-06-26

A method of and system for authenticating concealed and statistically varying multi-dimensional data, comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of the item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements, wherein the calculated Euclidean distance metric is identical to the Euclidean distance metric between the measurements prior to transformation.
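One family of transformations with exactly this property is the orthogonal transforms: they conceal the raw measurement while preserving Euclidean distances. The patent does not specify its transformation, so the sketch below is illustrative only.

```python
import numpy as np

def random_orthogonal(n, seed=0):
    """A random n x n orthogonal matrix via QR decomposition.

    Orthogonal maps preserve Euclidean distances exactly, so templates
    can be compared without revealing the underlying measurements.
    This is one example; the patent's transformation is not specified.
    """
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q
```

Because ||Qu - Qv|| = ||u - v|| for any orthogonal Q, matching against the transformed reference template yields the same distance score as matching the raw measurements would.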

  4. The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule

    NASA Astrophysics Data System (ADS)

    Eskandari, M. R.; Faghihi, F.; Mahdavi, M.

The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at a typically muonic distance and the second at the atomic distance. It is shown that at the muonic distance the effective charge, zeff, is 2.9. We assume a symmetric planar vibrational model between the two minima, and an oscillation potential energy is approximated in this region.

  5. Implementation of hierarchical clustering using k-mer sparse matrix to analyze MERS-CoV genetic relationship

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Ulul, E. D.; Hura, H. F. A.; Siswantining, T.

    2017-07-01

Hierarchical clustering is one of the effective methods for creating a phylogenetic tree based on the distance matrix between DNA (deoxyribonucleic acid) sequences. One of the well-known methods to calculate the distance matrix is the k-mer method, which is generally more efficient than some other distance matrix calculation techniques. The k-mer method starts by creating a k-mer sparse matrix, followed by creating k-mer singular value vectors; the last step is computing the distances among the vectors. In this paper, we analyze MERS-CoV (Middle East Respiratory Syndrome Coronavirus) DNA sequences by implementing hierarchical clustering using a k-mer sparse matrix in order to perform phylogenetic analysis. Our results show that the ancestor of our MERS-CoV samples comes from Egypt. Moreover, we found that a MERS-CoV infection occurring in one country may not necessarily come from the same country of origin, suggesting that the process of MERS-CoV mutation may not be influenced by geographical factors alone.
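The k-mer steps described above can be illustrated with dense vectors for brevity; the paper uses a sparse matrix and singular value vectors, both omitted here, so this is a simplified sketch of the distance matrix stage only.

```python
from collections import Counter
from itertools import product
import math

def kmer_vector(seq, k=2, alphabet="ACGT"):
    """Normalised k-mer counts of a DNA sequence.

    Dense here for brevity; for large k the vector is mostly zeros,
    which is why the paper stores it as a sparse matrix.
    """
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return [counts["".join(p)] / total for p in product(alphabet, repeat=k)]

def euclidean(u, v):
    """Euclidean distance between two k-mer frequency vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def distance_matrix(seqs, k=2):
    """Pairwise distance matrix ready for hierarchical clustering."""
    vecs = [kmer_vector(s, k) for s in seqs]
    return [[euclidean(a, b) for b in vecs] for a in vecs]
```

The resulting symmetric matrix is exactly the input that an agglomerative clustering routine consumes to build the phylogenetic tree.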

  6. Agreement between methods of measurement of mean aortic wall thickness by MRI.

    PubMed

    Rosero, Eric B; Peshock, Ronald M; Khera, Amit; Clagett, G Patrick; Lo, Hao; Timaran, Carlos

    2009-03-01

    To assess the agreement between three methods of calculation of mean aortic wall thickness (MAWT) using magnetic resonance imaging (MRI). High-resolution MRI of the infrarenal abdominal aorta was performed on 70 subjects with a history of coronary artery disease who were part of a multi-ethnic population-based sample. MAWT was calculated as the mean distance between the adventitial and luminal aortic boundaries using three different methods: average distance at four standard positions (AWT-4P), average distance at 100 automated positions (AWT-100P), and using a mathematical computation derived from the total vessel and luminal areas (AWT-VA). Bland-Altman plots and Passing-Bablok regression analyses were used to assess agreement between methods. Bland-Altman analyses demonstrated a positive bias of 3.02+/-7.31% between the AWT-VA and the AWT-4P methods, and of 1.76+/-6.82% between the AWT-100P and the AWT-4P methods. Passing-Bablok regression analyses demonstrated constant bias between the AWT-4P method and the other two methods. Proportional bias was, however, not evident among the three methods. MRI methods of measurement of MAWT using a limited number of positions of the aortic wall systematically underestimate the MAWT value compared with the method that calculates MAWT from the vessel areas. Copyright (c) 2009 Wiley-Liss, Inc.

  7. Relationship of the actual thick intraocular lens optic to the thin lens equivalent.

    PubMed

    Holladay, J T; Maverick, K J

    1998-09-01

    To theoretically derive and empirically validate the relationship between the actual thick intraocular lens and the thin lens equivalent. Included in the study were 12 consecutive adult patients ranging in age from 54 to 84 years (mean +/- SD, 73.5 +/- 9.4 years) with best-corrected visual acuity better than 20/40 in each eye. Each patient had bilateral intraocular lens implants of the same style, placed in the same location (bag or sulcus) by the same surgeon. Preoperatively, axial length, keratometry, refraction, and vertex distance were measured. Postoperatively, keratometry, refraction, vertex distance, and the distance from the vertex of the cornea to the anterior vertex of the intraocular lens (AV(PC1)) were measured. Alternatively, the distance (AV(PC1)) was then back-calculated from the vergence formula used for intraocular lens power calculations. The average (+/-SD) of the absolute difference in the two methods was 0.23 +/- 0.18 mm, which would translate to approximately 0.46 diopters. There was no statistical difference between the measured and calculated values; the Pearson product-moment correlation coefficient from linear regression was 0.85 (r2 = .72, F = 56). The average intereye difference was -0.030 mm (SD, 0.141 mm; SEM, 0.043 mm) using the measurement method and +0.124 mm (SD, 0.412 mm; SEM, 0.124 mm) using the calculation method. The relationship between the actual thick intraocular lens and the thin lens equivalent has been determined theoretically and demonstrated empirically. This validation provides the manufacturer and surgeon additional confidence and utility for lens constants used in intraocular lens power calculations.

  8. Evaluation of jamming efficiency for the protection of a single ground object

    NASA Astrophysics Data System (ADS)

    Matuszewski, Jan

    2018-04-01

    Electronic countermeasures (ECM) include methods to completely prevent or restrict the effective use of the electromagnetic spectrum by the opponent. The most widespread means of disorganizing the operation of electronic devices is to create active and passive radio-electronic jamming. The paper presents the way jamming efficiency is calculated for protecting ground objects against radars mounted on airborne platforms. The basic mathematical formulas for calculating the efficiency of active radar jamming are presented. The numerical calculations for ground object protection are made for two different electronic warfare scenarios: the jammer is placed very close to, and at a determined distance from, the protected object. The results of these calculations are presented in figures showing the minimal distance of effective jamming. The realization of effective radar jamming in electronic warfare systems depends mainly on precise knowledge of the radar's and the jammer's technical parameters, the distance between them, the assumed value of the degradation coefficient, the conditions of electromagnetic energy propagation, and the applied jamming method. The conclusions from these calculations facilitate deciding how jamming should be conducted to achieve high efficiency during electronic warfare training.

  9. Phylo_dCor: distance correlation as a novel metric for phylogenetic profiling.

    PubMed

    Sferra, Gabriella; Fratini, Federica; Ponzi, Marta; Pizzi, Elisabetta

    2017-09-05

    The elaboration of powerful methods to predict functional and/or physical protein-protein interactions from genome sequence is one of the main tasks of the post-genomic era. Phylogenetic profiling allows the prediction of protein-protein interactions at a whole genome level in both Prokaryotes and Eukaryotes, and for this reason it is considered one of the most promising methods. Here, we propose an improvement of phylogenetic profiling that enables the handling of large genomic datasets and the inference of global protein-protein interactions. This method uses the distance correlation as a new measure of phylogenetic profile similarity. We constructed robust reference sets and developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation that makes it applicable to large genomic data. Using Saccharomyces cerevisiae and Escherichia coli genome datasets, we showed that Phylo-dCor outperforms previously described phylogenetic profiling methods based on mutual information and Pearson's correlation as measures of profile similarity. In this work, we constructed and assessed robust reference sets and propose the distance correlation as a measure for comparing phylogenetic profiles. To make it applicable to large genomic data, we developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation. Two R scripts that can be run on a wide range of machines are available upon request.
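
    The sample distance correlation itself has a compact definition (Székely et al.): double-center the pairwise distance matrices of the two profiles and correlate them. A sketch for one-dimensional profiles (a generic implementation, not the authors' parallelized Phylo-dCor code):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distances of x
    b = np.abs(y[:, None] - y[None, :])          # pairwise distances of y
    # Double centering: subtract row and column means, add grand mean.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

x = np.arange(10.0)
# A perfect linear relationship gives distance correlation 1.
assert np.isclose(distance_correlation(x, 3 * x + 1), 1.0)
```

Unlike Pearson's correlation, this quantity is zero only for independent variables, which is what makes it attractive for profile comparison.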

  10. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
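
    The calibration being corrected rests on the two-spin approximation, in which the NOE volume scales as r^-6, so a reference pair of known distance calibrates the rest. A minimal sketch (illustrative numbers, not ARIA's actual calibration code, which additionally applies the spin diffusion correction factor described above):

```python
# Two-spin approximation: NOE volume V is proportional to r**-6, so
#   r = r_ref * (V_ref / V) ** (1/6).
# Spin diffusion inflates V, which makes these distances come out too short.
def noe_distance(volume, ref_volume, ref_distance):
    """Target distance from an NOE volume and a calibrated reference pair."""
    return ref_distance * (ref_volume / volume) ** (1.0 / 6.0)

# Hypothetical reference pair at 2.5 angstroms with unit volume.
assert abs(noe_distance(1.0, 1.0, 2.5) - 2.5) < 1e-12
# A larger observed volume yields a shorter apparent distance.
assert noe_distance(2.0, 1.0, 2.5) < 2.5
```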

  11. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
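
    Once the distance matrix has been completed, coordinates can be recovered by classical multidimensional scaling, the standard eigendecomposition step for Euclidean distance matrices. A sketch of that final step (not the authors' accelerated proximal gradient completion code):

```python
import numpy as np

def coords_from_distances(D, dim=3):
    """Classical MDS: recover coordinates (up to rotation/translation)
    from a complete Euclidean distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered coords
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Round-trip check on random 3-D "atom" positions.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Xr = coords_from_distances(D)
Dr = np.linalg.norm(Xr[:, None] - Xr[None, :], axis=-1)
assert np.allclose(D, Dr, atol=1e-6)
```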

  12. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
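
    The free-space reference that such pathloss calculations are judged against is the standard Friis formula; a small sketch (textbook formula, not the paper's coarse-grid FDTD/TLM scheme):

```python
import math

def free_space_pathloss_db(distance_m, freq_hz):
    """Friis free-space pathloss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# Doubling the distance adds 20*log10(2) (about 6 dB) of pathloss.
p1 = free_space_pathloss_db(100.0, 2.4e9)
p2 = free_space_pathloss_db(200.0, 2.4e9)
assert abs((p2 - p1) - 20.0 * math.log10(2.0)) < 1e-9
```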

  13. Analysis of Optimal Transport Route Determination of Oil Palm Fresh Fruit Bunches from Plantation to Processing Factory

    NASA Astrophysics Data System (ADS)

    Tarigan, U.; Sidabutar, R. F.; Tarigan, U. P. P.; Chen, A.

    2018-04-01

    Manufacturers in this business produce CPO and kernels from oil palm fresh fruit bunches (FFB) taken from their own plantations, and they generally face transportation problems between plantation and factory: the distance traveled by the FFB trucks often varies because transport instructions are not specific. This research was conducted to determine the optimal transportation route in terms of distance, time, and number of routes. The route determination is solved using the Nearest Neighbours and Clarke & Wright Savings methods. Based on the calculations, in harvest area I the Nearest Neighbours method yields a distance of 200.78 km, while Clarke & Wright Savings yields 214.09 km. For harvest area II, the Nearest Neighbours method gives 264.37 km and the Clarke & Wright Savings method a total distance of 264.33 km. By comparing the time needed for all FFB transport activities with the drivers' working time, the number of conveyances was reduced from 8 units to 5 units. There is also a fuel efficiency improvement of 0.8%.
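
    The Nearest Neighbours construction used here is a simple greedy heuristic: from the current stop, always visit the closest unvisited point next. A sketch on toy coordinates (illustrative points, not the plantation data):

```python
import math

def nearest_neighbour_route(points, depot=0):
    """Greedy route: repeatedly visit the closest unvisited point, then return."""
    unvisited = set(range(len(points))) - {depot}
    route, current = [depot], depot
    while unvisited:
        current = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        route.append(current)
        unvisited.discard(current)
    route.append(depot)   # close the tour back at the depot/mill
    return route

def route_length(points, route):
    return sum(math.dist(points[a], points[b]) for a, b in zip(route, route[1:]))

pts = [(0, 0), (1, 0), (2, 0), (0, 2)]
r = nearest_neighbour_route(pts)
assert r == [0, 1, 2, 3, 0]
```

Clarke & Wright Savings instead merges routes by the savings s(i, j) = d(depot, i) + d(depot, j) - d(i, j); which heuristic wins depends on the geometry, as the two harvest areas above illustrate.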

  14. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.

  15. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
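
    A brute-force distance field on a small 2D grid illustrates what the parallel distance tree computes at scale; this naive version has exactly the O(cells x points) cost that the paper's data structure is designed to avoid:

```python
import numpy as np

def distance_field(shape, surface_points):
    """Euclidean distance from every grid cell to the nearest surface point
    (naive version: one pass over the grid per surface point)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    field = np.full(shape, np.inf)
    for (py, px) in surface_points:
        field = np.minimum(field, np.hypot(ys - py, xs - px))
    return field

f = distance_field((5, 5), [(2, 2)])
assert f[2, 2] == 0.0
assert np.isclose(f[0, 0], np.hypot(2, 2))
```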

  16. The extraction of drug-disease correlations based on module distance in incomplete human interactome.

    PubMed

    Yu, Liang; Wang, Bingbo; Ma, Xiaoke; Gao, Lin

    2016-12-23

    Extracting drug-disease correlations is crucial for unveiling disease mechanisms, as well as for discovering new indications of available drugs, i.e., drug repositioning. Both the interactome and the knowledge of disease-associated and drug-associated genes remain incomplete. We present a new method to predict the associations between drugs and diseases. Our method is based on a module distance, which was originally proposed to calculate distances between modules in the incomplete human interactome. We first map all the disease genes and drug genes to a combined protein interaction network. Then, based on the module distance, we calculate the distances between drug gene sets and disease gene sets, and take the distances as the relationships of drug-disease pairs. We also filter possible false-positive drug-disease correlations by p-value. Finally, we validate the top 100 drug-disease associations related to six drugs in the predicted results. The overlap of our predicted correlations with those reported in the Comparative Toxicogenomics Database (CTD) and in the literature, and their enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, demonstrates that our approach can not only effectively identify new drug indications, but also provide new insight into drug-disease discovery.
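
    A common network-medicine form of such a set-to-set distance is the mean, over the genes of each set, of the shortest path to the closest gene of the other set. A sketch on a toy graph (the paper's exact module distance may differ in detail; this shows only the general construction):

```python
from collections import deque

def bfs_dist(graph, source):
    """Unweighted shortest-path lengths from source (breadth-first search)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def set_distance(graph, A, B):
    """Mean shortest path from each gene to the closest gene in the other set."""
    total, n = 0, 0
    for s, t in ((A, B), (B, A)):
        for u in s:
            d = bfs_dist(graph, u)
            total += min(d[v] for v in t if v in d)
            n += 1
    return total / n

# Toy path network 1 - 2 - 3 - 4; drug genes {1}, disease genes {4}.
g = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert set_distance(g, {1}, {4}) == 3.0
```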

  17. Development of a S/w System for Relative Positioning Using GPS Carrier Phase

    NASA Astrophysics Data System (ADS)

    Ahn, Yong-Won; Kim, Chun-Hwey; Park, Pil-Ho; Park, Jong-Uk; Jo, Jeong-Ho

    1997-12-01

    We developed a GPS phase data processing S/W system which calculates baseline vectors and distances between two points located on the surface of the Earth. For this development, a Double-Difference method and L1 carrier phase data from GPS (Global Positioning System) were used. This S/W system consists of four main parts: satellite position calculation, Single-Difference equation, Double-Difference equation, and correlation. To verify our S/W, we fixed KAO (N36.37, E127.37, H77.61 m), one of the International GPS Service for Geodynamics stations, which is located at Tae-Jon, and we measured baseline vectors and relative distances with data from observations at approximate baseline distances of 2.7, 42.1, 81.1, and 146.6 km. We then compared the vectors and distances with those obtained from the GPSurvey S/W system using the L1/L2 ion-free method and broadcast ephemeris. From this comparison, we found that the baseline vectors X, Y, Z and the baseline distances matched to within 50 cm and 10 cm, respectively.

  18. Estimating plant distance in maize using Unmanned Aerial Vehicle (UAV).

    PubMed

    Zhang, Jinshui; Basso, Bruno; Price, Richard F; Putman, Gregory; Shuai, Guanyuan

    2018-01-01

    Distances between rows and between plants are essential parameters that affect final grain yield in row crops. This paper presents the results of research intended to develop a novel method to quantify the distance between maize plants at field scale using an Unmanned Aerial Vehicle (UAV). Using this method, we can recognize maize plants as objects and calculate the distance between plants. We initially developed our method by training an algorithm in an indoor facility with plastic corn plants. The method was then scaled up and tested in a farmer's field with maize plant spacing that exhibited natural variation. The results of this study demonstrate that it is possible to precisely quantify the distance between maize plants. We found that the accuracy of the measurement of the distance between maize plants depended on the height above ground level at which the UAV imagery was taken. This study provides an innovative approach to quantify plant-to-plant variability and, thereby, improve final crop yield estimates.

  19. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    NASA Technical Reports Server (NTRS)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.

  20. On Stellar Winds as a Source of Mass: Applying Bondi-Hoyle-Lyttleton Accretion

    NASA Astrophysics Data System (ADS)

    Detweiler, L. G.; Yates, K.; Siem, E.

    2017-12-01

    The interaction between planets orbiting stars and the stellar wind those stars emit is investigated. The main goal of this research is to devise a method for calculating the amount of mass accumulated by an arbitrary planet from the stellar wind of its parent star via accretion processes. To achieve this goal, the Bondi-Hoyle-Lyttleton (BHL) mass accretion rate equation and model is employed. To use the BHL equation, various parameters of the stellar wind are required, including the velocity, density, and speed of sound of the wind. In order to create a method applicable to arbitrary planets orbiting arbitrary stars, Eugene Parker's isothermal stellar wind model is used to calculate these stellar wind parameters. In an isothermal wind the speed of sound is simple to compute; however, the velocity and density equations are transcendental, so their solutions must be approximated numerically. By combining Parker's isothermal stellar wind model with the BHL accretion equation, a method for computing planetary accretion rates inside a star's stellar wind is realized. This method is then applied to a variety of scenarios. First, it is used to calculate the amount of mass that our solar system's planets will accrete from the solar wind throughout the Sun's lifetime. Then, some theoretical situations are considered. We consider the amount of mass various brown dwarfs would accrete from the solar wind of our Sun throughout its lifetime if they were orbiting the Sun at Jupiter's distance. For very high mass brown dwarfs, a significant amount of mass is accreted; in the case of the brown dwarf 15 Sagittae B, it accretes enough mass to surpass the mass limit for hydrogen fusion. Since 15 Sagittae B orbits a star that is very similar to our Sun, this encouraged making calculations for 15 Sagittae B orbiting our Sun at its true distance from its star, 15 Sagittae. It was found that at this distance it does not accrete enough mass to surpass the mass limit for hydrogen fusion. Finally, we apply the method to brown dwarfs orbiting a 15 solar mass star at Jupiter's distance. A significantly smaller amount of mass is found to be accreted compared to the same brown dwarfs orbiting our Sun at the same distance.
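
    The BHL rate itself is a one-line formula, Mdot = 4 pi G^2 M^2 rho / (v^2 + c_s^2)^(3/2). A sketch with rough, purely illustrative solar-wind-like numbers (not the paper's Parker-model inputs):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bhl_accretion_rate(mass, wind_density, wind_speed, sound_speed):
    """Bondi-Hoyle-Lyttleton rate: 4*pi*G^2*M^2*rho / (v^2 + cs^2)^(3/2), in kg/s."""
    return (4.0 * math.pi * G ** 2 * mass ** 2 * wind_density
            / (wind_speed ** 2 + sound_speed ** 2) ** 1.5)

# Illustrative values: an Earth-mass body in a ~400 km/s wind of
# density ~8e-21 kg/m^3 with a ~60 km/s sound speed.
mdot = bhl_accretion_rate(5.97e24, 8e-21, 4e5, 6e4)
assert mdot > 0.0
```

The numerical work in the paper amounts to feeding this formula the velocity, density, and sound speed obtained from the isothermal wind solution at the planet's orbital distance.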

  1. [Fast discrimination of edible vegetable oil based on Raman spectroscopy].

    PubMed

    Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng

    2012-07-01

    A novel method to rapidly discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils of known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the degree of unsaturation of vegetable oil were selected as feature vectors, and the centers of all classes were then calculated. For an edible vegetable oil of unknown class, the same pretreatment and feature extraction methods were used. The Euclidean distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the between-class distance much larger with the new feature extraction method than with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
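
    The classification step described is a minimum-distance (nearest-centroid) classifier. A sketch with toy two-peak features (illustrative data and class names, not the paper's Raman spectra):

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vector (here: two normalized peak intensities)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, centroids):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical training features standing in for the two characteristic peaks.
X = np.array([[1.0, 0.1], [1.1, 0.2], [0.2, 0.9], [0.1, 1.0]])
y = np.array(["olive", "olive", "soybean", "soybean"])
cents = fit_centroids(X, y)
assert classify(np.array([0.95, 0.15]), cents) == "olive"
```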

  2. Exploratory Lattice QCD Study of the Rare Kaon Decay K^{+}→π^{+}νν[over ¯].

    PubMed

    Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T

    2017-06-23

    We report a first, complete lattice QCD calculation of the long-distance contribution to the K^{+}→π^{+}νν[over ¯] decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.

  3. Exploratory Lattice QCD Study of the Rare Kaon Decay K+→π+ν ν ¯

    NASA Astrophysics Data System (ADS)

    Bai, Ziyuan; Christ, Norman H.; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T.; Rbc-Ukqcd Collaboration

    2017-06-01

    We report a first, complete lattice QCD calculation of the long-distance contribution to the K+→π+ν ν ¯ decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.

  4. A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions

    ERIC Educational Resources Information Center

    McCaffrey, John G.

    2009-01-01

    A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms are present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…

  5. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  6. SU-F-T-125: Radial Dose Distributions From Carbon Ions of Therapeutic Energies Calculated with Geant4-DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, O

    Purpose: Radial dose distribution D(r) is the dose as a function of lateral distance from the path of a heavy charged particle. Its main application is in modelling of biological effects of heavy ions, including applications to hadron therapy. It is the main physical parameter of a broad group of radiobiological models known as the amorphous track models. Our purpose was to calculate D(r) with Monte Carlo for carbon ions of therapeutic energies, find a simple formula for D(r) and fit it to the Monte Carlo data. Methods: All calculations were performed with the Geant4-DNA code, for carbon ion energies from 10 to 400 MeV/u (ranges in water: ∼0.4 mm to 27 cm). The spatial resolution of the dose distribution in the lateral direction was 1 nm. The electron tracking cut-off energy was 11 eV (ionization threshold). The maximum lateral distance considered was 10 µm. Over this distance, D(r) decreases by eight orders of magnitude. Results: All calculated radial dose distributions had a similar shape dominated by the well-known inverse square dependence on the distance. Deviations from the inverse square law were observed close to the beam path (r < 10 nm) and at large distances (r > 1 µm). At small and large distances, D(r) decreased, respectively, slower and faster than the inverse square of distance. A formula for D(r) consistent with this behavior was found and fitted to the Monte Carlo data. The accuracy of the fit was better than 10% for all distances considered. Conclusion: We have generated a set of radial dose distributions for carbon ions that covers the entire range of therapeutic energies, for distances from the ion path of up to 10 µm. The latter distance is sufficient for most applications because the dose beyond 10 µm is extremely low.

  7. Collision for Li++He System. I. Potential Curves and Non-Adiabatic Coupling Matrix Elements

    NASA Astrophysics Data System (ADS)

    Yoshida, Junichi; O-Ohata, Kiyosi

    1984-02-01

    The potential curves and the non-adiabatic coupling matrix elements for the Li+ + He collision system were computed. The SCF molecular orbitals were constructed with CGTO atomic bases centered on each nucleus and on the center of mass of the two nuclei. The SCF and CI calculations were done at various internuclear distances in the range of 0.1-25.0 a.u. The potential energies and the wavefunctions were calculated to good approximation over the whole internuclear distance range. The non-adiabatic coupling matrix elements were calculated with a tentative method in which the ETF are approximately taken into account.

  8. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication using a single signature feature. It addresses the heavy computational workload of extracting global feature templates in online handwritten signature authentication, as well as the problem of unreasonable signature feature selection. In this experiment, the false acceptance rate (FAR) and false rejection rate (FRR) of the signatures are computed and the average equal error rate (AEER) is calculated. The feasibility of the combined template scheme is verified by comparing the average equal error rate of the combined template with that of the original template.
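
    The discrete Fréchet distance between two sampled curves (here, two pen trajectories) has a standard dynamic-programming formulation due to Eiter and Mannila; a sketch:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines (Eiter-Mannila DP)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): coupling distance using prefixes P[:i+1], Q[:j+1].
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

# Two parallel horizontal strokes one unit apart.
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]
assert discrete_frechet(a, b) == 1.0
```

In verification, a probe signature would be compared against each stored feature trajectory of the template and accepted when the distances fall below learned thresholds.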

  9. [A research in speech endpoint detection based on boxes-coupling generalization dimension].

    PubMed

    Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle

    2008-06-01

    In this paper, a new method of calculating the generalized dimension, based on a boxes-coupling principle, is proposed to overcome edge effects and to improve the performance of speech endpoint detection based on the original method of calculating the generalized dimension. The new method has been applied to speech endpoint detection. First, the length of the overlapping border was determined and, by calculating the generalized dimension while covering the speech signal with overlapped boxes, three-dimensional feature vectors comprising the box dimension, the information dimension, and the correlation dimension were obtained. Second, in light of the relation between feature distance and degree of similarity, feature extraction was conducted using a common distance. Lastly, a bi-threshold method was used to classify the speech signals. The results of the experiments indicated that, in comparison with the original generalized dimension (OGD) and the spectral entropy (SE) algorithm, the proposed method is more robust and effective for detecting speech signals containing different kinds of noise at different signal-to-noise ratios (SNR), especially at low SNR.
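
    The plain (non-overlapping) box-counting dimension that the paper generalizes can be sketched for a 1-D signal as follows; this is a simplified illustration, and the paper's boxes-coupling variant additionally overlaps box borders to suppress edge effects:

```python
import math

def box_counting_dimension(signal, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box (capacity) dimension of a 1-D signal: count amplitude
    boxes of each size, then fit the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, len(signal) - s + 1, s):
            seg = signal[i:i + s]
            # Boxes needed to cover the signal's range within this window.
            n += max(1, math.ceil((max(seg) - min(seg)) / s))
        counts.append(n)
    xs = [math.log(1.0 / s) for s in box_sizes]
    ys = [math.log(c) for c in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    # Least-squares slope of log(count) against log(1/size).
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A flat (smooth) signal has dimension 1; noisy speech frames score higher.
assert abs(box_counting_dimension([0.0] * 64) - 1.0) < 1e-9
```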

  10. Comparison of analytical methods for calculation of wind loads

    NASA Technical Reports Server (NTRS)

    Minderman, Donald J.; Schultz, Larry L.

    1989-01-01

    The following analysis is a comparison of analytical methods for the calculation of wind-load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind of 115 mph or greater, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind-load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of method; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.

  11. Similarities among receptor pockets and among compounds: analysis and application to in silico ligand screening.

    PubMed

    Fukunishi, Yoshifumi; Mikami, Yoshiaki; Nakamura, Haruki

    2005-09-01

    We developed a new method to evaluate the distances and similarities between receptor pockets or chemical compounds based on a multi-receptor versus multi-ligand docking affinity matrix. The receptors were classified by cluster analysis based on calculations of the distance between receptor pockets; a set of receptors with low sequence homology that bind a similar compound could be classified into one cluster. Based on this line of reasoning, we proposed a new in silico screening method. In this method, compounds in a database are docked to multiple targets, with a docking score that is a slightly modified version of the multiple active site correction (MASC) score. Receptors beyond a set distance from the target receptor are excluded from the analysis, and the modified MASC scores are calculated for the selected receptors. The choice of receptors is important for achieving a good screening result, and our clustering of receptors is useful for this purpose. The method was applied to the analysis of a set of 132 receptors and 132 compounds, and the results demonstrated that it achieves a high hit ratio compared to uniform sampling, using the receptor-ligand docking program Sievgene, which was newly developed and shows good docking performance, reconstructing 50.8% of the complexes to within 2 Å RMSD.
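
    The original MASC normalization (of which the paper uses a slightly modified version) standardizes each compound's raw docking score by that compound's mean and standard deviation across all receptors; a minimal sketch:

```python
import numpy as np

def masc_scores(raw):
    """MASC-style normalization of a compounds-by-receptors docking-score
    matrix: each score is standardized by that compound's mean and standard
    deviation taken across all receptors (one row per compound)."""
    raw = np.asarray(raw, dtype=float)
    mu = raw.mean(axis=1, keepdims=True)    # per-compound mean over receptors
    sd = raw.std(axis=1, keepdims=True)     # per-compound spread over receptors
    return (raw - mu) / sd
```

    The normalization removes compound-specific biases (e.g. molecular size effects) so that scores against different receptors become comparable.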

  12. A method for calculating a real-gas two-dimensional nozzle contour including the effects of gamma

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Boney, L. R.

    1975-01-01

    A method for calculating two-dimensional inviscid nozzle contours for a real gas or an ideal gas by the method of characteristics is described. The method consists of a modification of an existing nozzle computer program. The ideal-gas nozzle contour can be calculated for any constant value of gamma. Two methods of calculating the center-line boundary values of the Mach number in the throat region are also presented. The use of these three methods of calculating the center-line Mach number distribution in the throat region can change the distance from the throat to the inflection point by a factor of 2.5. A user's guide is presented for input to the computer program for both the two-dimensional and axisymmetric nozzle contours.

  13. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

    Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis of the decision problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered: the first relates to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, trade-off ranking calculates the distance from an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
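
    Of the three techniques, TOPSIS is the most widely known; a minimal sketch for benefit-type criteria (vector normalization and Euclidean distances to the ideal and anti-ideal points are assumed here) is:

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS:
    relative closeness to the ideal solution, based on Euclidean distances
    to the ideal and anti-ideal points (benefit criteria assumed)."""
    m = np.asarray(matrix, dtype=float)
    r = m / np.linalg.norm(m, axis=0)            # vector-normalize each criterion
    v = r * np.asarray(weights, dtype=float)     # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_plus = np.linalg.norm(v - ideal, axis=1)   # distance to ideal point
    d_minus = np.linalg.norm(v - anti, axis=1)   # distance to anti-ideal point
    return d_minus / (d_plus + d_minus)          # closeness in [0, 1], larger = better
```

    The sensitivity tests in the paper amount to perturbing `matrix` or `weights` and observing whether the ordering of the closeness scores changes.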

  14. Braking distance algorithm for autonomous cars using road surface recognition

    NASA Astrophysics Data System (ADS)

    Kavitha, C.; Ashok, B.; Nanthagopal, K.; Desai, Rohan; Rastogi, Nisha; Shetty, Siddhanth

    2017-11-01

    India is yet to accept semi/fully autonomous cars, and one of the reasons is loss of control on bad roads. Better handling on such roads requires advanced braking, which can be achieved by adding electronics to the conventional braking system. In recent years, automation in braking systems has brought benefits such as traction control and anti-lock braking. This research work describes and experimentally evaluates a method for recognizing the road surface profile and calculating braking distance. An ultrasonic surface-recognition sensor mounted underneath the car sends a high-frequency wave onto the road surface; a receiver within the sensor measures the time taken for the wave to rebound and thus calculates the distance from the point where the sensor is mounted. A displacement graph is plotted from the sensor output. A relationship can be derived between the displacement plot and the roughness index, from which the friction coefficient is derived in Matlab for continuous calculation throughout the distance travelled. Since this is non-contact profiling, it is non-destructive. The friction-coefficient values received in real time are used to calculate the optimum braking distance. When installed on ordinary cars, this system can also be used to create a database of road surfaces, especially in cities, which can be shared with other cars. This will help navigation as well as making cars more efficient.
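
    Once a friction coefficient has been estimated from the surface profile, the braking distance follows from the work-energy theorem; a simplified sketch (flat road, constant mu, with an assumed reaction time — this is illustrative physics, not the authors' Matlab pipeline):

```python
def braking_distance(speed_kmh, mu, reaction_time=1.0, g=9.81):
    """Total stopping distance in metres: the distance covered during the
    reaction time plus the braking distance v^2 / (2 * mu * g) obtained by
    equating kinetic energy to the work done by friction."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    return v * reaction_time + v ** 2 / (2.0 * mu * g)
```

    A lower friction coefficient (rough or wet surface) lengthens the braking distance, which is exactly why the real-time mu estimate matters.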

  15. Confronting the Gaia and NLTE spectroscopic parallaxes for the FGK stars

    NASA Astrophysics Data System (ADS)

    Sitnova, Tatyana; Mashonkina, Lyudmila; Pakhomov, Yury

    2018-04-01

    The understanding of the chemical evolution of the Galaxy relies on stellar chemical compositions. Accurate atmospheric parameters are a prerequisite for determining accurate chemical abundances. For late-type stars with known distance, the surface gravity (log g) can be calculated from the well-known relation between stellar mass, T eff, and absolute bolometric magnitude. This method depends only weakly on model atmospheres and provides reliable log g; however, accurate distances are available for only a limited number of stars. Another way to determine log g for cool stars is based on ionisation equilibrium, i.e. consistent abundances from lines of neutral and ionised species. In this study we determine atmospheric parameters moving step by step from well-studied nearby dwarfs to ultra-metal-poor (UMP) giants. In each sample, we select stars with the most reliable photometry-based T eff and distance-based log g, and compare the latter with the spectroscopic gravity calculated taking into account deviations from local thermodynamic equilibrium (LTE). After that, we apply the spectroscopic method of log g determination to the other stars of the sample, whose distances are unknown.
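
    The distance-based gravity mentioned above follows from the standard relation log g = log g_sun + log(M/M_sun) + 4 log(Teff/Teff_sun) + 0.4 (Mbol − Mbol_sun); a minimal sketch, with the bolometric magnitude formed from an apparent magnitude, a bolometric correction and a parallax (the adopted solar values are assumptions of this illustration):

```python
import math

# Adopted solar values (illustrative assumptions)
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.74

def log_g(mass, teff, v_mag, bc, parallax_mas):
    """Distance-based surface gravity: log g = log g_sun + log(M/M_sun)
    + 4 log(Teff/Teff_sun) + 0.4 (Mbol - Mbol_sun), with Mbol built from
    the apparent magnitude, bolometric correction and parallax."""
    dist_pc = 1000.0 / parallax_mas                      # parallax (mas) -> pc
    m_bol = v_mag + bc + 5.0 - 5.0 * math.log10(dist_pc)  # absolute bolometric mag
    return (LOGG_SUN + math.log10(mass)
            + 4.0 * math.log10(teff / TEFF_SUN)
            + 0.4 * (m_bol - MBOL_SUN))
```

    The parallax enters only logarithmically, which is one reason this gravity is so robust when the distance is well measured.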

  16. A method to calculate synthetic waveforms in stratified VTI media

    NASA Astrophysics Data System (ADS)

    Wang, W.; Wen, L.

    2012-12-01

    Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to VTI media. GRTM has the advantage of remaining stable in high-frequency calculations, unlike the Haskell matrix method (Haskell 1964), which explicitly excludes the exponentially growing terms in the propagation matrix and is limited to low-frequency computation. In the implementation, we also improve GRTM in two respects. 1) We apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence; this improvement is especially important when the depths of the source and receiver are close. 2) We adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) in the discrete wavenumber integration so that the integration can still be carried out efficiently at large epicentral distances. Because each frequency is computed independently, the program can also be implemented effectively in parallel. Our method provides a powerful tool for synthesizing broadband seismograms of VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
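
    The Shanks transformation used in improvement 1) replaces three consecutive partial sums by an extrapolated limit that usually converges much faster; a minimal scalar sketch, demonstrated on the slowly converging alternating series for ln 2 (the paper applies the same idea to the wavenumber-integration partial sums):

```python
def shanks(seq):
    """One pass of the Shanks transformation on a sequence of partial sums:
    S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n)."""
    out = []
    for a0, a1, a2 in zip(seq, seq[1:], seq[2:]):
        denom = a2 + a0 - 2.0 * a1
        out.append(a1 if denom == 0.0 else (a2 * a0 - a1 * a1) / denom)
    return out

# Partial sums of the alternating series ln 2 = 1 - 1/2 + 1/3 - ...
partials, s = [], 0.0
for k in range(1, 11):
    s += (-1.0) ** (k + 1) / k
    partials.append(s)
```

    The transformation can be iterated (Shanks of Shanks) for further acceleration when the sequence is close to geometric convergence.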

  17. Distance dependence in photoinduced intramolecular electron transfer. Additional remarks and calculations

    NASA Astrophysics Data System (ADS)

    Larsson, Sven; Volosov, Andrey

    1987-12-01

    Rate constants for photoinduced intramolecular electron transfer are calculated for four of the molecules studied by Hush et al. The electronic factor is obtained in quantum chemical calculations using the CNDO/S method. The results agree reasonably well with experiments for the forward reaction. Possible reasons for the disagreement for the charge recombination process are offered.

  18. [Calculating the stark broadening of welding arc spectra by Fourier transform method].

    PubMed

    Pan, Cheng-Gang; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao

    2012-07-01

    Calculating the electron density of a plasma from the Stark width of its spectral lines is the most effective and accurate method available. However, it is difficult to separate the Stark width from a composite line profile produced by several broadening mechanisms. In the present paper, the Fourier transform was used to separate the Lorentzian component from the observed spectrum and thereby obtain an accurate Stark width, from which we calculated the electron-density distribution of the TIG welding arc plasma. This method does not require accurate measurement of the arc temperature or of the instrumental broadening of the spectral lines, and it is able to reject noisy data. The results show that, on the axis, the electron density of the TIG welding arc decreases with increasing distance from the tungsten electrode, ranging between 1.21 × 10^17 cm^-3 and 1.58 × 10^17 cm^-3; in the radial direction, the electron density decreases with increasing distance from the axis, and near the tungsten the largest electron density lies off-axis.
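
    The separation works because the Gaussian and Lorentzian parts of a Voigt-like profile factorize in Fourier space: log|F(k)| = const − (sigma²/2) k² − gamma |k|, so the Lorentzian (Stark) half-width gamma appears as the coefficient of |k|. A minimal sketch on a synthetic, noise-free line (an illustration of the principle, not the authors' processing chain):

```python
import numpy as np

def stark_width_fourier(x, profile):
    """Estimate the Lorentzian (Stark) half-width gamma of a Voigt-like
    line: fit log|F(k)| against [1, |k|, k^2] by least squares; the |k|
    coefficient is -gamma and the k^2 coefficient carries the Gaussian part."""
    dx = x[1] - x[0]
    f = np.abs(np.fft.rfft(profile)) * dx          # |F(k)|, area-normalized
    k = 2.0 * np.pi * np.fft.rfftfreq(len(x), d=dx)
    keep = f > 1e-6 * f[0]                         # drop the underflowed tail
    a = np.vstack([np.ones(keep.sum()), k[keep], k[keep] ** 2]).T
    coef, *_ = np.linalg.lstsq(a, np.log(f[keep]), rcond=None)
    return -coef[1]                                # gamma from the |k| term

# Synthetic Voigt profile built directly in Fourier space
n, dx = 4096, 0.02
x = np.arange(n) * dx
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
sigma_true, gamma_true = 0.5, 0.3
profile = np.fft.irfft(np.exp(-0.5 * sigma_true ** 2 * k ** 2
                              - gamma_true * np.abs(k)), n) / dx
```

    On real spectra the fit range must avoid the noise-dominated high-|k| tail, which is how the method rejects noisy data.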

  19. Intra-rater reliability and agreement of various methods of measurement to assess dorsiflexion in the Weight Bearing Dorsiflexion Lunge Test (WBLT) among female athletes.

    PubMed

    Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio

    2017-01-01

    To examine the intra-observer reliability of, and agreement between, five methods of measuring dorsiflexion in the Weight Bearing Dorsiflexion Lunge Test, and to assess the degree of agreement between three of the methods, in female athletes. Repeated-measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement and the Minimum Detectable Change; reliability was analysed using the Intraclass Correlation Coefficient (ICC). The inclinometer-based methods had more than 6° of measurement error, whereas the angle calculated by the trigonometric function had 3.28° of error. The inclinometer-based methods had ICC values < 0.90, while the distance-based methods and the trigonometric angle had ICC values > 0.90. Concerning agreement between methods, bias ranged from 1.93° to 14.42° and random error from 4.24° to 7.96°. To assess the dorsiflexion angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method; the methods of measurement cannot be used interchangeably.

  20. Is multiple-sequence alignment required for accurate inference of phylogeny?

    PubMed

    Höhl, Michael; Ragan, Mark A

    2007-04-01

    The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
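
    A typical alignment-free, word-based distance of the kind compared here is the Euclidean distance between k-mer frequency profiles; a minimal sketch (a generic example, not the decaf+py implementation):

```python
from collections import Counter
import math

def kmer_distance(a, b, k=3):
    """Alignment-free distance between two sequences: Euclidean distance
    between their normalized k-mer frequency profiles."""
    def profile(s):
        c = Counter(s[i:i + k] for i in range(len(s) - k + 1))
        total = sum(c.values())
        return {w: n / total for w, n in c.items()}
    pa, pb = profile(a), profile(b)
    words = set(pa) | set(pb)
    return math.sqrt(sum((pa.get(w, 0.0) - pb.get(w, 0.0)) ** 2 for w in words))
```

    A full pairwise matrix of such distances can then be fed to a distance-based tree builder such as neighbor-joining.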

  1. Neutral Kaon Mixing from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Bai, Ziyuan

    In this work, we report the lattice calculation of two important quantities which emerge from second-order K0 - K¯0 mixing: DeltaMK and epsilonK. The RBC-UKQCD collaboration has performed the first calculation of DeltaMK with unphysical kinematics [1]. We now extend this calculation to near-physical and physical ensembles. In these physical or near-physical calculations, the two-pion energies are below the kaon threshold, and we have to examine the contribution of two-pion intermediate states to DeltaMK, as well as the enhanced finite-volume corrections arising from these states. We also report the first lattice calculation of the long-distance contribution to the indirect CP-violation parameter epsilonK. This calculation involves the treatment of a short-distance, ultraviolet divergence that is absent in the calculation of DeltaMK, and we report our techniques for correcting this divergence on the lattice. In this calculation, we used unphysical quark masses on the same ensemble that we used in [1]. Therefore, rather than providing a physical result, this calculation demonstrates the technique for calculating epsilonK and provides an approximate understanding of the size of the long-distance contributions. Various new techniques are employed in this work, such as All-Mode-Averaging (AMA), All-to-All (A2A) propagators and the super-jackknife method for analyzing the data.

  2. Trafficking of excitatory amino acid transporter 2- laden vesiclesin cultured astrocytes: a comparison between approximate and exact determination of trajectory angles

    PubMed Central

    Cavender, Chapin E.; Gottipati, Manoj K.; Parpura, Vladimir

    2014-01-01

    A clear consensus concerning the mechanisms of intracellular secretory vesicle trafficking in astrocytes is lacking in the physiological literature. A characterization of vesicle trafficking that may help researchers reach such a consensus is the trajectory angle, defined as the angle between the trajectory of a vesicle and a line radial to the cell's nucleus. In this study, we provide a precise definition of the trajectory angle, describe and compare two methods for its calculation in terms of measurable trafficking parameters, and give recommendations for the appropriate use of each method. We investigated the trafficking of excitatory amino acid transporter 2 (EAAT2) fluorescently tagged with enhanced green fluorescent protein (EGFP) to quantify and validate the usefulness of each method. The motion of fluorescent puncta, taken to represent vesicles containing EAAT2-EGFP, was found to be typical of secretory vesicle trafficking. An exact method for calculating the trajectory angle of these puncta produced no error but required a large computation time. An approximate method reduced the requisite computation time but produced an error that depended on the inverse of the ratio of the punctum's initial distance from the nucleus centroid to its maximal displacement. Fitting this dependence to a power function allowed us to establish an exclusion distance from the centroid, beyond which the approximate method is much less likely to produce an error above an acceptable 5%. We recommend that the exact method be used to calculate the trajectory angle for puncta closer to the nucleus centroid than this exclusion distance. PMID:25408463
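
    For illustration, an exact trajectory angle between a punctum's displacement and the radial line through its starting position can be computed directly from the dot product (a generic sketch of the definition, not the authors' code; the angle is folded into 0-90° since a radial line has no orientation):

```python
import math

def trajectory_angle(centroid, start, end):
    """Angle in degrees (0-90) between a punctum's displacement vector
    (start -> end) and the radial line from the nucleus centroid through
    its starting position."""
    rx, ry = start[0] - centroid[0], start[1] - centroid[1]   # radial direction
    dx, dy = end[0] - start[0], end[1] - start[1]             # displacement
    cosang = (rx * dx + ry * dy) / (math.hypot(rx, ry) * math.hypot(dx, dy))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return min(ang, 180.0 - ang)    # fold: travel direction along the line is ignored
```

    An angle near 0° indicates radial (toward/away from the nucleus) motion, near 90° tangential motion.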

  3. The effect of inhomogeneities on the distance to the last scattering surface and the accuracy of the CMB analysis

    NASA Astrophysics Data System (ADS)

    Bolejko, Krzysztof

    2011-02-01

    The standard analysis of the CMB data assumes that the distance to the last scattering surface can be calculated using the distance-redshift relation of the Friedmann model. However, in an inhomogeneous universe, even if ⟨δρ⟩ = 0, the distance relation is not the same as in the unperturbed universe. This can have serious consequences, as a change of distance affects the mapping of CMB temperature fluctuations into the angular power spectrum Cl. In addition, if the change of distance is relatively uniform, no new temperature fluctuations are generated; it is therefore a different effect from the lensing or ISW effects, which introduce additional CMB anisotropies. This paper shows that the accuracy of the CMB analysis can be impaired by the accuracy with which the distance is calculated within the cosmological model. Since this effect has not been fully explored before, several methods are examined to test how inhomogeneities affect the distance-redshift relation: the Dyer-Roeder relation, the lensing approximation, and a non-linear Swiss-cheese model. In all cases, the distance to the last scattering surface differs from the one obtained under the assumption of homogeneity. The difference can be as low as 1% and as high as 80%; a typical change of the distance is around 20-30%. Since the distance to the last scattering surface is set by the position of the CMB peaks, the distance needs to be adjusted to obtain a good fit, and after correcting the distance the cosmological parameters change. An improperly estimated distance to the last scattering surface can therefore be a major source of systematics. This paper shows that if inhomogeneities are taken into account when calculating the distance, then models with positive spatial curvature and with ΩΛ ~ 0.8-0.9 are preferred.
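
    Of the methods examined, the Dyer-Roeder relation is the easiest to reproduce: the angular-diameter distance obeys an ODE whose smoothness parameter alpha interpolates between a filled beam (alpha = 1, the FRW result) and an empty beam (alpha = 0). A minimal sketch for a flat LambdaCDM background (the parameter values are illustrative, not those of the paper):

```python
import numpy as np

def dyer_roeder_distance(z_max, alpha=1.0, om=0.3, ol=0.7, n=4000):
    """Angular-diameter distance (units of c/H0) to z_max from the
    Dyer-Roeder equation
      D'' + (2/(1+z) + E'/E) D' + (3/2) alpha Om (1+z) / E^2 D = 0
    in flat LambdaCDM; alpha = 1 gives the filled-beam (FRW) distance,
    alpha = 0 the empty-beam one.  Plain RK4 integration in z."""
    e = lambda z: np.sqrt(om * (1.0 + z) ** 3 + ol)     # E(z) = H/H0
    de = lambda z: 1.5 * om * (1.0 + z) ** 2 / e(z)     # dE/dz
    def rhs(z, y):
        d, dp = y
        return np.array([
            dp,
            -(2.0 / (1.0 + z) + de(z) / e(z)) * dp
            - 1.5 * alpha * om * (1.0 + z) / e(z) ** 2 * d])
    h = z_max / n
    z, y = 0.0, np.array([0.0, 1.0])    # D(0) = 0, D'(0) = 1/E(0) = 1
    for _ in range(n):
        k1 = rhs(z, y)
        k2 = rhs(z + h / 2.0, y + h / 2.0 * k1)
        k3 = rhs(z + h / 2.0, y + h / 2.0 * k2)
        k4 = rhs(z + h, y + h * k3)
        y = y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        z += h
    return float(y[0])
```

    The empty beam yields a larger distance than the filled beam at the same redshift, which is the sense in which inhomogeneity shifts the inferred distance to last scattering.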

  4. Empirical Calibration of the P-Factor for Cepheid Radii Determined Using the IR Baade-Wesselink Method

    NASA Astrophysics Data System (ADS)

    Joner, Michael D.; Laney, C. D.

    2012-05-01

    We have used 41 galactic Cepheids for which parallax or cluster/association distances are available, and for which pulsation parallaxes can be calculated, to calibrate the p-factor to be used in K-band Baade-Wesselink radius calculations. Our sample includes the 10 Cepheids from Benedict et al. (2007), and three additional Cepheids with Hipparcos parallaxes derived from van Leeuwen et al. (2007). Turner and Burke (2002) list cluster distances for 33 Cepheids for which radii have been or (in a few cases) can be calculated. Revised cluster distances from Turner (2010), Turner and Majaess (2008, 2012), and Majaess and Turner (2011, 2012a, 2012b) have been used where possible. Radii have been calculated using the methods described in Laney and Stobie (1995) and converted to K-band absolute magnitudes using the methods described in van Leeuwen et al. (2007), Feast et al. (2008), and Laney and Joner (2009). The resulting pulsation parallaxes have been used to estimate the p-factor for each Cepheid. These new results stand in contradiction to those derived by Storm et al. (2011), but are in good agreement with theoretical predictions by Nardetto et al. (2009) and with interferometric estimates of the p-factor, as summarized in Groenewegen (2007). We acknowledge the Brigham Young University College of Physical and Mathematical Sciences for continued support of research done using the facilities and personnel at the West Mountain Observatory. This support is connected with NSF/AST grant #0618209.

  5. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    PubMed

    Li, Lian-Hui; Mo, Rong

    2015-01-01

    The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method based on multi-attribute evaluation is therefore proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judges' importance degrees, and a trapezoid fuzzy scale-rough AHP that accounts for these importance degrees is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study illustrates the method's correctness and feasibility.
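
    The relative entropy distance that replaces Euclidean distance in the improved TOPSIS can be sketched as a symmetrized Kullback-Leibler (Jeffreys) divergence between normalized indicator vectors; the symmetrization and the smoothing constant are assumptions of this illustration, not details taken from the paper:

```python
import numpy as np

def relative_entropy_distance(p, q, eps=1e-12):
    """Symmetrized relative entropy (Jeffreys divergence) between two
    normalized indicator vectors: sum of KL(p||q) and KL(q||p), with a
    small eps to keep the logarithms finite."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))
```

    In the TOPSIS step, this divergence would replace the Euclidean distance of each alternative to the ideal and anti-ideal indicator vectors.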

  6. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop

    PubMed Central

    Li, Lian-hui; Mo, Rong

    2015-01-01

    The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method based on multi-attribute evaluation is therefore proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judges' importance degrees, and a trapezoid fuzzy scale-rough AHP that accounts for these importance degrees is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study illustrates the method's correctness and feasibility. PMID:26414758

  7. An Aggregated Method for Determining Railway Defects and Obstacle Parameters

    NASA Astrophysics Data System (ADS)

    Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat

    2018-03-01

    A method combining image-blur analysis and stereo vision algorithms to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacle objects is proposed. To estimate the deviation of the distance as a function of blur, a statistical approach and logarithmic, exponential and linear standard functions are used; the statistical approach includes least-squares estimation and the method of least modules. The accuracy of determining the distance to an object, its speed and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. The method is based on the physical dependence of the image of an object on the distance to it, through the focal length or aperture of the lens. In calculating the blur-spot diameter it is assumed that blurring occurs equally in all directions around a point. According to the proposed approach, it is possible to determine the distance to the studied object and its blur by analyzing a series of images obtained with the video detector under different settings. The article proposes and scientifically substantiates new methods, and improves existing ones, for detecting the parameters of static and moving objects of control, and compares the results of the various methods and experiments. It is shown that the aggregated method gives the best approximation to the real distances.
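
    The physical effect exploited here, the dependence of blur on object distance, follows from thin-lens geometry: for a lens of focal length f and aperture diameter A focused at distance s, an object at distance d produces a blur spot of diameter A·f/(s − f)·|d − s|/d. A minimal sketch (the textbook relation, not the authors' calibrated model):

```python
def blur_diameter(distance, focus_distance, focal_length, aperture):
    """Thin-lens blur-spot (circle of confusion) diameter for an object at
    `distance` when the lens is focused at `focus_distance`; all quantities
    in metres, derived from similar triangles in the thin-lens geometry."""
    return (aperture * focal_length / (focus_distance - focal_length)
            * abs(distance - focus_distance) / distance)
```

    Inverting this relation (given two or more images with different lens settings) is what allows distance to be recovered from measured blur.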

  8. Calculation methods study on hot spot stress of new girder structure detail

    NASA Astrophysics Data System (ADS)

    Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing

    2017-10-01

    To study modeling and calculation methods for the hot spot stress of a new girder structural detail, several finite element models of this welded detail were established with the finite element software ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling method at the weld toe and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress between the thickness direction and the surface direction among different models grows as the distance from the weld toe decreases. When the distance from the toe is greater than 0.5t, the normal stresses of solid models, shell models with welds and shell models without welds tend to be consistent along the surface direction. It is therefore recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the calculation and analysis results, shell models have good mesh stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so it is suggested that formula 2 and the solid45 element be used in the hot spot stress extrapolation for this welded detail. For each finite element model, whatever the shell modeling method, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. At the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the thickness of the main plate increases, with a variation range within 7.5%.
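
    For reference, a common two-point linear surface extrapolation of hot spot stress is the IIW recommendation using the stresses at 0.4t and 1.0t from the weld toe; it is given here as a generic illustration (whether it coincides with the paper's "formula 2" is not stated in the abstract):

```python
def hot_spot_stress(s_04t, s_10t):
    """IIW two-point linear surface extrapolation of hot spot stress to the
    weld toe, from the surface stresses at 0.4t and 1.0t in front of it
    (t = plate thickness): sigma_hs = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t)."""
    return 1.67 * s_04t - 0.67 * s_10t
```

    For a stress field varying linearly along the surface, the extrapolation reproduces the value at the toe almost exactly (the coefficients are rounded from 5/3 and 2/3).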

  9. Multidimensional Risk Analysis: MRISK

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond; Brown, Douglas; O'Shea, Sarah Beth; Reith, William; Rabulan, Jennifer; Melrose, Graeme

    2015-01-01

    Multidimensional Risk (MRISK) calculates a combined multidimensional score using the Mahalanobis distance. MRISK accounts for covariance between consequence dimensions, which de-conflicts the interdependencies of the consequence dimensions and provides a clearer depiction of risks. Additionally, in the event the dimensions are not correlated, the Mahalanobis distance reduces to Euclidean distance normalized by the variance and therefore represents the most flexible and optimal method for combining dimensions. MRISK is currently being used in NASA's Environmentally Responsible Aviation (ERA) project to assess risk and prioritize scarce resources.
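
    The core operation is the Mahalanobis distance of a consequence vector from a reference point under the consequence covariance; a minimal sketch (the scoring pipeline around it is specific to MRISK and not reproduced here):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of point x from a distribution with the given
    mean and covariance: sqrt((x - mean)^T C^-1 (x - mean)).  With a
    diagonal covariance this reduces to variance-normalized Euclidean
    distance, as noted in the text."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```
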

  10. A novel heterogeneous training sample selection method on space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground-target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the larger values are preferred, realizing the dimension reduction. Finally, simulation results with the Mountain-Top data verify the effectiveness of the proposed method.
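
    The similarity measure can be illustrated with the modified (mean) Hausdorff distance between two point sets, i.e. the larger of the two mean nearest-neighbour distances; this is a generic sketch, while the paper applies the measure to STAP training-sample vectors:

```python
import numpy as np

def mean_hausdorff(a, b):
    """Modified (mean) Hausdorff distance between point sets a and b:
    for each set, average the distance of every point to its nearest
    neighbour in the other set, and return the larger of the two averages."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

    Unlike the classical Hausdorff distance (a max of minima), the mean variant is less sensitive to single outlying points, which suits contaminated sample sets.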

  11. An improved approach to the analysis of drug-protein binding by distance geometry

    NASA Technical Reports Server (NTRS)

    Goldblum, A.; Kieber-Emmons, T.; Rein, R.

    1986-01-01

    The calculation of side chain centers of coordinates and the subsequent generation of side chain-side chain and side chain-backbone distance matrices is suggested as an improved method for viewing interactions inside proteins and for the comparison of protein structures. The use of side chain distance matrices is demonstrated with free PTI, and the use of difference distance matrices for side chains is shown for free and trypsin-bound PTI as well as for the X-ray structures of trypsin complexes with PTI and with benzamidine. It is found that conformational variations are reflected in the side chain distance matrices much more than in the standard C-C distance representations.

  12. Application of the ultrametric distance to portfolio taxonomy. Critical approach and comparison with other methods

    NASA Astrophysics Data System (ADS)

    Skórnik-Pokarowska, Urszula; Orłowski, Arkadiusz

    2004-12-01

    We calculate the ultrametric distance between the pairs of stocks that belong to the same portfolio. The ultrametric distance allows us to distinguish groups of related shares. In this way, we can construct a portfolio taxonomy that can be used for building an efficient portfolio. We also construct a portfolio taxonomy based not only on stock prices but also on economic indices such as the liquidity ratio, debt ratio and sales profitability ratio. We show that a good investment strategy can be obtained by applying the so-called Constant Rebalanced Portfolio strategy to the portfolio chosen by the taxonomy method.
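
    The subdominant ultrametric this line of work builds on can be sketched as follows: correlations are mapped to the metric d = sqrt(2(1 - rho)), and the ultrametric distance between two stocks is the largest edge on the path joining them in the minimal spanning tree of the distance graph (the correlations below are hypothetical):

```python
import math

def corr_distance(rho):
    """Map a correlation coefficient to the metric d = sqrt(2 * (1 - rho))."""
    return math.sqrt(2.0 * (1.0 - rho))

def ultrametric(names, d):
    """Subdominant ultrametric: the largest edge weight on the path joining
    two nodes in the minimal spanning tree of the distance graph."""
    # Prim's algorithm for the MST; adjacency lists hold (neighbor, weight).
    in_tree, adj = {names[0]}, {n: [] for n in names}
    while len(in_tree) < len(names):
        u, v = min(((a, b) for a in in_tree for b in names if b not in in_tree),
                   key=lambda e: d[frozenset(e)])
        w = d[frozenset((u, v))]
        adj[u].append((v, w))
        adj[v].append((u, w))
        in_tree.add(v)

    def path_max(x, y, prev=None):
        if x == y:
            return 0.0
        for nxt, w in adj[x]:
            if nxt != prev:
                tail = path_max(nxt, y, x)
                if tail is not None:
                    return max(w, tail)
        return None

    return path_max

# Hypothetical correlations between three stocks A, B, C.
corr = {frozenset("AB"): 0.9, frozenset("AC"): 0.3, frozenset("BC"): 0.2}
d = {k: corr_distance(v) for k, v in corr.items()}
u = ultrametric(list("ABC"), d)
# A and B cluster tightly; C joins at the larger linkage distance, so the
# ultrametric satisfies u(A,C) == u(B,C) >= u(A,B).
```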

  13. Structure of aqueous cesium metaborate solutions by X-ray scattering and DFT calculation

    NASA Astrophysics Data System (ADS)

    Zhang, W. Q.; Fang, C. H.; Fang, Y.; Zhu, F. Y.; Zhou, Y. Q.; Liu, H. Y.; Li, W.

    2018-05-01

    In the present work, precise radial distribution functions (RDFs) of cesium metaborate solutions with salt-water molar ratios of 1:25, 1:30 and 1:35, over a large scattering-vector range (3.91-214.26 nm^-1), were obtained by X-ray scattering. Polyborate speciation was determined using the Newton iteration method with measured pH values and literature equilibrium constants. Structural parameters such as the coordination number, interatomic distance and Debye-Waller factor were obtained through model calculation. The B-O(H2O) distance was determined to be ∼0.37 nm with a hydration number of ∼7.8 for B(OH)4-. The Cs-B distance of the contact ion pair CsB(OH)4(0) was measured to be ∼0.46 nm with an interaction number of ∼0.77. The interaction distances and coordination numbers for the first and second shells of Cs-O(W) are ∼0.325 nm, ∼0.517 nm and ∼8.0, ∼11, respectively. Five low-energy configurations of [Cs(H2O)8]+, including the first and second hydration shells, were obtained from DFT calculations, and the most stable, eight-coordinated configuration is close to the model calculation. Furthermore, the effect of concentration is discussed in the X-ray scattering analysis, showing that the degree of hydration changes with concentration. The coordination number of Cs-O(H2O) remains stable, while the coordination distance changes from 0.325 nm to 0.330 nm. For H-bonding, the coordination number varies from 2.20 to 2.24 and the coordination distance from 0.276 nm to 0.278 nm with decreasing concentration.

  14. Method and apparatus for making absolute range measurements

    DOEpatents

    Allison, Stephen W.; Cates, Michael R.; Key, William S.; Sanders, Alvin J.; Earl, Dennis D.

    1999-01-01

    This invention relates to a method and apparatus for making absolute distance or ranging measurements using Fresnel diffraction. The invention employs a source of electromagnetic radiation having a known wavelength or wavelength distribution, which sends a beam of electromagnetic radiation through an object which causes it to be split (hereinafter referred to as a "beamsplitter"), and then to a target. The beam is reflected from the target onto a screen containing an aperture spaced a known distance from the beamsplitter. The aperture is sized so as to produce a Fresnel diffraction pattern. A portion of the beam travels through the aperture to a detector, spaced a known distance from the screen. The detector detects the central intensity of the beam. The distance from the object which causes the beam to be split to the target can then be calculated based upon the known wavelength, aperture radius, beam intensity, and distance from the detector to the screen. Several apparatus embodiments are disclosed for practicing the method embodiments of the present invention.

  15. Method and apparatus for making absolute range measurements

    DOEpatents

    Allison, S.W.; Cates, M.R.; Key, W.S.; Sanders, A.J.; Earl, D.D.

    1999-06-22

    This invention relates to a method and apparatus for making absolute distance or ranging measurements using Fresnel diffraction. The invention employs a source of electromagnetic radiation having a known wavelength or wavelength distribution, which sends a beam of electromagnetic radiation through an object which causes it to be split (hereinafter referred to as a "beam splitter"), and then to a target. The beam is reflected from the target onto a screen containing an aperture spaced a known distance from the beam splitter. The aperture is sized so as to produce a Fresnel diffraction pattern. A portion of the beam travels through the aperture to a detector, spaced a known distance from the screen. The detector detects the central intensity of the beam. The distance from the object which causes the beam to be split to the target can then be calculated based upon the known wavelength, aperture radius, beam intensity, and distance from the detector to the screen. Several apparatus embodiments are disclosed for practicing the method embodiments of the present invention. 9 figs.

  16. Distance Determination by Gated Viewing Systems Taking into Account the Illuminating Pulse Shape

    NASA Astrophysics Data System (ADS)

    Gorobets, V. A.; Kuntsevich, B. F.; Shabrov, D. V.

    2017-11-01

    For gated viewing systems with triangular and trapezoidal illuminating pulses, we have obtained the range-intensity profiles (RIPs) of the signal as the time delay was varied between the leading edges of the gate pulse and the illuminating pulse. We have established that if the duration of the illuminating pulse Δt_las is less than or equal to the duration of the gate pulse Δt_IC, then the expressions for the characteristic distances are the same as for rectangular pulses and they can be used to determine the distance to objects. When Δt_las > Δt_IC, in the case of triangular illuminating pulses the RIP is bell-shaped. For trapezoidal pulses, the RIP is bell-shaped with or without a plateau section. We propose an empirical method for determining the characteristic distances to the RIP maximum and the boundary points of the plateau section, which we then use to calculate the distance to the object. Using calibration constants, we propose a method for determining the distance to an object, and we have experimentally confirmed the feasibility of this method.

  17. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  18. Exact geodesic distances in FLRW spacetimes

    NASA Astrophysics Data System (ADS)

    Cunningham, William J.; Rideout, David; Halverson, James; Krioukov, Dmitri

    2017-11-01

    Geodesics are used in a wide array of applications in cosmology and astrophysics. However, it is not a trivial task to efficiently calculate exact geodesic distances in an arbitrary spacetime. We show that in spatially flat (3+1)-dimensional Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes, it is possible to integrate the second-order geodesic differential equations, and derive a general method for finding both timelike and spacelike distances given initial-value or boundary-value constraints. In flat spacetimes with either dark energy or matter, whether dust, radiation, or a stiff fluid, we find an exact closed-form solution for geodesic distances. In spacetimes with a mixture of dark energy and matter, including spacetimes used to model our physical universe, there exists no closed-form solution, but we provide a fast numerical method to compute geodesics. A general method is also described for determining the geodesic connectedness of an FLRW manifold, provided only its scale factor.
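
    For the mixed matter/dark-energy case where no closed form exists, a related cosmological distance, the line-of-sight comoving distance, can be obtained numerically; the sketch below (illustrative cosmological parameters, not taken from the paper) checks a trapezoidal integration against the exact Einstein-de Sitter (pure matter) result:

```python
import math

# D_C(z) = (c/H0) * integral_0^z dz'/E(z'), with
# E(z) = sqrt(Om*(1+z)**3 + Ol) for a flat matter + dark-energy model.

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance(z, h0=70.0, om=0.3, ol=0.7, steps=10000):
    """Trapezoidal integration of dz/E(z); returns a distance in Mpc."""
    def inv_e(zz):
        return 1.0 / math.sqrt(om * (1.0 + zz) ** 3 + ol)
    dz = z / steps
    total = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        total += inv_e(i * dz)
    return (C_KM_S / h0) * total * dz

# Pure-matter (Einstein-de Sitter) check against the closed form
# D_C = (2c/H0) * (1 - 1/sqrt(1+z)).
z = 1.0
numeric = comoving_distance(z, om=1.0, ol=0.0)
exact = 2.0 * C_KM_S / 70.0 * (1.0 - 1.0 / math.sqrt(1.0 + z))
```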

  19. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

  20. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  1. Experimental Verification of the Streamline Curvature Numerical Analysis Method Applied to the Flow through an Axial Flow Fan.

    DTIC Science & Technology

    1980-05-28

    [Abstract not available: the record text consists of fragments from the report's list of figures and nomenclature, defining quantities such as the tip vortex core radius, axial velocity ratio (AVR), chord length, tip yaw coefficient, and the longitudinal distance from the leading edge to the tip vortex calculation point.]

  2. Differences in the Stimulus Accommodative Convergence/Accommodation Ratio using Various Techniques and Accommodative Stimuli.

    PubMed

    Satou, Tsukasa; Ito, Misae; Shinomiya, Yuma; Takahashi, Yoshiaki; Hara, Naoto; Niida, Takahiro

    2018-04-04

    To investigate differences in the stimulus accommodative convergence/accommodation (AC/A) ratio obtained with various techniques and accommodative stimuli, and to describe a method for determining the stimulus AC/A ratio. A total of 81 subjects with a mean age of 21 years (range, 20-23 years) were enrolled. The relationship between ocular deviation and accommodation was assessed using two methods. Ocular deviation was measured by varying the accommodative requirement using spherical plus/minus lenses to create accommodative stimuli up to 10.00 diopters (D) (in 1.00 D steps). Ocular deviation was assessed using the alternate prism cover test in method 1, at distance (5 m) and near (1/3 m), and with the major amblyoscope in method 2. The stimulus AC/A ratios obtained using methods 1 and 2 were calculated and defined as the stimulus AC/A ratios with low and high accommodation, respectively, using the following analysis. The former was calculated as the difference between the convergence responses to accommodative stimuli of 3 D and 0 D, divided by 3. The latter was calculated as the difference between the convergence response to the maximum (max) accommodative stimulus with distinct vision of the subject and that to an accommodative stimulus of max minus 3.00 D, divided by 3. The median stimulus AC/A ratio with low accommodation (1.0 Δ/D for method 1 at distance, 2.0 Δ/D for method 1 at near, and 2.7 Δ/D for method 2) differed significantly among the measurement methods (P < 0.01). Differences in the median stimulus AC/A ratio with high accommodation (4.0 Δ/D for method 1 at distance, 3.7 Δ/D for method 1 at near, and 4.7 Δ/D for method 2) between method 1 at distance and method 2 were statistically significant (P < 0.05), while method 1 at near was not significantly different from the other methods. Differences in the stimulus AC/A ratio were significant according to measurement technique and accommodative stimulus. However, differences caused by measurement technique may be reduced by using a high accommodative stimulus during measurements.
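
    The ratio definitions in the abstract amount to a difference quotient; a minimal sketch with hypothetical convergence responses:

```python
def stimulus_ac_a(convergence_hi, convergence_lo, stimulus_step=3.0):
    """Stimulus AC/A ratio in prism diopters per diopter: the change in
    convergence response divided by the change in accommodative stimulus."""
    return (convergence_hi - convergence_lo) / stimulus_step

# Hypothetical convergence responses (prism diopters) to 3 D and 0 D stimuli,
# matching the 'low accommodation' definition: (resp(3 D) - resp(0 D)) / 3.
ratio = stimulus_ac_a(convergence_hi=8.0, convergence_lo=2.0)
```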

  3. STScI-PRC96-21b DISTANCE MEASUREMENTS TO A TYPE-IA SUPERNOVA BEARING GALAXY

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This Hubble Space Telescope image shows NGC 4639, a spiral galaxy located 78 million light-years away in the Virgo cluster of galaxies. The blue dots in the galaxy's outlying regions indicate the presence of young stars. Among them are young, bright stars called Cepheids, which are used as reliable milepost markers to obtain accurate distances to nearby galaxies. Astronomers measure the brightness of Cepheids to calculate the distance to a galaxy. Allan Sandage's team used Cepheids to measure the distance to NGC 4639, the farthest galaxy to which Cepheid distance has been calculated. After using Cepheids to calculate the distance to NGC 4639, the team compared the results to the peak brightness measurements of SN 1990N, a type Ia supernova located in the galaxy. Then they compared those numbers with the peak brightness of supernovae similarly calibrated in nearby galaxies. The team then determined that type Ia supernovae are reliable secondary distance markers, and can be used to determine distances to galaxies several hundred times farther away than Cepheids. An accurate value for the Hubble Constant depends on Cepheids and secondary distance methods. The color image was made from separate exposures taken in the visible and near-infrared regions of the spectrum with the Wide Field Planetary Camera 2. Credit: A. Sandage (Carnegie Observatories), A. Saha (Space Telescope Science Institute), G.A. Tammann, and L. Labhardt (Astronomical Institute, University Basel), F.D. Macchetto and N. Panagia (Space Telescope Science Institute/ European Space Agency), and NASA Image files in GIF and JPEG format and captions may be accessed on Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.
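
    The Cepheid technique rests on the distance modulus; a sketch with illustrative magnitudes (not the actual NGC 4639 photometry):

```python
def distance_from_modulus(m_apparent, M_absolute):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d) - 5."""
    return 10.0 ** ((m_apparent - M_absolute + 5.0) / 5.0)

# A Cepheid whose period-luminosity relation gives M = -5.0 and which is
# observed at apparent magnitude m = 26.2 (illustrative values only):
d_pc = distance_from_modulus(26.2, -5.0)
d_mly = d_pc * 3.2616 / 1e6  # parsecs -> millions of light-years
```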

  4. A flexible new method for 3D measurement based on multi-view image sequences

    NASA Astrophysics Data System (ADS)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is the basis of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which is robust to weakly textured images. Then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is calculated using an improved a contrario RANSAC filter method. A single-view point cloud is constructed accurately from two view images. After this, the overlapping features are used to eliminate the accumulated errors caused by the added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
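
    A minimal sketch of a Hellinger-style comparison of descriptor histograms (the values are hypothetical; the authors embed this comparison inside a full SIFT matching pipeline):

```python
import math

def hellinger_distance(p, q):
    """Hellinger distance between two histograms, normalized to sum to 1."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coeff.
    return math.sqrt(max(0.0, 1.0 - bc))

h1 = [10, 5, 0, 1]
h2 = [9, 6, 1, 1]
h3 = [0, 1, 12, 4]
# Similar descriptors are close; dissimilar ones approach the maximum of 1.
d_near = hellinger_distance(h1, h2)
d_far = hellinger_distance(h1, h3)
```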

  5. Structural model of dioxouranium(VI) with hydrazono ligands.

    PubMed

    Mubarak, Ahmed T

    2005-04-01

    Synthesis and characterization of several new coordination compounds of dioxouranium(VI) heterochelates with bidentate hydrazono compounds derived from 1-phenyl-3-methyl-5-pyrazolone are described. The ligands and uranyl complexes have been characterized by various physico-chemical techniques. The bond lengths and the force constants have been calculated from the asymmetric stretching frequency of the O-U-O groups. The infrared spectral studies showed a monobasic bidentate behaviour with the oxygen and hydrazo nitrogen donor system. The effect of Hammett's constant on the bond distances and the force constants was also discussed and plotted. Wilson's matrix method, Badger's formula, and the Jones and El-Sonbati equations were used to determine the stretching and interaction force constants, from which the U-O bond distances were calculated. The bond distances of these complexes were also investigated.

  6. Structural model of dioxouranium(VI) with hydrazono ligands

    NASA Astrophysics Data System (ADS)

    Mubarak, Ahmed T.

    2005-04-01

    Synthesis and characterization of several new coordination compounds of dioxouranium(VI) heterochelates with bidentate hydrazono compounds derived from 1-phenyl-3-methyl-5-pyrazolone are described. The ligands and uranyl complexes have been characterized by various physico-chemical techniques. The bond lengths and the force constants have been calculated from the asymmetric stretching frequency of the O-U-O groups. The infrared spectral studies showed a monobasic bidentate behaviour with the oxygen and hydrazo nitrogen donor system. The effect of Hammett's constant on the bond distances and the force constants was also discussed and plotted. Wilson's matrix method, Badger's formula, and the Jones and El-Sonbati equations were used to determine the stretching and interaction force constants, from which the U-O bond distances were calculated. The bond distances of these complexes were also investigated.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
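
    A coarse-grid simulation of this kind would typically be validated against the analytic free-space pathloss; a minimal reference sketch (not the paper's FDTD/TLM code):

```python
import math

def free_space_pathloss_db(distance_m, freq_hz):
    """Free-space pathloss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# 1 km at 1 GHz: about 92.4 dB -- the kind of reference value a coarse-grid
# pathloss simulation would be checked against.
loss = free_space_pathloss_db(1000.0, 1e9)
```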

  8. Distance between configurations in Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Fukuma, Masafumi; Matsumoto, Nobuyuki; Umeda, Naoya

    2017-12-01

    For a given Markov chain Monte Carlo algorithm we introduce a distance between two configurations that quantifies the difficulty of transition from one configuration to the other. We argue that the distance takes a universal form for the class of algorithms which generate local moves in the configuration space. We explicitly calculate the distance for the Langevin algorithm, and show that it indeed has the desired and expected properties of a distance. We further show that the distance for a multimodal distribution is dramatically reduced from a large value by the introduction of a tempering method. We also argue that, when the original distribution is highly multimodal with a large number of degenerate vacua, an anti-de Sitter-like geometry naturally emerges in the extended configuration space.

  9. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.

  10. Long term estimations of low frequency noise levels over water from an off-shore wind farm.

    PubMed

    Bolin, Karl; Almgren, Martin; Ohlsson, Esbjörn; Karasalo, Ilkka

    2014-03-01

    This article focuses on computations of low frequency sound propagation from an off-shore wind farm. Two different methods for sound propagation calculations are combined with meteorological data for every 3 hours in the year 2010 to examine the varying noise levels at a reception point at 13 km distance. It is shown that sound propagation conditions play a vital role in the noise impact from the off-shore wind farm and ordinary assessment methods can become inaccurate at longer propagation distances over water. Therefore, this paper suggests that methodologies to calculate noise immission with realistic sound speed profiles need to be combined with meteorological data over extended time periods to evaluate the impact of low frequency noise from modern off-shore wind farms.

  11. Method of determining the orbits of the small bodies in the solar system based on an exhaustive search of orbital planes

    NASA Astrophysics Data System (ADS)

    Bondarenko, Yu. S.; Vavilov, D. E.; Medvedev, Yu. D.

    2014-05-01

    A universal method of determining the orbits of newly discovered small bodies in the Solar System from their positional observations has been developed. The proposed method determines the geocentric distances of a small body by means of an exhaustive search over heliocentric orbital planes, followed by determination of the distance between the observer and the points at which the chosen plane intersects the vectors pointing to the object. Further, after eliminating heliocentric distances that a fortiori have low probabilities, the remaining orbital elements are determined using the classical Gauss method. The obtained sets of elements are used to compute the rms of the differences between the observed and calculated positions. The sets of elements with the smallest rms are considered the most probable for newly discovered small bodies. Afterwards, these elements are improved using the differential method.

  12. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive iteration of positioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  13. The computer coordination method and research of inland river traffic based on ship database

    NASA Astrophysics Data System (ADS)

    Liu, Shanshan; Li, Gen

    2018-04-01

    A computer-coordinated management method for inland river ship traffic is proposed in this paper. The position, speed and other navigation information of inland ships are obtained through VTS; static and dynamic ship databases are built; and a program for computer-coordinated management of inland river traffic is written in VB. The meeting states of ships are simulated and calculated automatically, providing long-distance collision-avoidance information so that long-distance collision avoidance can be realized. The results show that ships avoid or reduce meetings, and that this method can effectively control macro-level collision avoidance among ships.

  14. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), is proposed to solve the problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying loops in the dependency relationships of relays, possibly leading to a larger iterative calculation workload in setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in multi-loop networks are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for multi-loop networks is finally computed by tools. A 5-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were calculated, with results indicating that the method effectively reduces the number of forcibly broken edges for protection setting calculation in multi-loop networks.

  15. Adaptive density trajectory cluster based on time and space distance

    NASA Astrophysics Data System (ADS)

    Liu, Fagui; Zhang, Zhijie

    2017-10-01

    Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and the uncertainty/boundary problem of the data set. Accordingly, based on time and space, this paper defines a method for calculating the distance between sub-trajectories. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution, cluster centers and their number are selected automatically by a defined strategy, and the uncertainty/boundary problem of the data set is solved by a designed weighted rough c-means. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of the time and space distance, and adaptively obtains optimal cluster centers and rich clustering information for mining the features of mobile behavior in mobile and social networks.

  16. Standard operating procedure for calculating genome-to-genome distances based on high-scoring segment pairs.

    PubMed

    Auch, Alexander F; Klenk, Hans-Peter; Göker, Markus

    2010-01-28

    DNA-DNA hybridization (DDH) is a widely applied wet-lab technique to obtain an estimate of the overall similarity between the genomes of two organisms. Basing the species concept for prokaryotes ultimately on DDH was chosen by microbiologists as a pragmatic approach for deciding on the recognition of novel species, and it also allowed a relatively high degree of standardization compared to other areas of taxonomy. However, DDH is tedious and error-prone and, first and foremost, cannot be used to incrementally establish a comparative database. Recent studies have shown that in-silico methods for the comparison of genome sequences can be used to replace DDH. Considering the ongoing rapid technological progress of sequencing methods, genome-based prokaryote taxonomy is coming into reach. However, calculating distances between genomes depends on multiple choices of software and program settings. We here provide an overview of the modifications that can be applied to distance methods based on high-scoring segment pairs (HSPs) or maximally unique matches (MUMs) and that need to be documented. General recommendations on determining HSPs using BLAST or other algorithms are also provided. As a reference implementation, we introduce the GGDC web server (http://ggdc.gbdp.org).
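
    As a deliberately simplified stand-in for an in-silico intergenomic distance (GGDC itself derives distances from BLAST HSPs, not k-mers), a Jaccard-style k-mer distance on toy sequences:

```python
def kmer_distance(seq_a, seq_b, k=4):
    """Jaccard-style distance between the k-mer sets of two sequences.

    A toy illustration of a genome-to-genome distance; not the HSP-based
    measure used by GGDC.
    """
    def kmers(s):
        return {s[i:i + k] for i in range(len(s) - k + 1)}
    a, b = kmers(seq_a), kmers(seq_b)
    return 1.0 - len(a & b) / len(a | b)

genome1 = "ATGGCGTACGTTAGCATGGCGTACG"
genome2 = "ATGGCGTACGTTAGCATGGCGTACG"   # identical sequence -> distance 0
genome3 = "TTTTAAAACCCCGGGGTTTTAAAA"
d_same = kmer_distance(genome1, genome2)
d_diff = kmer_distance(genome1, genome3)
```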

  17. An accelerated hologram calculation using the wavefront recording plane method and wavelet transform

    NASA Astrophysics Data System (ADS)

    Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-06-01

    Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.

  18. Supertrees Based on the Subtree Prune-and-Regraft Distance

    PubMed Central

    Whidden, Christopher; Zeh, Norbert; Beiko, Robert G.

    2014-01-01

    Supertree methods reconcile a set of phylogenetic trees into a single structure that is often interpreted as a branching history of species. A key challenge is combining conflicting evolutionary histories that are due to artifacts of phylogenetic reconstruction and phenomena such as lateral gene transfer (LGT). Many supertree approaches use optimality criteria that do not reflect underlying processes, have known biases, and may be unduly influenced by LGT. We present the first method to construct supertrees by using the subtree prune-and-regraft (SPR) distance as an optimality criterion. Although calculating the rooted SPR distance between a pair of trees is NP-hard, our new maximum agreement forest-based methods can reconcile trees with hundreds of taxa and > 50 transfers in fractions of a second, which enables repeated calculations during the course of an iterative search. Our approach can accommodate trees in which uncertain relationships have been collapsed to multifurcating nodes. Using a series of benchmark datasets simulated under plausible rates of LGT, we show that SPR supertrees are more similar to correct species histories than supertrees based on parsimony or Robinson–Foulds distance criteria. We successfully constructed an SPR supertree from a phylogenomic dataset of 40,631 gene trees that covered 244 genomes representing several major bacterial phyla. Our SPR-based approach also allowed direct inference of highways of gene transfer between bacterial classes and genera. A small number of these highways connect genera in different phyla and can highlight specific genes implicated in long-distance LGT. [Lateral gene transfer; matrix representation with parsimony; phylogenomics; prokaryotic phylogeny; Robinson–Foulds; subtree prune-and-regraft; supertrees.] PMID:24695589

  19. Fluorescence quenching by TEMPO: a sub-30 Å single-molecule ruler.

    PubMed

    Zhu, Peizhi; Clamme, Jean-Pierre; Deniz, Ashok A

    2005-11-01

    A series of DNA molecules labeled with 5-carboxytetramethylrhodamine (5-TAMRA) and the small nitroxide radical TEMPO were synthesized and tested to investigate whether the intramolecular quenching efficiency can be used to measure short intramolecular distances in small ensemble and single-molecule experiments. In combination with distance calculations using molecular mechanics modeling, the experimental results from steady-state ensemble fluorescence and fluorescence correlation spectroscopy measurements both show an exponential decrease in the quenching rate constant with the dye-quencher distance in the 10–30 Å range. The results demonstrate that TEMPO-5-TAMRA fluorescence quenching is a promising method to measure short distance changes within single biomolecules.

  20. A submerged singularity method for calculating potential flow velocities at arbitrary near-field points

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1976-01-01

    A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).

  1. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on a full-scale modeling experiment. Use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. Much attention is also given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of magnetic field calculation, and examples of this approach are given. The article presents research results that allow this approach to be recommended for use in the method of fundamental solutions for full-scale modeling tests of technical systems.

  2. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
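    The three distance statistics have simple closed forms for normalized histograms, and the bootstrap significance level can be estimated by resampling from the pooled data under the null hypothesis that both samples come from the same population. The sketch below is a generic illustration of that procedure, not the author's code; the bin choice, the Bhattacharyya form of the Jeffries-Matusita distance, and the resample count are assumptions:

```python
import numpy as np

def euclidean(p, q):
    return float(np.sqrt(np.sum((p - q) ** 2)))

def jeffries_matusita(p, q):
    # one common form: JM = sqrt(2 * (1 - Bhattacharyya coefficient))
    bc = np.sum(np.sqrt(p * q))
    return float(np.sqrt(2.0 * (1.0 - bc)))

def kuiper(p, q):
    # Kuiper statistic: max positive plus max negative CDF deviation
    d = np.cumsum(p) - np.cumsum(q)
    return float(d.max() - d.min())

def bootstrap_pvalue(x, y, stat, n_boot=2000, bins=10, seed=0):
    """Significance of the distance between two summary histograms.

    x, y are 1-D samples; under the null hypothesis both come from the
    same population, so bootstrap resamples are drawn from the pooled data
    and the p-value is the fraction of resampled distances at least as
    large as the observed one.
    """
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(np.concatenate([x, y]), bins=bins)
    def hist(s):
        h, _ = np.histogram(s, bins=edges)
        return h / h.sum()
    observed = stat(hist(x), hist(y))
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        if stat(hist(bx), hist(by)) >= observed:
            count += 1
    return count / n_boot
```

    Any of the three statistics can be passed in as `stat`, which mirrors the paper's comparison of the three distances as test statistics.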

  3. Integrating concepts and skills: Slope and kinematics graphs

    NASA Astrophysics Data System (ADS)

    Tonelli, Edward P., Jr.

    The concept of force is a foundational idea in physics. To predict the results of applying forces to objects, a student must be able to interpret data representing changes in distance, time, speed, and acceleration. Comprehension of kinematics concepts requires students to interpret motion graphs, where rates of change are represented as slopes of line segments. Studies have shown that majorities of students who show proficiency with mathematical concepts fail to accurately interpret motion graphs. The primary aim of this study was to examine how students apply their knowledge of slope when interpreting kinematics graphs. To answer the research questions, a mixed-methods research design, which included a survey and interviews, was adopted. Ninety-eight (N=98) high school students completed surveys, which were quantitatively analyzed along with qualitative information collected from interviews of students (N=15) and teachers (N=2). The study showed that students who recalled methods for calculating slopes and speeds calculated slopes accurately, but calculated speeds inaccurately. When comparing slopes and speeds, most students resorted to calculating instead of visual inspection. Most students recalled and applied memorized rules. Students who calculated slopes and speeds inaccurately failed to recall methods of calculating slopes and speeds, but when comparing speeds, these students connected the concepts of distance and time to the line segments and the rates of change they represented. This study's findings will likely help mathematics and science educators better assist their students in applying knowledge of the definition of slope and related skills to kinematics concepts.

  4. Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.

    PubMed

    Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem

    2018-01-01

    Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and visualizing bone drilling ("virtual drilling"). To benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 inexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Virtual errors as compared with real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. The feedback methods functioned properly in the clinical setting. Distance control reduced the risk of drill damage in proportion to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Accuracy evaluation of distance inverse square law in determining virtual electron source location in Siemens Primus linac.

    PubMed

    Douk, Hamid Shafaei; Aghamiri, Mahmoud Reza; Ghorbani, Mahdi; Farhood, Bagher; Bakhshandeh, Mohsen; Hemmati, Hamid Reza

    2018-01-01

    The aim of this study is to evaluate the accuracy of the inverse square law (ISL) method for determining the location of the virtual electron source (S_vir) in a Siemens Primus linac. Several experimental methods have been presented for determining virtual and effective electron source locations, such as the Full Width at Half Maximum (FWHM), Multiple Coulomb Scattering (MCS), Multi Pinhole Camera (MPC) and Inverse Square Law (ISL) methods; among these, the ISL method is the most commonly used. Firstly, the Siemens Primus linac was simulated using the MCNPX Monte Carlo code. Then, using dose profiles obtained from the Monte Carlo simulations, the location of S_vir was calculated for 5, 7, 8, 10, 12 and 14 MeV electron energies and 10 cm × 10 cm, 15 cm × 15 cm, 20 cm × 20 cm and 25 cm × 25 cm field sizes. Additionally, the location of S_vir was obtained by the ISL method for the same electron energies and field sizes. Finally, the values obtained by the ISL method were compared to those resulting from the Monte Carlo simulation. The findings indicate that the calculated S_vir values depend on beam energy and field size. For a given energy, the distance of S_vir increases with field size in most cases; likewise, for a given applicator, it increases with electron energy in most cases. The variation of S_vir with field size at a fixed energy is larger than its variation with electron energy at a fixed field size. According to the results, the ISL method can be considered a good method for calculating the S_vir location at higher electron energies (14 MeV).
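    As a generic illustration of the ISL method itself (not the paper's implementation): if R_g is the central-axis reading at air gap g beyond the reference plane, the inverse square law gives sqrt(R_0/R_g) = 1 + g/f, so the virtual source distance f follows from a linear fit of sqrt(R_0/R_g) against g:

```python
import numpy as np

def virtual_source_distance(gaps_cm, readings):
    """Inverse-square-law estimate of the virtual electron source distance.

    readings[i] is the central-axis dose/ionization at air gap gaps_cm[i]
    beyond the reference measurement plane (gaps_cm[0] must be 0). Under
    the ISL, sqrt(R0/Rg) = 1 + g/f, so a linear fit of sqrt(R0/Rg) versus
    g has slope 1/f; f is the distance from the virtual source to the
    zero-gap plane. Illustrative sketch only.
    """
    g = np.asarray(gaps_cm, dtype=float)
    r = np.asarray(readings, dtype=float)
    y = np.sqrt(r[0] / r)            # equals 1 at g = 0
    slope, _ = np.polyfit(g, y, 1)
    return 1.0 / slope
```

    Feeding in synthetic readings generated exactly from the inverse square law recovers the assumed source distance, which is a quick sanity check before using measured data.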

  6. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2011-04-15

    about 32 nautical miles is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online distance calculator, available at http://www.indo.com/cgi-bin/dist... Section 2207 of the FY2009 defense authorization bill as passed by the House (H.R. 5658; H.Rept. 110-652 of May 16, 2008) stated: SEC. 2207

  7. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2010-12-09

    Release No. 233-09 of April 10, 2009, entitled “Quadrennial Defense Review To Determine Aircraft Carrier Homeporting In Mayport,” available online at... is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online distance calculator available at http://www.indo.com/cgi

  8. An Eulerian/Lagrangian method for computing blade/vortex impingement

    NASA Technical Reports Server (NTRS)

    Steinhoff, John; Senge, Heinrich; Yonghu, Wenren

    1991-01-01

    A combined Eulerian/Lagrangian approach to calculating helicopter rotor flows with concentrated vortices is described. The method computes a general evolving vorticity distribution without any significant numerical diffusion. Concentrated vortices can be accurately propagated over long distances on relatively coarse grids with cores only several grid cells wide. The method is demonstrated for a blade/vortex impingement case in 2D and 3D where a vortex is cut by a rotor blade, and the results are compared to previous 2D calculations involving a fifth-order Navier-Stokes solver on a finer grid.

  9. Application of Multifunctional Doppler LIDAR for Noncontact Track Speed, Distance, and Curvature Assessment

    NASA Astrophysics Data System (ADS)

    Munoz, Joshua

    The primary focus of this research is evaluation of the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as a non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, whose curvature is currently measured with an IMU, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature is dependent on acceleration and yaw-rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a "Hy-Rail" utility vehicle capable of traveling on rail show speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees curvature. Ground-truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground-truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve.
    Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions, providing high-accuracy data to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, absent of noise influenced by increasing speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground-truth data. Finally, curvature calculation using speed data is shown to have good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent from external reference systems, namely encoder and ground-truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow for fully independent operation are highly encouraged.
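    The curve-estimation idea described above (degrees curvature from left- and right-rail speed differentials) can be sketched kinematically. The gauge value and the US 100 ft chord definition of degree of curvature are standard assumptions, not details taken from this work:

```python
import math

GAUGE_FT = 4.7083  # standard track gauge (56.5 in) in feet; illustrative

def degrees_of_curvature(v_left, v_right):
    """Estimate railroad degree of curvature from left/right rail speeds.

    On a curve the outer rail sweeps a larger radius, so the two
    rail-referenced speeds satisfy v_outer/v_inner = (R + G/2)/(R - G/2).
    Solving for the centerline radius R and converting with the US
    100 ft chord definition, D = 2*asin(50/R) (R in feet). A simplified
    kinematic sketch, not the dissertation's algorithm.
    """
    v_hi, v_lo = max(v_left, v_right), min(v_left, v_right)
    if v_hi == v_lo:
        return 0.0                   # tangent (straight) track
    radius_ft = (GAUGE_FT / 2.0) * (v_hi + v_lo) / (v_hi - v_lo)
    return math.degrees(2.0 * math.asin(50.0 / radius_ft))
```

    A 5729.58 ft radius, the textbook 1-degree curve, produces a speed differential of only about 0.08% between the rails, which illustrates why high-accuracy speed measurement is a prerequisite for this approach.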

  10. Kinematics of our Galaxy from the PMA and TGAS catalogues

    NASA Astrophysics Data System (ADS)

    Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.

    2018-04-01

    We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least squares method (LSM), and a decomposition on a set of vector spherical harmonics (VSH). We trace dependencies of the derived parameters on distance, including the Oort constants A and B and the rotational velocity of the Galaxy V_rot at the Solar distance, for the common sample of stars of mixed spectral composition of the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes, or from reduced proper motions for fainter stars. The A, B and V_rot parameters derived from the proper motions of both catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants shows a gradual decrease with increasing distance, while the Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, first, we confirm the conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of parameters depends on the stellar sample used.
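    For context, the quantities traced in this record are linked by the standard Oort relations; a minimal sketch (the numerical values in the test are typical literature values, not results from this paper):

```python
def galactic_rotation(A, B, R0):
    """Link the Oort constants to the local Galactic rotation.

    A and B in km/s/kpc, R0 (Solar galactocentric distance) in kpc.
    Returns the rotational velocity at the Solar circle,
    V_rot = (A - B) * R0, and the local slope of the rotation curve,
    dV/dR = -(A + B); both are standard textbook relations.
    """
    return (A - B) * R0, -(A + B)
```

    With illustrative values A = 15.3 km/s/kpc, B = -11.9 km/s/kpc and R0 = 8.0 kpc, V_rot comes out near 218 km/s, in the range usually quoted for the Solar neighbourhood.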

  11. Method and apparatus for making absolute range measurements

    DOEpatents

    Earl, Dennis D [Knoxville, TN; Allison, Stephen W [Knoxville, TN; Cates, Michael R [Oak Ridge, TN; Sanders, Alvin J [Knoxville, TN

    2002-09-24

    This invention relates to a method and apparatus for making absolute distance or ranging measurements using Fresnel diffraction. The invention employs a source of electromagnetic radiation having a known wavelength or wavelength distribution, which sends a beam of electromagnetic radiation through a screen at least partially opaque at the wavelength. The screen has an aperture sized so as to produce a Fresnel diffraction pattern. A portion of the beam travels through the aperture to a detector spaced some distance from the screen. The detector detects the central intensity of the beam as well as a set of intensities displaced from a center of the aperture. The distance from the source to the target can then be calculated based upon the known wavelength, aperture radius, and beam intensity.

  12. A fast forward algorithm for real-time geosteering of azimuthal gamma-ray logging.

    PubMed

    Qin, Zhen; Pan, Heping; Wang, Zhonghao; Wang, Bintao; Huang, Ke; Liu, Shaohua; Li, Gang; Amara Konaté, Ahmed; Fang, Sinan

    2017-05-01

    Geosteering is an effective method to increase the reservoir drilling rate in horizontal wells. Based on the features of an azimuthal gamma-ray logging tool and the spatial location of strata, a fast forward calculation method for azimuthal gamma-ray logging is derived using the natural gamma-ray distribution equation in the formation. The responses of azimuthal gamma-ray logging while drilling in layered formation models with different thicknesses and positions are simulated and summarized using the method. The results indicate that the method calculates quickly, and when the tool nears a boundary, it can be used to identify the boundary and determine the distance from the logging tool to the boundary in time. Additionally, the formation parameters of the algorithm can be determined in the field with a simple proposed method based on information from an offset well. Therefore, the forward method can be used for geosteering in the field. A field example validates that the forward method can be used to determine the distance from the azimuthal gamma-ray logging tool to the boundary for real-time geosteering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the complementary strengths of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then the color histogram and straight-line (edge) gradient histogram of each object are calculated. The EMD statistical operator is used to measure the color-feature distance and the edge straight-line feature distance between corresponding objects in different periods, and an adaptive weighting method combines the two distances to construct the object heterogeneity. Finally, curvature-histogram analysis of the image patches yields the change detection results. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of the change detection.
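    For 1-D histograms with unit ground distance between adjacent bins, the EMD used as the distance operator reduces to the L1 distance between cumulative histograms, and the weighted combination of the two feature distances is then direct. A minimal sketch, with the bin layout and the fixed weight as assumptions (the paper computes the weight adaptively):

```python
import numpy as np

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms.

    For 1-D histograms with unit ground distance between adjacent bins,
    the EMD reduces to the L1 distance between the cumulative sums of the
    normalized histograms. Bin layout is an assumption, not a detail from
    the paper.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.abs(np.cumsum(p - q)).sum())

def heterogeneity(d_color, d_edge, w_color=0.5):
    """Weighted combination of color and edge-line feature distances.

    Sketch only: the paper derives the weight adaptively, here it is a
    fixed parameter.
    """
    return w_color * d_color + (1.0 - w_color) * d_edge
```

    Moving all the mass two bins over yields an EMD of exactly 2, matching the intuition of EMD as minimal total "work" to transform one histogram into the other.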

  14. Molecular phylogenetic trees - On the validity of the Goodman-Moore augmentation algorithm

    NASA Technical Reports Server (NTRS)

    Holmquist, R.

    1979-01-01

    A response is made to the reply of Nei and Tateno (1979) to the letter of Holmquist (1978) supporting the validity of the augmentation algorithm of Moore (1977) in reconstructions of nucleotide substitutions by means of the maximum parsimony principle. It is argued that the overestimation of the augmented numbers of nucleotide substitutions (augmented distances) found by Tateno and Nei (1978) is due to an unrepresentative data sample and that it is only necessary that evolution be stochastically uniform in different regions of the phylogenetic network for the augmentation method to be useful. The importance of the average value of the true distance over all links is explained, and the relative variances of the true and augmented distances are calculated to be almost identical. The effects of topological changes in the phylogenetic tree on the augmented distance and the question of the correctness of ancestral sequences inferred by the method of parsimony are also clarified.

  15. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used for calculating the holograms of points at different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency than other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.

  16. Distance measurement using frequency scanning interferometry with mode-hopped laser

    NASA Astrophysics Data System (ADS)

    Medhat, M.; Sobee, M.; Hussein, H. M.; Terra, O.

    2016-06-01

    In this paper, frequency scanning interferometry is implemented to measure distances up to 5 m absolutely. The setup consists of a Michelson interferometer, an external-cavity tunable diode laser, and an ultra-low-expansion (ULE) Fabry-Pérot (FP) cavity to measure the frequency scanning range. The distance is measured by acquiring the interference fringes from the Michelson and FP interferometers simultaneously while scanning the laser frequency. An online fringe-processing technique is developed to calculate the distance from the fringe ratio while removing the parts resulting from the laser mode-hops, without significantly affecting the measurement accuracy. This fringe-processing method enables accurate distance measurements up to 5 m with a measurement repeatability of ±3.9×10⁻⁶ L. An accurate translation stage is used to find the FP cavity free spectral range and thereby allow accurate measurement. Finally, the setup is applied to the short-distance calibration of a laser distance meter (LDM).
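    The core fringe-ratio relation can be sketched as follows; treating the scanned range as the FP peak count times the free spectral range and assuming vacuum are simplifications, and the mode-hop removal the paper describes is omitted:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fsi_distance(n_meas, n_fp, fsr_hz):
    """Absolute distance from a frequency-scanning interferometry fringe ratio.

    While the laser frequency is scanned, the measurement (Michelson)
    interferometer produces n_meas fringes and the Fabry-Perot cavity
    n_fp transmission peaks. The scanned range is then n_fp * fsr_hz, and
    the one-way distance is D = n_meas * c / (2 * n_fp * fsr_hz).
    Assumes unit refractive index; mode-hop correction (handled online in
    the paper) is omitted here.
    """
    return n_meas * C / (2.0 * n_fp * fsr_hz)
```

    For example, with a 1.5 GHz free spectral range and 100 counted FP peaks (a 150 GHz scan), a 5 m path produces about 5003 Michelson fringes, which gives a feel for the fringe densities the online processing has to handle.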

  17. On Calculating the Zero-Gravity Surface Figure of a Mirror

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.

    2010-01-01

    An analysis of the classical method of calculating the zero-gravity surface figure of a mirror from surface-figure measurements in the presence of gravity has led to improved understanding of the conditions under which the calculations are valid. In this method, one measures the surface figure in two or more gravity-reversed configurations, then calculates the zero-gravity surface figure as the average of the surface figures determined from these measurements. It is now understood that gravity reversal is not, by itself, sufficient to ensure validity of the calculations: it is also necessary to reverse mounting forces, for which purpose one must ensure that mounting-fixture/mirror contacts are located either at the same places or else sufficiently close to the same places in both gravity-reversed configurations. It is usually not practical to locate the contacts at the same places, raising the question of how close is sufficiently close. The criterion for sufficient closeness is embodied in the St. Venant principle, which, in the present context, translates to a requirement that the distance between corresponding gravity-reversed mounting positions be small in comparison to their distances to the optical surface of the mirror. The necessity of reversing mount forces is apparent in the behavior of the equations, familiar from finite element analysis (FEA), that govern deformation of the mirror.
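    The classical averaging step itself is simple; a minimal sketch assuming two registered surface maps with a consistent sign convention:

```python
import numpy as np

def zero_gravity_figure(face_up, face_down):
    """Classical gravity-reversal estimate of a mirror's zero-g surface figure.

    face_up and face_down are surface-figure maps measured in the two
    gravity-reversed orientations, sampled on the same grid with a
    consistent sign convention. The gravity-induced deflection reverses
    sign between the two while the intrinsic figure does not, so the
    zero-g figure is the mean and the deflection the half-difference.
    As the text stresses, this is valid only if mount forces are reversed
    as well.
    """
    up = np.asarray(face_up, dtype=float)
    down = np.asarray(face_down, dtype=float)
    return 0.5 * (up + down), 0.5 * (up - down)
```

    The half-difference output is a useful diagnostic: if the mount forces were properly reversed, it should resemble the FEA-predicted gravity sag.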

  18. How to calculate H3 better.

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik

    2009-11-14

    Efficient optimization of the basis set is key to achieving very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in variational calculations of H3, where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm⁻¹) obtained in the calculation with 1000 Gaussians are the most accurate results to date.

  19. Fast Laplace solver approach to pore-scale permeability

    NASA Astrophysics Data System (ADS)

    Arns, C. H.; Adler, P. M.

    2018-02-01

    We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to calculate permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
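    A toy version of the described pipeline (distance map → Poiseuille-like local conductivity → Laplace relaxation → flux) can be sketched as follows; the brute-force distance map and fixed Jacobi iteration count are stand-ins for the authors' far more efficient solver:

```python
import numpy as np

def permeability_2d(pore, n_iter=2000):
    """Pore-scale permeability via the Laplace-solver approximation (toy sketch).

    pore: 2-D boolean array, True = fluid; flow is driven along axis 1 by
    a unit pressure drop. Local conductivities come from the Euclidean
    distance map with a Poiseuille-like scaling sigma ~ d**2, the pressure
    equation div(sigma grad p) = 0 is relaxed by Jacobi iteration, and
    permeability is read off from the inlet flux.
    """
    ny, nx = pore.shape
    solid = np.argwhere(~pore)
    d = np.zeros((ny, nx))
    if solid.size:
        for i, j in np.argwhere(pore):   # brute-force distance map
            d[i, j] = np.sqrt(((solid - (i, j)) ** 2).sum(axis=1)).min()
    else:
        d[:] = 1.0
    sigma = np.where(pore, d ** 2, 0.0)

    def face(a, b):  # harmonic-mean conductivity on the face between cells
        s = a + b
        return np.where(s > 0, 2.0 * a * b / np.where(s > 0, s, 1.0), 0.0)

    sig = np.pad(sigma, 1)               # zero-conductivity halo = no-flow
    cN = face(sig[1:-1, 1:-1], sig[:-2, 1:-1])
    cS = face(sig[1:-1, 1:-1], sig[2:, 1:-1])
    cW = face(sig[1:-1, 1:-1], sig[1:-1, :-2])
    cE = face(sig[1:-1, 1:-1], sig[1:-1, 2:])
    csum = cN + cS + cW + cE

    pw = np.pad(np.tile(np.linspace(1.0, 0.0, nx), (ny, 1)), 1, mode="edge")
    for _ in range(n_iter):              # Jacobi relaxation
        num = (cN * pw[:-2, 1:-1] + cS * pw[2:, 1:-1]
               + cW * pw[1:-1, :-2] + cE * pw[1:-1, 2:])
        pw[1:-1, 1:-1] = np.where(csum > 0,
                                  num / np.where(csum > 0, csum, 1.0),
                                  pw[1:-1, 1:-1])
        pw[1:-1, 1] = 1.0                # inlet boundary: p = 1
        pw[1:-1, -2] = 0.0               # outlet boundary: p = 0
    # total flux through the first column of faces; k = Q * L / (A * dp)
    q = (cE[:, 0] * (pw[1:-1, 1] - pw[1:-1, 2])).sum()
    return q * (nx - 1) / ny
```

    For a straight slit channel the distance-squared weighting reproduces the Poiseuille cross-section profile, which is exactly why the d² scaling is a sensible conductivity assignment.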

  20. Research on numerical simulation and protection of transient process in long-distance slurry transportation pipelines

    NASA Astrophysics Data System (ADS)

    Lan, G.; Jiang, J.; Li, D. D.; Yi, W. S.; Zhao, Z.; Nie, L. N.

    2013-12-01

    Water-hammer pressure calculation for single-phase liquid in pipelines of uniform characteristics is already mature, but little research has addressed water-hammer calculation in complex pipelines with slurry flows carrying solid particles. In this paper, based on developments in slurry pipelines at home and abroad, the fundamental principles and methods of numerical simulation of transient processes are presented, and several boundary conditions are given. A model for calculating the water hammer of the solid and fluid phases is established for a practical long-distance slurry pipeline transportation system. Through numerical simulation and analysis of its transient processes, effective protection measures and operating suggestions are presented, which have guiding significance for the design and operating management of practical long-distance slurry pipeline transportation systems.

  1. Fluorescence Quenching by TEMPO: A Sub-30 Å Single-Molecule Ruler

    PubMed Central

    Zhu, Peizhi; Clamme, Jean-Pierre; Deniz, Ashok A.

    2005-01-01

    A series of DNA molecules labeled with 5-carboxytetramethylrhodamine (5-TAMRA) and the small nitroxide radical TEMPO were synthesized and tested to investigate whether the intramolecular quenching efficiency can be used to measure short intramolecular distances in small ensemble and single-molecule experiments. In combination with distance calculations using molecular mechanics modeling, the experimental results from steady-state ensemble fluorescence and fluorescence correlation spectroscopy measurements both show an exponential decrease in the quenching rate constant with the dye-quencher distance in the 10–30 Å range. The results demonstrate that TEMPO-5-TAMRA fluorescence quenching is a promising method to measure short distance changes within single biomolecules. PMID:16199509

  2. Ore Reserve Estimation of Saprolite Nickel Using Inverse Distance Method in PIT Block 3A Banggai Area Central Sulawesi

    NASA Astrophysics Data System (ADS)

    Khaidir Noor, Muhammad

    2018-03-01

    Reserve estimation is one of the important tasks in evaluating a mining project: it estimates the quality and quantity of minerals of economic value. The reserve calculation method plays an important role in determining the efficiency of commercial exploitation of a deposit. This study calculates the ore reserves contained in the study area, specifically Pit Block 3A. The nickel ore reserve was estimated from detailed exploration data, processed in Surpac 6.2 using the Inverse Distance Weighting estimation method with a squared power. The estimate obtained from 30 drill holes was 76,453.5 tons of saprolite with a density of 1.5 ton/m3 and a COG (Cut-Off Grade) of Ni ≥ 1.6 %, while the overburden was 112,570.8 tons with a waste-rock density of 1.2 ton/m3. The Stripping Ratio (SR) of 1.47 : 1 was smaller than the target Stripping Ratio of 1.60 : 1.
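    The Inverse Distance Weighting estimator with a squared power, as named in the abstract, can be sketched as follows. The drill-hole coordinates and grades below are hypothetical, and production software such as Surpac also applies search ellipses and block discretization that this sketch omits.

```python
import numpy as np

def idw_estimate(sample_xy, sample_grade, target_xy, power=2.0):
    """Inverse Distance Weighting: the grade at a target block is the
    weighted mean of nearby sample grades, with weights 1/d**power
    (power=2 is the 'squared power' variant named in the abstract)."""
    d = np.linalg.norm(np.asarray(sample_xy) - np.asarray(target_xy), axis=1)
    if np.any(d == 0):                  # target coincides with a sample
        return float(sample_grade[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_grade) / np.sum(w))

# Hypothetical drill-hole Ni grades (%) around a block centre at (0, 0).
xy = np.array([[10.0, 0.0], [0.0, 10.0], [-20.0, 0.0]])
ni = np.array([1.8, 2.0, 1.4])
grade = idw_estimate(xy, ni, (0.0, 0.0))  # nearer holes dominate
```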

  3. A method for cone fitting based on certain sampling strategy in CMM metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Guo, Chaopeng

    2018-04-01

    A cone-fitting method for engineering use is explored and implemented to overcome the shortcomings of the current fitting method, in which the calculation of the initial geometric parameters is imprecise, causing poor accuracy in surface fitting. A geometric distance function of the cone is constructed first; then a sampling strategy is defined to calculate the initial geometric parameters, and the nonlinear least-squares method is used to fit the surface. An experiment is designed to verify the accuracy of the method. The experimental data prove that the proposed method obtains the initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.

  4. Non-contact passive temperature measuring system and method of operation using micro-mechanical sensors

    DOEpatents

    Thundat, Thomas G.; Oden, Patrick I.; Datskos, Panagiotis G.

    2000-01-01

    A non-contact infrared thermometer measures target temperatures remotely without requiring knowledge of the ratio of the target size to the distance from the target to the thermometer. A collection means collects and focuses target IR radiation on an IR detector. The detector measures the thermal energy of the target over a spectrum using micromechanical sensors. A processor means calculates the collected thermal energy in at least two different spectral regions using a first algorithm in program form, and further calculates the ratio of the thermal energy in those spectral regions to obtain the target temperature independent of the target size, distance to the target, and emissivity, using a second algorithm in program form.
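    The patent does not spell out its two algorithms, but the ratio idea can be illustrated with the standard two-color (ratio) pyrometry result under the Wien approximation, where emissivity and the size/distance solid-angle factor cancel out of the ratio. The band wavelengths and emissivity below are assumptions for illustration.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, T, emissivity=1.0):
    """Spectral radiance in the Wien approximation (arbitrary scale)."""
    return emissivity * lam ** -5 * np.exp(-C2 / (lam * T))

def two_color_temperature(ratio, lam1, lam2):
    """Recover temperature from the ratio of thermal signals in two
    spectral bands. Emissivity, target size, and distance cancel in
    the ratio, which is the point made in the abstract."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / np.log(ratio * (lam1 / lam2) ** 5)

# Round trip at 500 K with assumed bands at 8 um and 10 um; a common
# emissivity of 0.3 cancels out of the ratio.
lam1, lam2, T_true = 8e-6, 10e-6, 500.0
r = wien_radiance(lam1, T_true, 0.3) / wien_radiance(lam2, T_true, 0.3)
T_est = two_color_temperature(r, lam1, lam2)
```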

  5. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on the principle of the intensity-correlation method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of the 3D point cloud based on the triangle method is analysed, and a 15 m depth slice of the target's 3D point cloud is obtained from two frames of images, with distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analysed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  6. Cross-Link Guided Molecular Modeling with ROSETTA

    PubMed Central

    Leitner, Alexander; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars

    2013-01-01

    Chemical cross-links identified by mass spectrometry generate distance restraints that reveal low-resolution structural information on proteins and protein complexes. The technology to reliably generate such data has become mature and robust enough to shift the focus to the question of how these distance restraints can be best integrated into molecular modeling calculations. Here, we introduce three workflows for incorporating distance restraints generated by chemical cross-linking and mass spectrometry into ROSETTA protocols for comparative and de novo modeling and protein-protein docking. We demonstrate that the cross-link validation and visualization software Xwalk facilitates successful cross-link data integration. Besides the protocols we introduce XLdb, a database of chemical cross-links from 14 different publications with 506 intra-protein and 62 inter-protein cross-links, where each cross-link can be mapped on an experimental structure from the Protein Data Bank. Finally, we demonstrate on a protein-protein docking reference data set the impact of virtual cross-links on protein docking calculations and show that an inter-protein cross-link can reduce on average the RMSD of a docking prediction by 5.0 Å. The methods and results presented here provide guidelines for the effective integration of chemical cross-link data in molecular modeling calculations and should advance the structural analysis of particularly large and transient protein complexes via hybrid structural biology methods. PMID:24069194

  7. Parsimonious description for predicting high-dimensional dynamics

    PubMed Central

    Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki

    2015-01-01

    When we observe a system, we often cannot observe all its variables and may have only a limited set of measurements. Under such circumstances, delay coordinates, vectors made of successive measurements, are useful for reconstructing the states of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, there is a practical limitation because the calculation for higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster, because we can reuse the calculated distances, and more accurately, because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy. PMID:26510518
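    One way to realize the "reuse of calculated distances" described above is a simple recursion over exponentially weighted squared differences. This is an illustrative sketch inferred from the abstract, not the authors' exact formulation.

```python
import numpy as np

def weighted_delay_distances(x, y, decay=0.9):
    """Squared distances between the 'virtually infinite-dimensional'
    delay coordinates of two scalar time series, with exponentially
    decaying weights on older lags. The recursion

        D2[t] = (x[t] - y[t])**2 + decay * D2[t-1]

    reuses the previously computed distance, which is what makes the
    description cheap to evaluate as time advances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    D2 = np.empty(len(x))
    prev = 0.0
    for t in range(len(x)):
        prev = (x[t] - y[t]) ** 2 + decay * prev
        D2[t] = prev
    return D2

d2 = weighted_delay_distances([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], decay=0.5)
```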

  8. Correlation Between the Field Line and Particle Diffusion Coefficients in the Stochastic Fields of a Tokamak

    NASA Astrophysics Data System (ADS)

    Calvin, Mark; Punjabi, Alkesh

    1996-11-01

    We use the method of quasi-magnetic surfaces to calculate the correlation between the field line and particle diffusion coefficients. The magnetic topology of a tokamak is perturbed by a spectrum of neighboring resonant resistive modes. The Hamiltonian equations of motion for the field line are integrated numerically. Poincare plots of the quasi-magnetic surfaces are generated initially and after the field line has traversed a considerable distance. From the areas of the quasi-magnetic surfaces and the field line distance, we estimate the field line diffusion coefficient. We start plasma particles on the initial quasi-surface, and calculate the particle diffusion coefficient from our Monte Carlo method (Punjabi A., Boozer A., Lam M., Kim H. and Burke K., J. Plasma Phys. 44, 405 (1990)). We then estimate the correlation between the particle and field diffusion as the strength of the resistive modes is varied.

  9. Autofocusing in digital holography using deep learning

    NASA Astrophysics Data System (ADS)

    Ren, Zhenbo; Xu, Zhimin; Lam, Edmund Y.

    2018-02-01

    In digital holography, it is critical to know the distance in order to reconstruct the multi-sectional object. This autofocusing problem is traditionally solved by reconstructing a stack of in-focus and out-of-focus images and using some focus metric, such as entropy or variance, to calculate the sharpness of each reconstructed image; the distance corresponding to the sharpest image is taken as the focal position. This method is effective but computationally demanding and time-consuming: to get an accurate estimate, one has to reconstruct many images, and sometimes a coarse search must be followed by a refinement. To overcome this problem, we propose to use deep learning, i.e., a convolutional neural network (CNN). Autofocusing is viewed as a classification problem in which the true distance serves as a label, so estimating the distance is equated to labeling a hologram correctly. To train such an algorithm, a total of 1000 holograms were captured under the same conditions (exposure time, incident angle, object), with only the distance varying; there are 5 labels corresponding to 5 distances. These data are randomly split into three datasets to train, validate, and test a CNN. Experimental results show that the trained network is capable of predicting the distance without reconstructing the hologram or knowing any physical parameters of the setup. The prediction time using this method is far less than that of traditional autofocusing methods.

  10. 3-D Deformation analysis via invariant geodetic observations.

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Esmaeili, R.

    2003-04-01

    In this paper a new method for 3-D deformation analysis based on invariant observations, such as distances and spatial angles, is presented. The displacement field used in classical deformation analysis is not reliable because the stability of the coordinate systems between successive epochs of observations cannot be guaranteed. In contrast, distances and spatial angles, i.e. measurements related to the geometry between the constituent points of an object, are independent of the definition of the coordinate system. We have devised a new approach for calculating the elements of the strain tensor directly from geometrical observations such as angles and distances. Being fully three-dimensional, the new method guarantees a complete deformation study in 3-D space.

  11. Covariance Manipulation for Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Use of probability of collision (Pc) has brought sophistication to conjunction assessment (CA). It is made possible by the JSpOC precision catalogue, which provides covariances, and it has essentially replaced miss distance as the basic CA parameter. The embrace of Pc has elevated methods to 'manipulate' the covariance to enable or improve CA calculations. Two such methods are examined here: compensation for absent or unreliable covariances through 'Maximum Pc' calculation constructs, and projection (not propagation) of epoch covariances forward in time to try to enable better risk assessments. Two questions are answered about each: the situations to which the approach is properly applicable, and the amount of utility it offers.

  12. Comparative Analysis of Methods of Evaluating the Lower Ionosphere Parameters by Tweek Atmospherics

    NASA Astrophysics Data System (ADS)

    Krivonos, A. P.; Shvets, A. V.

    2016-12-01

    Purpose: A comparative analysis of the phase and frequency methods for determining the effective Earth-ionosphere waveguide heights for the basic and higher types of normal waves (modes), and the distance to the source of radiation (lightning), has been made by analyzing pulse signals in the ELF-VLF range: tweek-atmospherics (tweeks). Design/methodology/approach: To test the methods in computer simulations, tweek waveforms were synthesized for an Earth-ionosphere waveguide model with an exponential conductivity profile of the lower ionosphere. The calculations were made for a 20-40 dB signal-to-noise ratio. Findings: The error of the frequency method in determining the effective height of the waveguide for different waveguide modes was less than 0.5 %. The error of the phase method in determining the effective height of the waveguide was less than 0.8 %. Errors in determining the distance to the lightning were less than 1 % for the phase method and less than 5 % for the frequency method for source ranges of 1000-3000 km. Conclusions: The analysis showed that the accuracy of the frequency and phase methods is practically the same within distances of 1000-3000 km. For distances less than 1000 km, the phase method gives a more accurate evaluation of the range, so the combination of the two methods can be used to improve estimates of the tweek's propagation path parameters.
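    The frequency method rests on the waveguide cutoff relation f_n = n*c/(2*h): each mode n is cut off below a frequency set by the effective waveguide height h. A minimal sketch follows; the 1.8 kHz cutoff is a typical nighttime value assumed for illustration, not data from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def effective_height(cutoff_hz, mode=1):
    """Effective Earth-ionosphere waveguide height from the observed
    cutoff frequency of waveguide mode n: f_n = n*c/(2*h), hence
    h = n*c/(2*f_n). This is the core of the frequency method; the
    phase method instead fits the mode's dispersion near cutoff."""
    return mode * C / (2.0 * cutoff_hz)

# A first-mode cutoff near 1.8 kHz corresponds to a nighttime
# reflection height of roughly 83 km.
h = effective_height(1.8e3)
```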

  13. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for the orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue, and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
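    A minimal sketch of the matrix-based Euclidean (chordal) average the authors advocate: average the rotation matrices entrywise, then project the result back onto SO(3) with an SVD, rather than averaging Euler/Cardan angle parameters. The example rotations are hypothetical.

```python
import numpy as np

def chordal_mean_rotation(rotations):
    """Matrix-based 'Euclidean' average of rotation matrices: average
    the matrices entrywise, then project back onto SO(3) via SVD.
    This sidesteps the flawed practice of averaging Euler/Cardan
    angle parameters individually."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # keep a proper rotation
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def rot_z(a):
    """Rotation by angle a (radians) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Averaging +10 deg and -10 deg about z should give the identity.
R_mean = chordal_mean_rotation([rot_z(np.radians(10)), rot_z(np.radians(-10))])
```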

  14. Self-similar slip distributions on irregular shaped faults

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Murphy, S.

    2018-06-01

    We propose a strategy to place a self-similar slip distribution on a complex fault surface that is represented by an unstructured mesh. This is achieved with a strategy based on the composite source model, in which a hierarchical set of asperities is placed on the fault, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of the distance between two points on the fault surface, known as the geodetic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of Huygens' principle. The difference between this method and other sample-based algorithms that precede it is the use of a curved front at the local level to calculate the distance. This technique produces a highly accurate computation of the distance, as the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust algorithm that is highly precise. We test the strategy on a planar surface in order to assess its ability to preserve the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first-arrival times in complex 3D surfaces or 3D volumes.

  15. Efficient distance calculation using the spherically-extended polytope (s-tope) model

    NASA Technical Reports Server (NTRS)

    Hamlin, Gregory J.; Kelley, Robert B.; Tornero, Josep

    1991-01-01

    An object representation scheme which allows for Euclidean distance calculation is presented. The object model extends the polytope model by representing objects as the convex hull of a finite set of spheres. An algorithm for calculating distances between objects is developed which is linear in the total number of spheres specifying the two objects.

  16. [Received power as a function of target distance in short-range optical radar devices].

    PubMed

    Riegl, J; Bernhard, M

    1974-04-01

    The dependence of the received optical power on the range in optical short-distance radar range finders is calculated by means of the methods of geometrical optics. The calculations are based on a constant intensity of the transmitter-beam cross section and on an ideal thin lens for the receiver optics. The results are confirmed by measurements. Even measurements using a nonideal thick lens system for the receiver optics are in reasonable agreement with the calculations.

  17. Detection of periodicity based on independence tests - III. Phase distance correlation periodogram

    NASA Astrophysics Data System (ADS)

    Zucker, Shay

    2018-02-01

    I present the Phase Distance Correlation (PDC) periodogram, a new periodicity metric based on the distance correlation concept of Gábor Székely. For each trial period, PDC calculates the distance correlation between the data samples and their phases. PDC requires adapting Székely's distance correlation to circular variables (phases). The resulting periodicity metric is best suited to sparse data sets, and it performs better than other methods for sawtooth-like periodicities. These include Cepheid and RR Lyrae light curves, as well as radial velocity curves of eccentric spectroscopic binaries. The performance of the PDC periodogram in other contexts is almost as good as that of the Generalized Lomb-Scargle periodogram. The concept of phase distance correlation can also be adapted to astrometric data, and it has the potential to be suitable for large evenly spaced data sets as well, after some algorithmic refinement.
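    A sketch of the idea, assuming plain absolute differences for the data values and the shorter-arc circular distance for phases; Zucker's actual adaptation of distance correlation to circular variables may differ in detail. Scanning trial periods and taking the one that maximizes this statistic yields the periodogram.

```python
import numpy as np

def _centered(D):
    """Double-centre a pairwise distance matrix, as in Szekely's
    distance covariance."""
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def phase_distance_correlation(values, times, period):
    """Distance correlation between data samples and their phases for
    one trial period: high when the folded data depend strongly on
    phase, i.e. near the true period."""
    phases = (np.asarray(times, float) % period) / period * 2.0 * np.pi
    v = np.asarray(values, float)
    A = _centered(np.abs(v[:, None] - v[None, :]))
    dphi = np.abs(phases[:, None] - phases[None, :])
    B = _centered(np.minimum(dphi, 2.0 * np.pi - dphi))  # circular distance
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(max(dcov2, 0.0) / denom)) if denom > 0 else 0.0

# Sawtooth with true period 2.0, sampled at sparse random times.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 20.0, 200)
v = (t % 2.0) / 2.0
pdc_true = phase_distance_correlation(v, t, 2.0)   # near the peak
pdc_off = phase_distance_correlation(v, t, 1.37)   # off-period trial
```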

  18. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    NASA Astrophysics Data System (ADS)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory: finding a path with minimum length between a specified pair of vertices. In any network, the weight of each edge is usually represented as a crisp real number, and the weights are then used in the calculation of the shortest path by deterministic algorithms. In practice, however, uncertainty is often encountered, so that the weight of an edge of the network is uncertain and imprecise. In this paper, a modified algorithm that combines a heuristic shortest path method with a fuzzy approach is proposed for solving a network with imprecise arc lengths. Both interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific instance of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is compared with the total distance obtained from the traditional nearest neighbour heuristic algorithm. The results show that the modified algorithm provides not only a sequence of visited cities similar to that of the traditional approach but also a good measurement of total shortest distance, which is smaller than the total shortest distance calculated by the traditional approach. Hence, this research contributes to the enrichment of methods used in solving the TSP.
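    For context, the traditional nearest-neighbour heuristic the paper benchmarks against can be written so that it also accepts imprecise (interval) arc lengths. Ranking intervals by their midpoint is one common convention and is an assumption here, as is the 4-city network; the paper's fuzzy ranking may differ.

```python
def nearest_neighbour_tour(dist, start=0, rank=lambda d: d):
    """Nearest-neighbour TSP heuristic. dist[i][j] may be a crisp
    number or an interval (lo, hi); `rank` maps an arc length to a
    comparable score (identity for crisp weights)."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        tour.append(min(unvisited, key=lambda j: rank(dist[here][j])))
        unvisited.remove(tour[-1])
    tour.append(start)                 # return to the start city
    return tour

midpoint = lambda iv: sum(iv) / 2.0   # one common interval ranking

# Hypothetical 4-city network with imprecise (interval) arc lengths.
D = [[(0, 0), (2, 4), (5, 7), (9, 11)],
     [(2, 4), (0, 0), (1, 3), (6, 8)],
     [(5, 7), (1, 3), (0, 0), (2, 4)],
     [(9, 11), (6, 8), (2, 4), (0, 0)]]
tour = nearest_neighbour_tour(D, rank=midpoint)
```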

  19. Double peak-induced distance error in short-time-Fourier-transform-Brillouin optical time domain reflectometers event detection and the recovery method.

    PubMed

    Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi

    2015-10-01

    The measured distance error caused by double peaks in BOTDR (Brillouin optical time domain reflectometer) systems is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in this paper, to the best of the authors' knowledge. The double peak, as a kind of Brillouin spectrum deformation, is important for enhancing spatial resolution, measurement accuracy, and crack detection. Because the peak powers of the BSS vary along the fiber, the measured starting point of a step-shaped frequency transition region is shifted, resulting in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric, deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method, based on double-peak detection and the corresponding BSS deformation, can be applied to calculate the real starting point, improving the distance accuracy of the STFT-based BOTDR system.

  20. Distance determination method of dust particles using Rosetta OSIRIS NAC and WAC data

    NASA Astrophysics Data System (ADS)

    Drolshagen, E.; Ott, T.; Koschny, D.; Güttler, C.; Tubiana, C.; Agarwal, J.; Sierks, H.; Barbieri, C.; Lamy, P. I.; Rodrigo, R.; Rickman, H.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Bertini, I.; Cremonese, G.; da Deppo, V.; Davidsson, B.; Debei, S.; de Cecco, M.; Deller, J.; Feller, C.; Fornasier, S.; Fulle, M.; Gicquel, A.; Groussin, O.; Gutiérrez, P. J.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Keller, H. U.; Knollenberg, J.; Kramm, J. R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Naletto, G.; Oklay, N.; Shi, X.; Thomas, N.; Poppe, B.

    2017-09-01

    The ESA Rosetta spacecraft has been tracking its target, the Jupiter-family comet 67P/Churyumov-Gerasimenko, in close vicinity for over two years. It hosts the OSIRIS instrument: the Optical, Spectroscopic, and Infrared Remote Imaging System, composed of two cameras; see e.g. Keller et al. (2007). In some imaging sequences dedicated to observing dust particles in the comet's coma, the two cameras took images at the same time. The aim of this work is to use these simultaneous double-camera observations to calculate the dust particles' distance to the spacecraft. As the two cameras are mounted on the spacecraft with an offset of 70 cm, the distance of particles observed by both cameras can be determined from the shift of the particles' apparent trails on the images. This paper presents first results of the ongoing work, introducing the distance determination method for the OSIRIS instrument and the analysis of an example particle. We note that this method works for particles in the range of about 500-6000 m from the spacecraft.
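    The geometry reduces to simple parallax: a particle at distance d seen from two cameras a baseline B apart appears shifted by an angle of roughly B/d. The pixel scale used below is an assumed value for illustration, not taken from the paper.

```python
import math

def particle_distance(baseline_m, pixel_shift, ifov_rad):
    """Distance of a dust particle from the spacecraft via the
    parallax between two cameras separated by baseline_m. The
    apparent trail shifts by pixel_shift pixels between the two
    images; each pixel subtends ifov_rad radians. For the small
    angles involved, d ~ B / theta."""
    theta = pixel_shift * ifov_rad
    return baseline_m / math.tan(theta)

# With the 0.70 m camera offset, a trail shift of 10 pixels at an
# assumed pixel scale of 18.6 microrad puts the particle a few km out.
d = particle_distance(0.70, 10, 18.6e-6)
```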

  1. Genome-wide gene order distances support clustering the gram-positive bacteria

    PubMed Central

    House, Christopher H.; Pellegrini, Matteo; Fitz-Gibbon, Sorel T.

    2015-01-01

    Initially using 143 genomes, we developed a method for calculating the pair-wise distance between prokaryotic genomes using a Monte Carlo method to estimate the conservation of gene order. The method was based on repeatedly selecting five or six non-adjacent random orthologs from each of two genomes and determining if the chosen orthologs were in the same order. The raw distances were then corrected for gene order convergence using an adaptation of the Jukes-Cantor model, as well as using the common distance correction D′ = −ln(1-D). First, we compared the distances found via the order of six orthologs to distances found based on ortholog gene content and small subunit rRNA sequences. The Jukes-Cantor gene order distances are reasonably well correlated with the divergence of rRNA (R2 = 0.24), especially at rRNA Jukes-Cantor distances of less than 0.2 (R2 = 0.52). Gene content is only weakly correlated with rRNA divergence (R2 = 0.04) over all distances, however, it is especially strongly correlated at rRNA Jukes-Cantor distances of less than 0.1 (R2 = 0.67). This initial work suggests that gene order may be useful in conjunction with other methods to help understand the relatedness of genomes. Using the gene order distances in 143 genomes, the relations of prokaryotes were studied using neighbor joining and agreement subtrees. We then repeated our study of the relations of prokaryotes using gene order in 172 complete genomes better representing a wider-diversity of prokaryotes. Consistently, our trees show the Actinobacteria as a sister group to the bulk of the Firmicutes. In fact, the robustness of gene order support was found to be considerably greater for uniting these two phyla than for uniting any of the proteobacterial classes together. The results are supportive of the idea that Actinobacteria and Firmicutes are closely related, which in turn implies a single origin for the gram-positive cell. PMID:25653643
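    The Monte Carlo core of the method can be sketched as follows. Real use would operate on ortholog tables rather than toy integer "genomes", and the paper additionally requires the sampled orthologs to be non-adjacent and applies a Jukes-Cantor-style convergence correction beyond the simple D' = -ln(1-D) shown here.

```python
import math
import random

def gene_order_distance(genome_a, genome_b, k=6, trials=2000, rng=None):
    """Monte Carlo estimate of gene-order conservation: repeatedly
    draw k shared orthologs at random and test whether they appear in
    the same relative order in both genomes. The raw distance D is
    the fraction of draws with a different order; D' = -ln(1 - D) is
    the common correction quoted in the abstract."""
    rng = rng or random.Random(42)
    shared = sorted(set(genome_a) & set(genome_b))
    pos_a = {g: i for i, g in enumerate(genome_a)}
    pos_b = {g: i for i, g in enumerate(genome_b)}
    mismatches = 0
    for _ in range(trials):
        pick = rng.sample(shared, k)
        pick.sort(key=lambda g: pos_a[g])          # order in genome A
        order_b = [pos_b[g] for g in pick]
        if order_b != sorted(order_b):             # order differs in B
            mismatches += 1
    D = mismatches / trials
    Dprime = -math.log(1.0 - D) if D < 1.0 else float("inf")
    return D, Dprime

# Identical gene orders give zero distance.
identical = list(range(30))
D, Dprime = gene_order_distance(identical, identical)
```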

  2. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is treated as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98%, while that of the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms.

  3. Spatial interpolation of river channel topography using the shortest temporal distance

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Xian, Cuiling; Chen, Huajin; Grieneisen, Michael L.; Liu, Jiaming; Zhang, Minghua

    2016-11-01

    It is difficult to interpolate river channel topography due to complex anisotropy. As the anisotropy is often caused by river flow, especially the hydrodynamic and transport mechanisms, it is reasonable to incorporate flow velocity into topography interpolator for decreasing the effect of anisotropy. In this study, two new distance metrics defined as the time taken by water flow to travel between two locations are developed, and replace the spatial distance metric or Euclidean distance that is currently used to interpolate topography. One is a shortest temporal distance (STD) metric. The temporal distance (TD) of a path between two nodes is calculated by spatial distance divided by the tangent component of flow velocity along the path, and the STD is searched using the Dijkstra algorithm in all possible paths between two nodes. The other is a modified shortest temporal distance (MSTD) metric in which both the tangent and normal components of flow velocity were combined. They are used to construct the methods for the interpolation of river channel topography. The proposed methods are used to generate the topography of Wuhan Section of Changjiang River and compared with Universal Kriging (UK) and Inverse Distance Weighting (IDW). The results clearly showed that the STD and MSTD based on flow velocity were reliable spatial interpolators. The MSTD, followed by the STD, presents improvement in prediction accuracy relative to both UK and IDW.
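    A sketch of the STD metric on a small hypothetical channel network: the cost of an edge is travel time, its spatial length divided by the magnitude of the flow-velocity component along it, and the shortest temporal distance is found with Dijkstra's algorithm. A single uniform velocity vector is assumed here for simplicity, whereas the paper uses the local flow field. Note how the cross-flow edge becomes unreachable (infinite temporal distance), which is exactly the flow-induced anisotropy the metric is designed to capture.

```python
import heapq
import math

def shortest_temporal_distance(nodes, edges, velocity, source):
    """Dijkstra search where the cost of an edge is travel *time*:
    spatial length divided by the flow-velocity component tangent to
    the edge (the STD metric; the MSTD variant also mixes in the
    normal component)."""
    def travel_time(u, v):
        (x1, y1), (x2, y2) = nodes[u], nodes[v]
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        vx, vy = velocity
        tangent = abs(vx * dx + vy * dy) / length  # |component along edge|
        return length / tangent if tangent > 0 else math.inf
    dist = {u: math.inf for u in nodes}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in edges.get(u, ()):
            nd = d + travel_time(u, v)
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical reach with uniform downstream flow of 2 m/s in +x.
nodes = {"A": (0.0, 0.0), "B": (100.0, 0.0), "C": (100.0, 50.0)}
edges = {"A": ["B"], "B": ["C"], "C": []}
td = shortest_temporal_distance(nodes, edges, velocity=(2.0, 0.0), source="A")
```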

  4. Calculation of rates of exciton dissociation into hot charge-transfer states in model organic photovoltaic interfaces

    NASA Astrophysics Data System (ADS)

    Vázquez, Héctor; Troisi, Alessandro

    2013-11-01

    We investigate the process of exciton dissociation in ordered and disordered model donor/acceptor systems and describe a method to calculate exciton dissociation rates. We consider a one-dimensional system with Frenkel states in the donor material and states where charge transfer has taken place between donor and acceptor. We introduce a Green's function approach to calculate the generation rates of charge-transfer states. For disorder in the Frenkel states we find a clear exponential dependence of charge dissociation rates on exciton-interface distance, with a distance decay constant β that increases linearly with the amount of disorder. Disorder in the parameters that describe (final) charge-transfer states has little effect on the rates. Exciton dissociation invariably leads to partially separated charges. In all cases final states are “hot” charge-transfer states, with electron and hole located far from the interface.
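
    The reported distance dependence is the simple exponential law k(d) = k0·exp(−β·d); a tiny numerical illustration, where k0 and β are illustrative placeholders rather than values from the paper:

```python
import math

def dissociation_rate(distance, k0=1.0, beta=0.5):
    """Exponential distance dependence k(d) = k0 * exp(-beta * d) of the
    charge dissociation rate; k0 and beta are illustrative placeholders."""
    return k0 * math.exp(-beta * distance)
```

    With β increasing linearly with disorder, the rate falls off faster with distance in more disordered systems.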

  5. Development of Methods for Diagnostics of Discharges in Supersonic Flows

    DTIC Science & Technology

    2001-09-01

    probe. As it was carried out in [I.21] the calculations of equilibrium structure of combustion products of hydrocarbonaceous fuel have shown, that at...fiber line for the required distance and the inverse transformation of the digit code to the analogue signal. New methods of plasma diagnostics are...

  6. A simple method of measuring tibial tubercle to trochlear groove distance on MRI: description of a novel and reliable technique.

    PubMed

    Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J

    2016-03-01

    Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethod reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.

  7. How Well Can Modern Density Functionals Predict Internuclear Distances at Transition States?

    PubMed

    Xu, Xuefei; Alecu, I M; Truhlar, Donald G

    2011-06-14

    We introduce a new database called TSG48 containing 48 transition state geometrical data (in particular, internuclear distances in transition state structures) for 16 main group reactions. The 16 reactions are the 12 reactions in the previously published DBH24 database (which includes hydrogen transfer reactions, heavy-atom transfer reactions, nucleophilic substitution reactions, and association reactions plus one unimolecular isomerization) plus four H-transfer reactions in which a hydrogen atom is abstracted by the methyl or hydroperoxyl radical from the two different positions in methanol. The data in TSG48 include data for four reactions that have previously been treated at a very high level in the literature. These data are used to test and validate methods that are affordable for the entire test suite, and the most accurate of these methods is found to be the multilevel BMC-CCSD method. The data that constitute the TSG48 database are therefore taken to consist of these very high level calculations for the four reactions where they are available and BMC-CCSD calculations for the other 12 reactions. The TSG48 database is used to assess the performance of the eight Minnesota density functionals from the M05-M08 families and 26 other high-performance and popular density functionals for locating transition state geometries. For comparison, the MP2 and QCISD wave function methods have also been tested for transition state geometries. The MC3BB and MC3MPW doubly hybrid functionals and the M08-HX and M06-2X hybrid meta-GGAs are found to have the best performance of all of the density functionals tested. M08-HX is the most highly recommended functional due to the excellent performance for all five subsets of TSG48, as well as having a lower cost when compared to doubly hybrid functionals. 
The mean absolute errors in transition state internuclear distances associated with breaking and forming bonds as calculated by the B2PLYP, MP2, and B3LYP methods are respectively about 2, 3, and 5 times larger than those calculated by MC3BB and M08-HX.

  8. [Plasma temperature calculation and coupling mechanism analysis of laser-double wire hybrid welding].

    PubMed

    Zheng, Kai; Li, Huan; Yang, Li-Jun; Gu, Xiao-Yan; Gao, Ying

    2013-04-01

    The plasma radiation of laser-double wire hybrid welding was collected using a fiber spectrometer, the coupling mechanism of the arc with the laser was studied through high-speed photography during the welding process, and the temperature of the hybrid plasma was calculated using the Boltzmann plot method. The results indicated that with the laser applied, the luminance was enhanced, the radiation intensity became stronger, the arc was attracted to the laser point, the arc cross section contracted, and the arc was more stable. The laser power, welding current and arc-arc distance are important factors with a great influence on the electron temperature: increasing the laser power, increasing the welding current, or reducing the arc-arc distance all raise the temperature.
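
    A Boltzmann plot extracts the electron temperature from relative line intensities: ln(Iλ/gA) falls linearly with the upper-level energy E, with slope −1/(k_B·T). A minimal sketch of that fit, assuming hypothetical line data; the paper's spectral lines and constants are not reproduced here:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(lines):
    """Electron temperature (K) from a Boltzmann plot.

    lines: list of (intensity, wavelength, g, A, E_upper_eV) per spectral
    line.  ln(I * lambda / (g * A)) falls linearly with E_upper with slope
    -1 / (k_B * T); a least-squares line fit then gives T.
    """
    xs = [e for (_, _, _, _, e) in lines]
    ys = [math.log(i * lam / (g * a)) for (i, lam, g, a, _) in lines]
    n = len(lines)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / (K_B_EV * slope)
```

    Feeding the fit synthetic intensities generated at a known temperature recovers that temperature, which is a convenient self-check before using measured lines.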

  9. Hip joint center localisation: A biomechanical application to hip arthroplasty population

    PubMed Central

    Bouffard, Vicky; Begon, Mickael; Champagne, Annick; Farhadnia, Payam; Vendittoli, Pascal-André; Lavigne, Martin; Prince, François

    2012-01-01

    AIM: To determine hip joint center (HJC) location in a hip arthroplasty population comparing predictive and functional approaches with radiographic measurements. METHODS: The distance between the HJC and the mid-pelvis was calculated and compared between the three approaches. The localisation error between the predictive and functional approaches was compared using the radiographic measurements as the reference. The operated leg was compared to the non-operated leg. RESULTS: A significant difference was found for the distance between the HJC and the mid-pelvis when comparing the predictive and functional methods. The functional method leads to fewer errors. A statistical difference was found for the localisation error between the predictive and functional methods; the functional method is twice as precise. CONCLUSION: Being more individualized, the functional method improves HJC localisation and should be used in three-dimensional gait analysis. PMID:22919569

  10. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding

    2012-08-15

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling' involved the tripling of the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis receiving up to 21 Gy.

  11. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2010-05-26

    online at http://www.defenselink.mil/releases/release.aspx?releaseid=12600. 4 Department of Defense, Quadrennial Defense Review Report, February 2010...calculated by the “How Far Is It?” online distance calculator available at http://www.indo.com/cgi-bin/dist. 10 Although the Navy states that the CVN based...itself. 14 This is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online distance calculator available

  12. Identifying insects with incomplete DNA barcode libraries, African fruit flies (Diptera: Tephritidae) as a test case.

    PubMed

    Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc

    2012-01-01

    We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance from their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. As expected, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off mark defining whether we can proceed with identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
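
    The ad hoc threshold step can be sketched as an ordinary least-squares fit of the observed relative identification error against candidate thresholds, solved for the threshold whose predicted error equals the target. All data values here are hypothetical, not the tephritid library's measurements:

```python
def ad_hoc_threshold(thresholds, error_rates, target_error=0.05):
    """Fit error = a + b * threshold by least squares and solve for the
    threshold whose predicted relative identification error equals
    target_error (the simple-linear-regression strategy in the abstract)."""
    n = len(thresholds)
    mx = sum(thresholds) / n
    my = sum(error_rates) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(thresholds, error_rates))
         / sum((x - mx) ** 2 for x in thresholds))
    a = my - b * mx
    return (target_error - a) / b
```

    Queries whose distance to their best match exceeds the returned value would then be discarded rather than identified.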

  14. Research on detection method of UAV obstruction based on binocular vision

    NASA Astrophysics Data System (ADS)

    Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao

    2018-04-01

    For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problem of noise and brightness differences in the actual captured images. The distance of the nearest obstacle is calculated using the disparity map generated by the binocular vision system. Then the contour of the obstacle is extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during UAV flight are achieved. Based on a series of tests, the distance measurement error stays within 2.24% over the measuring range from 5 m to 20 m.
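
    The distance step rests on standard rectified binocular geometry, Z = f·B/d. A minimal sketch; the focal length and baseline in the example are illustrative placeholders, not the paper's calibration:

```python
def obstacle_distance_m(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d for a rectified
    binocular pair (f in pixels, baseline B in metres, disparity d in
    pixels).  The nearest obstacle's disparity gives its distance."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px
```

    For instance, with an assumed 700 px focal length and 0.1 m baseline, a 14 px disparity corresponds to a 5 m obstacle.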

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, A

    Purpose: Accuboost treatment planning uses dwell times from a nomogram designed with Monte Carlo calculations for round and D-shaped applicators. A quick dose calculation method has been developed for verification of the HDR Brachytherapy dose as a second check. Methods: Accuboost breast treatment uses several round and D-shaped applicators non-invasively with an Ir-192 source from an HDR Brachytherapy afterloader after the breast is compressed in a mammographic unit for localization. The breast thickness, source activity, the prescription dose and the applicator size are entered into a nomogram spreadsheet which gives the dwell times to be manually entered into the delivery computer. Approximating the HDR Ir-192 as a point source, and knowing the geometry of the round and D-applicators, the distances from the source positions to the midpoint of the central plane are calculated. Using the exposure constant of Ir-192 and human tissue as the medium, the dose at a point is calculated as D(cGy) = 1.254 × A × t/R², where A is the activity in Ci, t is the dwell time in sec and R is the distance in cm. The dose from each dwell position is added to get the total dose. Results: Each fraction is delivered in two compressions: cranio-caudally and medial-laterally. A typical APBI treatment in 10 fractions requires 20 compressions. For a patient treated with D45 applicators and an average of 5.22 cm thickness, this calculation was 1.63% higher than the prescription. For another patient using D53 applicators in the CC direction and 7 cm SDO applicators in the ML direction, this calculation was 1.31% lower than the prescription. Conclusion: This is a simple and quick method to double check the dose on the central plane for Accuboost treatment.
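
    The second check amounts to summing the point-source formula over dwell positions. A minimal sketch using only the quantities named in the abstract; the activity, times, and distances in the example are illustrative numbers:

```python
def dose_cGy(activity_Ci, dwells):
    """Second-check dose at the midpoint of the central plane,
    D = 1.254 * A * t / R^2 summed over all dwell positions (the
    point-source approximation given in the abstract).

    dwells: list of (dwell_time_s, distance_cm) pairs.
    """
    return sum(1.254 * activity_Ci * t / r ** 2 for t, r in dwells)
```

    A single 10 s dwell of a 10 Ci source at 2 cm, for example, contributes 1.254 × 10 × 10 / 4 = 31.35 cGy.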

  16. Calculation of two dimensional vortex/surface interference using panel methods

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1980-01-01

    The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.

  17. Predicting the helix packing of globular proteins by self-correcting distance geometry.

    PubMed

    Mumenthaler, C; Braun, W

    1995-05-01

    A new self-correcting distance geometry method for predicting the three-dimensional structure of small globular proteins was assessed with a test set of 8 helical proteins. With the knowledge of the amino acid sequence and the helical segments, our completely automated method calculated the correct backbone topology of six proteins. The accuracy of the predicted structures ranged from 2.3 Å to 3.1 Å for the helical segments compared to the experimentally determined structures. For two proteins, the predicted constraints were not restrictive enough to yield a conclusive prediction. The method can be applied to all small globular proteins, provided the secondary structure is known from NMR analysis or can be predicted with high reliability.

  18. Invalid-point removal based on epipolar constraint in the structured-light method

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
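
    The invalidation criterion can be sketched directly: compute the epipolar line l = F·x for each camera pixel, then the point-to-line distance of the retrieved projector image coordinate. The 3×3 fundamental matrix and the threshold value below are illustrative assumptions, not the paper's calibration:

```python
def point_epipolar_distance(F, x_cam, x_proj):
    """Distance from a projector image coordinate to the epipolar line
    induced by the camera pixel: l = F @ x_cam, then
    dist = |l . x_proj| / sqrt(l1^2 + l2^2).
    Points are homogeneous (u, v, 1); F is the 3x3 fundamental matrix."""
    l = [sum(F[i][j] * x_cam[j] for j in range(3)) for i in range(3)]
    num = abs(sum(l[i] * x_proj[i] for i in range(3)))
    return num / (l[0] ** 2 + l[1] ** 2) ** 0.5

def is_valid_point(F, x_cam, x_proj, threshold_px=1.5):
    """Invalidation criterion from the abstract: keep a pixel only if its
    retrieved projector coordinate lies near the epipolar line.  The
    1.5 px threshold here is an illustrative placeholder."""
    return point_epipolar_distance(F, x_cam, x_proj) <= threshold_px
```

    With a rectified camera-projector pair (horizontal baseline), epipolar lines are horizontal and the criterion reduces to the row difference between the two coordinates.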

  19. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has a poorly self-adaptive threshold and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and a filter based on Euclidean distance are applied to preprocess the image; secondly, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain threshold values, the average of all calculated thresholds is taken, half of that average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that this new method can effectively suppress noise disturbance, preserve edge information, and improve edge detection accuracy.
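
    The threshold-selection rule can be sketched as below, with a plain-Python Otsu on 8-bit gradient magnitudes. The block partitioning and data are illustrative; the paper's exact regioning scheme is not specified here:

```python
def otsu_threshold(values, bins=256):
    """Otsu's threshold for a flat list of 8-bit gradient magnitudes:
    pick the level maximizing between-class variance."""
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue  # one class empty: variance undefined
        sum0 += t * hist[t]
        m0 = sum0 / w0                          # mean of class <= t
        m1 = (sum_all - sum0) / (total - w0)    # mean of class > t
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def canny_thresholds(blocks):
    """High/low thresholds per the abstract: Otsu on each local block of
    gradient magnitudes, average the block thresholds, take half of the
    average as the high threshold and half of the high as the low."""
    avg = sum(otsu_threshold(b) for b in blocks) / len(blocks)
    high = avg / 2.0
    return high, high / 2.0
```

    On a bimodal block the Otsu threshold lands between the two modes, and the derived high/low pair then scales with it.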

  20. Testing electronic structure methods for describing intermolecular H...H interactions in supramolecular chemistry.

    PubMed

    Casadesús, Ricard; Moreno, Miquel; González-Lafont, Angels; Lluch, José M; Repasky, Matthew P

    2004-01-15

    In this article a wide variety of computational approaches (molecular mechanics force fields, semiempirical formalisms, and hybrid methods, namely ONIOM calculations) have been used to calculate the energy and geometry of the supramolecular system 2-(2'-hydroxyphenyl)-4-methyloxazole (HPMO) encapsulated in beta-cyclodextrin (beta-CD). The main objective of the present study has been to examine the performance of these computational methods when describing the short-range H...H intermolecular interactions between guest (HPMO) and host (beta-CD) molecules. The analyzed molecular mechanics methods do not produce unphysical short H...H contacts, but it is obvious that their applicability to the study of supramolecular systems is rather limited. For the semiempirical methods, MNDO is found to generate more reliable geometries than AM1, PM3 and the two recently developed schemes PDDG/MNDO and PDDG/PM3. MNDO results only give one slightly short H...H distance, whereas the NDDO formalisms with modifications of the Core Repulsion Function (CRF) via Gaussians exhibit a large number of short to very short and unphysical H...H intermolecular distances. In contrast, the PM5 method, which is the successor to PM3, gives very promising results. Our ONIOM calculations indicate that the unphysical optimized geometries from PM3 are retained when this semiempirical method is used as the low level layer in a QM:QM formulation. On the other hand, ab initio methods involving good enough basis sets, at least for the high level layer in a hybrid ONIOM calculation, behave well, but they may be too expensive in practice for most supramolecular chemistry applications. Finally, the performance of the evaluated computational methods has also been tested by evaluating the energetic difference between the two most stable conformations of the host(beta-CD)-guest(HPMO) system. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 25: 99-105, 2004

  1. SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, B; Liu, S; Zhang, T

    2016-06-15

    Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning systems (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all the edges of the aperture, a sum weighted by the distance from the calculation point to the aperture edge was used in the model. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose is significantly reduced.

  2. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
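
    The abstract does not spell out its IPM geometry; under the common flat-floor, zero-pitch pinhole assumption, a floor pixel's forward distance follows as Z = f·h/(y − cy). A hedged sketch in which every parameter value is illustrative, not from the paper:

```python
def floor_distance_m(pixel_row, cy, focal_px, cam_height_m):
    """Inverse-perspective-style range to a floor point for a
    forward-looking camera over a flat floor (zero pitch/roll assumed):
    a floor pixel pixel_row rows below the principal point cy lies at
    forward distance Z = f * h / (pixel_row - cy)."""
    if pixel_row <= cy:
        raise ValueError("pixel at or above the horizon is not on the floor")
    return focal_px * cam_height_m / (pixel_row - cy)
```

    The shortest robot-obstacle distance is then the minimum of this quantity over the obstacle's floor-contact pixels.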

  3. Skyshine radiation from a pressurized water reactor containment dome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, W.H.

    1986-06-01

    The radiation dose rates resulting from airborne activity inside a post-accident pressurized water reactor containment are calculated by a combined discrete ordinates/Monte Carlo method. The calculated total dose rates and the skyshine component are presented as a function of distance from the containment at three different elevations for various gamma-ray source energies. The one-dimensional discrete ordinates code ANISN is used to approximate the skyshine dose rates from the hemispherical dome, and the results compare favorably with more rigorous results calculated by a three-dimensional Monte Carlo code.

  4. Simulating the x-ray image contrast to setup techniques with desired flaw detectability

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2015-04-01

    The paper provides simulation data of previous work by the author in developing a model for estimating detectability of crack-like flaws in radiography. The methodology is developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E 2737 resolution requirements to the model are also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  5. MaRaCluster: A Fragment Rarity Metric for Clustering Fragment Spectra in Shotgun Proteomics.

    PubMed

    The, Matthew; Käll, Lukas

    2016-03-04

    Shotgun proteomics experiments generate large amounts of fragment spectra as primary data, normally with high redundancy between and within experiments. Here, we have devised a clustering technique to identify fragment spectra stemming from the same species of peptide. This is a powerful alternative method to traditional search engines for analyzing spectra, specifically useful for larger scale mass spectrometry studies. As an aid in this process, we propose a distance calculation relying on the rarity of experimental fragment peaks, following the intuition that peaks shared by only a few spectra offer more evidence than peaks shared by a large number of spectra. We used this distance calculation and a complete-linkage scheme to cluster data from a recent large-scale mass spectrometry-based study. The clusterings produced by our method have up to 40% more identified peptides for their consensus spectra compared to those produced by the previous state-of-the-art method. We see that our method would advance the construction of spectral libraries as well as serve as a tool for mining large sets of fragment spectra. The source code and Ubuntu binary packages are available at https://github.com/statisticalbiotechnology/maracluster (under an Apache 2.0 license).
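
    The rarity intuition can be illustrated with a toy distance in which each shared peak contributes −log(df/N) evidence, so peaks seen in few spectra count for more. Note this is an illustrative stand-in: the published MaRaCluster metric is a calibrated p-value computation, not this exact formula:

```python
import math

def rarity_distance(spec_a, spec_b, peak_doc_freq, n_spectra):
    """Toy rarity-weighted spectral distance.

    spec_a, spec_b: iterables of discretized peak bins for two spectra.
    peak_doc_freq: dict bin -> number of spectra containing that bin.
    Shared peaks contribute evidence -log(df / N); more shared rare
    peaks means more evidence and hence a smaller distance.
    """
    evidence = sum(-math.log(peak_doc_freq[p] / n_spectra)
                   for p in set(spec_a) & set(spec_b))
    return 1.0 / (1.0 + evidence)  # map evidence to a (0, 1] distance
```

    Two spectra sharing a peak present in only 2 of 1000 spectra end up much closer than two sharing a peak present in 900 of them.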

  6. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

    The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At large distances from the source, noticeable differences are seen in these parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.

  7. Scale-Up of Lubricant Mixing Process by Using V-Type Blender Based on Discrete Element Method.

    PubMed

    Horibe, Masashi; Sonoda, Ryoichi; Watano, Satoru

    2018-01-01

A method for scale-up of a lubricant mixing process in a V-type blender was proposed. Magnesium stearate was used as the lubricant, and the lubricant mixing experiment was conducted using three scales of V-type blenders (1.45, 21 and 130 L) at the same fill level and Froude (Fr) number. However, the properties of the lubricated mixtures and tablets could not be matched across scales using the mixing time or the total revolution number. To find the optimum scale-up factor, discrete element method (DEM) simulations of mixing in the three scales of V-type blender were conducted, and the total travel distance of particles at the different scales was calculated. The properties of the lubricated mixture and tablets obtained from the scale-up experiment were well correlated with the mixing time determined by the total travel distance. It was found that a scale-up simulation based on the travel distance of particles is valid for lubricant mixing scale-up processes.

  8. Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools

    NASA Astrophysics Data System (ADS)

    Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.

    2017-12-01

The spatial variability evaluation of the water table of an aquifer provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is crucial for the method's performance. This work compares three different criteria for assessing which theoretical variogram best fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion and Cressie's Indicator. Moreover, various distance metrics, such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis distances, are applied to calculate the distance between the observation and prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the Power-law variogram model and the Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at the local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two different approaches, comparing their advantages and drawbacks.
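The distance metrics compared in this study all have simple closed forms; a minimal sketch (the point coordinates are hypothetical projected map coordinates, in meters):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def minkowski(p, q, r=3.0):
    # generalizes Euclidean (r=2) and Manhattan (r=1)
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1.0 / r)

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def canberra(p, q):
    # each coordinate difference is weighted by the coordinate magnitudes
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(p, q) if abs(a) + abs(b) > 0)

def bray_curtis(p, q):
    return sum(abs(a - b) for a, b in zip(p, q)) / sum(abs(a + b) for a, b in zip(p, q))

# hypothetical observation and prediction points
obs, pred = (368000.0, 3912000.0), (371000.0, 3916000.0)
d_euc, d_man = euclidean(obs, pred), manhattan(obs, pred)
```

Swapping the metric changes the lags fed into the experimental variogram, which is how the choice propagates into the Kriging estimator.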

  9. The structure of clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Fox, David Charles

    When infalling gas is accreted onto a cluster of galaxies, its kinetic energy is converted to thermal energy in a shock, heating the ions. Using a self-similar spherical model, we calculate the collisional heating of the electrons by the ions, and predict the electron and ion temperature profiles. While there are significant differences between the two, they occur at radii larger than currently observable, and too large to explain observed X-ray temperature declines in clusters. Numerical simulations by Navarro, Frenk, & White (1996) predict a universal dark matter density profile. We calculate the expected number of multiply-imaged background galaxies in the Hubble Deep Field due to foreground groups and clusters with this profile. Such groups are up to 1000 times less efficient at lensing than the standard singular isothermal spheres. However, with either profile, the expected number of galaxies lensed by groups in the Hubble Deep Field is at most one, consistent with the lack of clearly identified group lenses. X-ray and Sunyaev-Zel'dovich (SZ) effect observations can be combined to determine the distance to clusters of galaxies, provided the clusters are spherical. When applied to an aspherical cluster, this method gives an incorrect distance. We demonstrate a method for inferring the three-dimensional shape of a cluster and its correct distance from X-ray, SZ effect, and weak gravitational lensing observations, under the assumption of hydrostatic equilibrium. We apply this method to simple, analytic models of clusters, and to a numerically simulated cluster. Using artificial observations based on current X-ray and SZ effect instruments, we recover the true distance without detectable bias and with uncertainties of 4 percent.

  10. Effects of burstiness on the air transportation system

    NASA Astrophysics Data System (ADS)

    Ito, Hidetaka; Nishinari, Katsuhiro

    2017-01-01

    The effects of burstiness in complex networks have received considerable attention. In particular, the effects on temporal distance and delays in the air transportation system are significant owing to their huge impact on our society. Therefore, in this paper, the temporal distance of empirical U.S. flight schedule data is compared with that of regularized data without burstiness to analyze the effects of burstiness. The temporal distance is calculated by a graph analysis method considering flight delays, missed connections, flight cancellations, and congestion. In addition, we propose two temporal distance indexes based on passengers' behavior to quantify the effects. As a result, we find that burstiness reduces both the scheduled and the actual temporal distances for business travelers, while delays caused by missed connections and congestion are increased. We also find that the decrease of the scheduled temporal distance by burstiness is offset by an increase of the delays for leisure passengers. Moreover, we discover that the positive effect of burstiness is lost when flight schedules are overcrowded.

  11. Effects of burstiness on the air transportation system.

    PubMed

    Ito, Hidetaka; Nishinari, Katsuhiro

    2017-01-01

    The effects of burstiness in complex networks have received considerable attention. In particular, the effects on temporal distance and delays in the air transportation system are significant owing to their huge impact on our society. Therefore, in this paper, the temporal distance of empirical U.S. flight schedule data is compared with that of regularized data without burstiness to analyze the effects of burstiness. The temporal distance is calculated by a graph analysis method considering flight delays, missed connections, flight cancellations, and congestion. In addition, we propose two temporal distance indexes based on passengers' behavior to quantify the effects. As a result, we find that burstiness reduces both the scheduled and the actual temporal distances for business travelers, while delays caused by missed connections and congestion are increased. We also find that the decrease of the scheduled temporal distance by burstiness is offset by an increase of the delays for leisure passengers. Moreover, we discover that the positive effect of burstiness is lost when flight schedules are overcrowded.

  12. Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data

    NASA Astrophysics Data System (ADS)

    Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo

    2018-04-01

To effectively assess scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, this paper introduces a distance measure based on the similarity between a sample and the surrounding pixels. Moreover, to account for the influence of the distribution and of texture modeling, a K distance measure is derived from the Wishart distance measure. Specifically, the average of the pixels in a local window replaces the class-center coherency or covariance matrix, and the Wishart and K distance measures are calculated between this average matrix and the individual pixels. The ratio of the standard deviation to the mean is then computed for the Wishart and K distance measures, and these two features are defined and applied to reflect the complexity of the scene. The proposed heterogeneity measure is obtained by integrating the two features using the Pauli basis. Experiments conducted on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for detecting scene heterogeneity.
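A sketch of the core computation, assuming the standard Wishart distance d(C; Σ) = ln|Σ| + tr(Σ⁻¹C) and using real 2×2 matrices as stand-ins for the complex Hermitian coherency matrices (a homogeneous window gives a coefficient of variation of zero):

```python
import math

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def trace_prod(a, b):
    # tr(a @ b) for 2x2 matrices
    return sum(a[i][k] * b[k][i] for i in range(2) for k in range(2))

def wishart_distance(C, sigma):
    # d(C; Sigma) = ln|Sigma| + tr(Sigma^-1 C)
    return math.log(det2(sigma)) + trace_prod(inv2(sigma), C)

def mean_matrix(ms):
    return [[sum(m[i][j] for m in ms) / len(ms) for j in range(2)] for i in range(2)]

def heterogeneity(ms):
    # the local-window average replaces the class-center matrix, and the
    # coefficient of variation of the distances is the heterogeneity feature
    sigma = mean_matrix(ms)
    d = [wishart_distance(C, sigma) for C in ms]
    mu = sum(d) / len(d)
    var = sum((x - mu) ** 2 for x in d) / len(d)
    return math.sqrt(var) / mu

window_homog = [[[2.0, 0.0], [0.0, 2.0]]] * 4
window_mixed = [[[1.0, 0.0], [0.0, 1.0]], [[5.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 5.0]], [[9.0, 0.0], [0.0, 9.0]]]
```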

  13. Analysis and machine mapping of the distribution of band recoveries

    USGS Publications Warehouse

    Cowardin, L.M.

    1977-01-01

A method of calculating distance and bearing from banding site to recovery location based on the solution of a spherical triangle is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries and its possible use is illustrated as a measure of the degree of migrational homing.
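The spherical-triangle solution for distance and bearing reduces to the familiar haversine and initial-bearing formulas; a sketch (the Earth radius is the usual mean value, and whether it matches the paper's constants is an assumption):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius; an assumed modelling constant

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees clockwise
    from north) from a banding site to a recovery location."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # haversine form of the spherical triangle solution
    h = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R_EARTH_KM * math.asin(math.sqrt(h))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return dist, bearing
```

For example, a point 90° of longitude due east along the equator comes out at a quarter of the Earth's circumference with a bearing of 90°.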

  14. How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.

Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure.
Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.

  15. Influence of the effective lens position, as predicted by axial length and keratometry, on the near add power of multifocal intraocular lenses.

    PubMed

    Savini, Giacomo; Hoffer, Kenneth J; Lombardo, Marco; Serrao, Sebastiano; Schiano-Lomoriello, Domenico; Ducoli, Pietro

    2016-01-01

    To calculate the near focal distance of different multifocal intraocular lenses (IOLs) as a function of the 2 parameters that are measured before cataract surgery; that is, axial length (AL) and refractive corneal power (keratometry [K]). GB Bietti Foundation IRCCS, Rome, Italy. Noninterventional theoretical study. The IOL power for emmetropia was first calculated in an eye model with the AL ranging from 20 to 30 mm and K from 38 to 48 diopters (D). Then, the predicted myopic refraction for any given IOL add power (from +1.5 to +4.0 D) was calculated, and from this value the near focal distance was obtained. Calculations were also performed for the average eye (K = 43.81 D; AL = 23.65 mm). The near focal distance increased with increasing values of K and AL for each near power add. The near focal distance ranged between 53 cm and 72 cm (21 inches and 28 inches) for a multifocal IOL with +2.50 D, between 44 cm and 60 cm (17 inches and 24 inches) for a multifocal IOL with +3.00 D add, and between 33 cm and 44 cm (13 inches and 18 inches) for a multifocal IOL with +4.00 D add. In the average eye, the near focal distance ranges between 36 cm (near add power = 4.00 D) and 99 cm (near add power = 1.5 D). Longer eyes with steeper corneas showed the longest near focal distance and could experience more difficulties in focusing near objects after surgery. The opposite was true for short hyperopic eyes. Dr. Hoffer receives licensing fees for the commercial use of the registered trademark Hoffer from all biometry manufacturers using the Hoffer Q formula to ensure that it is programmed correctly and book royalties from Slack, Inc., for the textbook IOL Power. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
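A heavily simplified sketch of the underlying vergence arithmetic: near distance is the reciprocal of the spectacle-plane add, and the IOL-plane add must first be converted down to its spectacle-plane equivalent. The conversion factor of 0.72 below is an assumed, illustrative value; in reality it varies with K and AL, which is exactly the dependence this study quantifies.

```python
def near_focal_distance_cm(iol_add_diopters, iol_to_spectacle_factor=0.72):
    # The add measured at the IOL plane exceeds its spectacle-plane
    # equivalent; 0.72 is an assumed, illustrative conversion factor,
    # not a value from the study.
    spectacle_add = iol_add_diopters * iol_to_spectacle_factor
    return 100.0 / spectacle_add  # thin-lens vergence: distance (cm) = 100 / power (D)

d_low_add = near_focal_distance_cm(1.5)   # low add -> longer reading distance
d_high_add = near_focal_distance_cm(4.0)  # high add -> shorter reading distance
```

With these assumed numbers a +4.00 D add lands near the mid-30s of centimeters, in the same range as the average-eye figure quoted above.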

  16. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS is built from the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The accuracy of the measurements produced by the three distance formulas is evaluated using the mean absolute percentage error. In the training phase, with several parameters such as the sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
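The three normalized distance formulas and the mean absolute percentage error have simple definitions; a minimal sketch (how SECoS wires these into its activation update is not shown here):

```python
def norm_hamming(p, q):
    # fraction of positions that differ (binary/categorical inputs)
    return sum(a != b for a, b in zip(p, q)) / len(p)

def norm_manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def norm_euclidean(p, q):
    return (sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)) ** 0.5

def mape(actual, predicted):
    # mean absolute percentage error, used here to score each distance variant
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, predicted)) / len(actual)
```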

  17. Time delay and distance measurement

    NASA Technical Reports Server (NTRS)

    Abshire, James B. (Inventor); Sun, Xiaoli (Inventor)

    2011-01-01

    A method for measuring time delay and distance may include providing an electromagnetic radiation carrier frequency and modulating one or more of amplitude, phase, frequency, polarization, and pointing angle of the carrier frequency with a return to zero (RZ) pseudo random noise (PN) code. The RZ PN code may have a constant bit period and a pulse duration that is less than the bit period. A receiver may detect the electromagnetic radiation and calculate the scattering profile versus time (or range) by computing a cross correlation function between the recorded received signal and a three-state RZ PN code kernel in the receiver. The method also may be used for pulse delay time (i.e., PPM) communications.
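A simplified numerical sketch of the ranging principle: the receiver cross-correlates the recorded signal with a code kernel and reads the delay off the correlation peak. A plain ±1 kernel and a seeded random 0/1 code stand in here for the three-state RZ PN kernel described in the patent, and the echo is noise-free.

```python
import random

def circular_cross_correlation(received, kernel):
    n = len(received)
    return [sum(received[(i + lag) % n] * kernel[i] for i in range(n))
            for lag in range(n)]

random.seed(1)
pn = [random.choice([0, 1]) for _ in range(127)]  # stand-in pseudo-random code
kernel = [1 if bit else -1 for bit in pn]         # zero-mean correlation kernel
true_delay = 42
received = pn[-true_delay:] + pn[:-true_delay]    # noiselessly delayed echo
corr = circular_cross_correlation(received, kernel)
estimated_delay = max(range(len(corr)), key=corr.__getitem__)
```

The peak of `corr` sits at the code delay, which maps directly to range via the speed of light.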

  18. Measuring Thermal Diffusivity Of A High-Tc Superconductor

    NASA Technical Reports Server (NTRS)

    Powers, Charles E.; Oh, Gloria; Leidecker, Henning

    1992-01-01

    Technique for measuring thermal diffusivity of superconductor of high critical temperature based on Angstrom's temperature-wave method. Peltier junction generates temperature oscillations, which propagate with attenuation up specimen. Thermal diffusivity of specimen calculated from distance between thermocouples and amplitudes and phases of oscillatory components of thermocouple readings.
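A sketch using the standard Ångström-method relation α = ωd² / (2·Δφ·ln(A₁/A₂)), where d is the thermocouple separation, A₁/A₂ the amplitude ratio, and Δφ the phase lag; the numbers are illustrative, not from the technique description:

```python
import math

def angstrom_diffusivity(d, period_s, amp_near, amp_far, phase_lag_rad):
    # alpha = omega * d^2 / (2 * delta_phi * ln(A_near / A_far))
    omega = 2.0 * math.pi / period_s
    return omega * d ** 2 / (2.0 * phase_lag_rad * math.log(amp_near / amp_far))

# illustrative numbers only: thermocouples 5 mm apart, 60 s oscillation
# period, amplitude halves between them, 0.8 rad phase lag
alpha = angstrom_diffusivity(5e-3, 60.0, 1.0, 0.5, 0.8)  # m^2/s
```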

  19. Simulation of Reversible Protein–Protein Binding and Calculation of Binding Free Energies Using Perturbed Distance Restraints

    PubMed Central

    2017-01-01

Virtually all biological processes depend on the interaction between proteins at some point. The correct prediction of biomolecular binding free energies has many interesting applications in both basic and applied pharmaceutical research. While recent advances in the field of molecular dynamics (MD) simulations have proven the feasibility of the calculation of protein–protein binding free energies, the large conformational freedom of proteins and complex free energy landscapes of binding processes make such calculations a difficult task. Moreover, convergence and reversibility of resulting free-energy values remain poorly described. In this work, an easy-to-use, yet robust approach for the calculation of standard-state protein–protein binding free energies using perturbed distance restraints is described. In the binding process the conformations of the proteins were restrained, as suggested earlier. Two approaches to avoid end-state problems upon release of the conformational restraints were compared. The method was evaluated by practical application to a small model complex of ubiquitin and the very flexible ubiquitin-binding domain of human DNA polymerase ι (UBM2). All computed free energy differences were closely monitored for convergence, and the calculated binding free energies had a mean unsigned deviation of only 1.4 or 2.5 kJ/mol from experimental values. Statistical error estimates were in the order of thermal noise. We conclude that the presented method has promising potential for broad applicability to quantitatively describe protein–protein and various other kinds of complex formation. PMID:28898077

  20. The importance of including local correlation times in the calculation of inter-proton distances from NMR measurements: ignoring local correlation times leads to significant errors in the conformational analysis of the Glc alpha1-2Glc alpha linkage by NMR spectroscopy.

    PubMed

    Mackeen, Mukram; Almond, Andrew; Cumpstey, Ian; Enis, Seth C; Kupce, Eriks; Butters, Terry D; Fairbanks, Antony J; Dwek, Raymond A; Wormald, Mark R

    2006-06-07

    The experimental determination of oligosaccharide conformations has traditionally used cross-linkage 1H-1H NOE/ROEs. As relatively few NOEs are observed, to provide sufficient conformational constraints this method relies on: accurate quantification of NOE intensities (positive constraints); analysis of absent NOEs (negative constraints); and hence calculation of inter-proton distances using the two-spin approximation. We have compared the results obtained by using 1H 2D NOESY, ROESY and T-ROESY experiments at 500 and 700 MHz to determine the conformation of the terminal Glc alpha1-2Glc alpha linkage in a dodecasaccharide and a related tetrasaccharide. For the tetrasaccharide, the NOESY and ROESY spectra produced the same qualitative pattern of linkage cross-peaks but the quantitative pattern, the relative peak intensities, was different. For the dodecasaccharide, the NOESY and ROESY spectra at 500 MHz produced a different qualitative pattern of linkage cross-peaks, with fewer peaks in the NOESY spectrum. At 700 MHz, the NOESY and ROESY spectra of the dodecasaccharide produced the same qualitative pattern of peaks, but again the relative peak intensities were different. These differences are due to very significant differences in the local correlation times for different proton pairs across this glycosidic linkage. The local correlation time for each proton pair was measured using the ratio of the NOESY and T-ROESY cross-relaxation rates, leaving the NOESY and ROESY as independent data sets for calculating the inter-proton distances. The inter-proton distances calculated including the effects of differences in local correlation times give much more consistent results.

  1. Research on spatial economic structure for different economic sectors from a perspective of a complex network

    NASA Astrophysics Data System (ADS)

    Hu, Sen; Yang, Hualei; Cai, Boliang; Yang, Chunxia

    2013-09-01

The economy is a complex system, and the complex network is a powerful tool for studying its complexity. Here we calculate economic distance matrices based on the annual GDP of nine economic sectors from 1995-2010 in 31 Chinese provinces and autonomous regions, and then build several spatial economic networks through the threshold method and the Minimal Spanning Tree method. After analyzing the structure of the networks and the influence of geographic distance, some conclusions are drawn. First, the connectivity distribution of a spatial economic network does not follow a power law. Second, according to the network structure, the nine economic sectors can be divided into two groups, and there is a significant discrepancy in network structure between these two groups. Moreover, geographic distance plays an important role in the structure of a spatial economic network: network parameters change with the influence of geographic distance. Finally, 2000 km is a critical value for geographic distance: for real estate and finance, the Spearman's rho for l<2000 is larger than that for l>2000, while the opposite holds for the other economic sectors.

  2. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.

  3. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2010-04-01

Aircraft Carrier Homeporting In Mayport,” available online at http://www.defenselink.mil/releases/release.aspx?releaseid= 12600. 4 Department of Defense...miles is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online distance calculator available at http...Far Is It?” online distance calculator available at http://www.indo.com/cgi-bin/dist. 13 Department of the Navy, Report on Strategic Plan for

  4. Automated Bone Segmentation and Surface Evaluation of a Small Animal Model of Post-Traumatic Osteoarthritis.

    PubMed

    Ramme, Austin J; Voss, Kevin; Lesporis, Jurinus; Lendhey, Matin S; Coughlin, Thomas R; Strauss, Eric J; Kennedy, Oran D

    2017-05-01

MicroCT imaging allows for noninvasive microstructural evaluation of mineralized bone tissue, and is essential in studies of small animal models of bone and joint diseases. Automatic segmentation and evaluation of articular surfaces are challenging. Here, we present a novel method to create knee joint surface models for the evaluation of PTOA-related joint changes in the rat, using an atlas-based diffeomorphic registration to automatically isolate bone from surrounding tissues. As validation, two independent raters manually segmented datasets, and the resulting segmentations were compared to our novel automatic segmentation process. Data were evaluated using label map volumes, overlap metrics, Euclidean distance mapping, and a time trial. Intraclass correlation coefficients were calculated to compare the methods, and were greater than 0.90. Total overlap, union overlap, and mean overlap were calculated to compare the automatic and manual methods and ranged from 0.85 to 0.99. A Euclidean distance comparison was also performed and showed no measurable difference between manual and automatic segmentations. Furthermore, our new method was 18 times faster than manual segmentation. Overall, this study describes a reliable, accurate, and automatic segmentation method for mineralized knee structures from microCT images, and will allow for efficient assessment of bony changes in small animal models of PTOA.
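Overlap metrics of this kind are conventionally the Jaccard ("union overlap") and Dice coefficients; a sketch on sets of voxel indices (the exact definitions used in the study are not spelled out in the abstract, so this is the standard form):

```python
def overlap_metrics(seg_a, seg_b):
    """seg_a, seg_b: sets of voxel indices labelled as bone."""
    inter = len(seg_a & seg_b)
    jaccard = inter / len(seg_a | seg_b)            # "union overlap"
    dice = 2.0 * inter / (len(seg_a) + len(seg_b))  # Dice coefficient
    return jaccard, dice

# two hypothetical 10x10 segmentations offset by one voxel column
manual = {(x, y) for x in range(10) for y in range(10)}
auto = {(x, y) for x in range(1, 11) for y in range(10)}
jac, dice = overlap_metrics(manual, auto)
```

Identical segmentations score 1.0 on both metrics; the one-column offset above already drops Jaccard noticeably more than Dice.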

  5. Critical current density measurement of striated multifilament-coated conductors using a scanning Hall probe microscope

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Fen; Kochat, Mehdi; Majkic, Goran; Selvamanickam, Venkat

    2016-08-01

In this paper the authors succeeded in measuring the critical current density (Jc) of multifilament-coated conductors (CCs) with filaments as narrow as 0.25 mm using the scanning Hall probe microscope (SHPM) technique. A new iterative method of data analysis is developed that makes the calculation of Jc possible for thin filaments, even without a very small scan distance. The authors also discuss in detail the advantages and limitations of the iterative method using both simulation and experimental results. The results of the new method correspond well with the traditional fast Fourier transform method where the latter is still applicable. However, the new method is applicable to filamentized CCs under a much wider range of measurement conditions, such as thin filaments and a large scan distance, thus overcoming the barrier to applying the SHPM technique to Jc measurement of long filamentized CCs with narrow filaments.

  6. Distance dependence in photo-induced intramolecular electron transfer

    NASA Astrophysics Data System (ADS)

    Larsson, Sven; Volosov, Andrey

    1986-09-01

The distance dependence of the rate of photo-induced electron transfer reactions is studied. The quantum mechanical method CNDO/S is applied to a series of molecules recently investigated experimentally by Hush et al. The calculations show a large interaction through the saturated bridge which connects the two chromophores. The electronic matrix element HAB decreases by a factor of 10 over about 4 Å. There is also a decrease of the rate due to lower exothermicity for the longer molecule. The results are in fair agreement with the experimental results.
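If a single-exponential attenuation HAB = H₀·exp(−βR) is assumed (an assumption for illustration; the abstract only quotes the factor-of-10 drop), the quoted falloff corresponds to β = ln(10)/4 ≈ 0.58 Å⁻¹, and since the non-adiabatic rate scales as |HAB|², the rate drops a factor of 100 over the same 4 Å:

```python
import math

beta = math.log(10.0) / 4.0  # ~0.58 per angstrom for H_AB itself

def hab_ratio(delta_r_angstrom):
    # assumed single-exponential attenuation H_AB(R) = H_AB(0) * exp(-beta * R)
    return math.exp(-beta * delta_r_angstrom)

rate_ratio_4A = hab_ratio(4.0) ** 2  # golden-rule rate scales as |H_AB|^2
```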

  7. Quantitative Structure Retention Relationships of Polychlorinated Dibenzodioxins and Dibenzofurans

    DTIC Science & Technology

    1991-08-01

be a projection onto the X-Y plane. The algorithm for this calculation can be found in Stouch and Jurs (22), but was further refined by Rohrbaugh and...through-space distances. WPSA2 (c) Weighted positive charged surface area. MOMH2 (c) Second major moment of inertia with hydrogens attached. CSTR 3 (d) Sum...of the models. The robust regression analysis method calculates a regression model using a least median squares algorithm which is not as susceptible

  8. Use of a 3-D Dispersion Model for Calculation of Distribution of Horse Allergen and Odor around Horse Facilities

    PubMed Central

    Haeger-Eugensson, Marie; Ferm, Martin; Elfman, Lena

    2014-01-01

The interest in equestrian sports has increased substantially during the last decades, resulting in an increased number of horse facilities around urban areas. In Sweden, new guidelines for safe distance have been decided based on the size of the horse facility (e.g., number of horses) and local conditions, such as topography and meteorology. There is therefore an increasing need to estimate the dispersion of horse allergens for use, for example, in the planning processes for new residential areas in the vicinity of horse facilities. The aim of this study was to develop a method for calculating short- and long-term emissions and dispersion of horse allergen and odor around horse facilities. First, a method was developed to estimate horse allergen and odor emissions at hourly resolution based on field measurements. Second, these emission factors were used to calculate concentrations of horse allergen and odor by 3-D dispersion modeling. Results from these calculations showed that horse allergens spread up to about 200 m, after which concentration levels were very low (<2 U/m3). Approximately 10% of a study group detected the smell of manure at 60 m, while the majority (80%-90%) detected the smell at 60 m or a shorter distance from the manure heap. Modeling enabled horse allergen exposure concentrations to be determined with good time resolution. PMID:24690946

  9. Energy levels of a hydrogenic impurity in a parabolic quantum well with a magnetic field

    NASA Astrophysics Data System (ADS)

    Zang, J. X.; Rustgi, M. L.

    1993-07-01

In this paper, we present a calculation of the energy levels of a hydrogenic impurity (or a hydrogenic atom) at the bottom of a one-dimensional parabolic quantum well with a magnetic field normal to the plane of the well. The finite-basis-set variational method is used to calculate the ground state and the excited states with principal quantum number less than or equal to 3. The limits of small and large radial distance are considered in choosing a set of proper basis functions. The results in the limit of parabolic parameter α=0 are compared with the data of Rösner et al. [J. Phys. B 17, 29 (1984)]. The comparison shows that the present calculation is quite accurate. It is found that the energy levels increase with increasing parabolic parameter α and with increasing normalized magnetic-field strength γ, except for levels with magnetic quantum number m<0 at small γ.

  10. Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image

    NASA Astrophysics Data System (ADS)

    Demir, N.; Kaynarca, M.; Oy, S.

    2016-06-01

    Coastlines are important features for water resources, sea products, energy resources etc. Coastlines change dynamically, thus automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 x 10 m spatial resolution, and covers a 57 km2 area in the south-east of Puerto Rico. Radiometric calibration is applied to reduce atmospheric and orbit error, and a speckle filter is used to reduce noise. The image is then terrain-corrected using the SRTM digital surface model. Classification of a SAR image is a challenging task since SAR and optical sensors have very different properties; even between different bands of a SAR sensor the images look very different, so classification of SAR images is difficult with traditional unsupervised methods. In this study, a fuzzy approach has been applied to distinguish the coastal pixels from the land surface pixels. The standard deviation and the mean and median values are calculated for use as parameters in the fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because the large numbers of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land surface membership. The result is evaluated using airborne LIDAR data, only for the areas where the LIDAR dataset is available, and secondly against a manually digitized coastline. The laser points below 0.5 m are classified as ocean points. The 3D alpha-shapes algorithm is used to detect the coastline points from the LIDAR data.
    Minimum distances are then calculated between the LIDAR coastline points and the extracted coastline. The statistics of the distances are as follows: the mean is 5.82 m, the standard deviation 5.83 m and the median value 4.08 m. Secondly, the extracted coastline is also evaluated against lines manually created on the SAR image. Both lines are converted to dense points at 1 m intervals, and the closest distances are calculated between the points of the extracted coastline and the manually created coastline. For these distances the mean is 5.23 m, the standard deviation 4.52 m and the median value 4.13 m. The evaluation values are within the accuracy of the SAR data used for both quality assessment approaches.
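    The membership function named above can be sketched concretely. One common definition of the MS Large operator (as implemented, e.g., in GIS fuzzy-overlay tooling; I am assuming that form here) assigns membership 0 below a times the mean and rises toward 1 above it, controlled by b times the standard deviation. Using the abstract's values (mean 23, standard deviation 12, a = 0.58, b = 0.05 on pixel values scaled by 1000):

```python
def ms_large(x, mean, std, a, b):
    """Mean-standard-deviation (MS) Large fuzzy membership (assumed form):
    mu = 1 - b*std / (x - a*mean + b*std)  for x > a*mean, else 0.
    Membership approaches 1 for values well above a*mean."""
    threshold = a * mean
    if x <= threshold:
        return 0.0
    return 1.0 - (b * std) / (x - threshold + b * std)

# Parameters from the abstract (pixel values multiplied by 1000):
mean, std, a, b = 23.0, 12.0, 0.58, 0.05
memberships = [ms_large(v, mean, std, a, b) for v in (10.0, 23.0, 60.0)]
```

    Bright land pixels far above the scaled threshold of a*mean = 13.34 get membership near 1, while dark ocean pixels below it get 0, which is the separation the study relies on.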

  11. Automatic 3D liver location and segmentation via convolutional neural network and graph cut.

    PubMed

    Lu, Fang; Wu, Fa; Hu, Peijun; Peng, Zhiyi; Kong, Dexing

    2017-02-01

    Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneous liver detection and probabilistic segmentation using a 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases, MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean values of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9 %, 2.7 %, 0.91 mm, 1.88 mm and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean values of VOE, RVD, ASD, RMSD and MSD are 9.36 %, 0.97 %, 1.89 mm, 4.15 mm and 33.14 mm, respectively. The proposed method is fully automatic, without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.
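    The volumetric metrics quoted above have simple set-based definitions. A minimal sketch of the two that need no surface extraction, using the commonly stated Sliver07-style formulas on binary masks (the tiny masks below are synthetic):

```python
import numpy as np

def overlap_metrics(auto, ref):
    """Volumetric overlap error (VOE) and relative volume difference (RVD)
    between an automatic segmentation and a reference mask, in percent:
    VOE = 100 * (1 - |A n B| / |A u B|),  RVD = 100 * (|A| - |B|) / |B|."""
    auto = np.asarray(auto, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(auto, ref).sum()
    union = np.logical_or(auto, ref).sum()
    voe = 100.0 * (1.0 - inter / union)
    rvd = 100.0 * (int(auto.sum()) - int(ref.sum())) / ref.sum()
    return voe, rvd

# Synthetic 2x2x2 volumes: both masks contain 4 voxels, 3 of them shared.
auto = np.zeros((2, 2, 2), dtype=bool)
auto[0] = True                       # voxels (0,*,*)
ref = np.zeros((2, 2, 2), dtype=bool)
ref[0] = True
ref[0, 1, 1] = False                 # drop one shared voxel
ref[1, 0, 0] = True                  # add one voxel outside `auto`
voe, rvd = overlap_metrics(auto, ref)
```

    Here the intersection is 3 voxels and the union 5, so VOE is 40 % while RVD is 0 % (equal volumes), illustrating why both numbers are reported: RVD alone can hide large spatial disagreement.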

  12. Spatial Interpolation of Fine Particulate Matter Concentrations Using the Shortest Wind-Field Path Distance

    PubMed Central

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197

  13. Spatial interpolation of fine particulate matter concentrations using the shortest wind-field path distance.

    PubMed

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
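    The substitution the paper describes is localized entirely in the distance argument of IDW: the weighting formula is unchanged whether the distances are Euclidean or precomputed shortest wind-field path distances. A minimal sketch with hypothetical monitor values:

```python
import numpy as np

def idw(distances, values, power=2.0, eps=1e-12):
    """Inverse Distance Weighting with caller-supplied distances.
    w_i = 1 / d_i^p;  estimate = sum(w_i * v_i) / sum(w_i).
    `distances` may be Euclidean or, as in the study, shortest
    wind-field path distances; the formula does not change."""
    d = np.asarray(distances, dtype=float)
    v = np.asarray(values, dtype=float)
    if np.any(d < eps):              # receptor coincides with a monitor
        return float(v[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * v) / np.sum(w))

# Hypothetical PM readings at three monitors.  Under a wind detour the
# path distance to the nearest (high-reading) monitor grows from 1 to 3,
# so the same receptor gets a lower estimate.
euclidean = idw([1.0, 2.0, 4.0], [80.0, 60.0, 20.0])
path_based = idw([3.0, 2.0, 4.0], [80.0, 60.0, 20.0])
```

    This is the mechanism behind the reported accuracy gain: upwind monitors are effectively "farther away" along the flow, so they contribute less weight.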

  14. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM and, more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  15. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM and, more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  16. Using a Simple Optical Rangefinder To Teach Similar Triangles.

    ERIC Educational Resources Information Center

    Cuicchi, Paul M.; Hutchison, Paul S.

    2003-01-01

    Describes how the concept of similar triangles was taught using an optical method of estimating large distances as a corresponding activity. Includes the derivation of a formula to calculate one source of measurement error and is a nice exercise in the use of the properties of similar triangles. (Author/NB)
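    The optical rangefinder activity reduces to one similar-triangles relation: the baseline between the two sighting windows and the deflection of the image on the sensor form triangles similar to the baseline-target geometry, so D = baseline x focal length / image offset. A sketch with hypothetical instrument dimensions:

```python
def rangefinder_distance(baseline, focal_length, image_offset):
    """Similar-triangles rangefinder estimate.

    The small triangle (focal_length, image_offset) inside the
    instrument is similar to the large triangle (distance, baseline)
    to the target, giving  D = baseline * focal_length / image_offset.
    All quantities hypothetical, in consistent units (metres)."""
    return baseline * focal_length / image_offset

# 0.5 m baseline, 0.1 m internal optical path, 0.5 mm measured offset:
d = rangefinder_distance(0.5, 0.1, 0.0005)
```

    The formula also exposes the error source the article derives: because offset appears in the denominator, a small reading error in a small offset produces a large relative error in D at long range.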

  17. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2010-06-10

    Mayport,” available online at http://www.defenselink.mil/releases/release.aspx?releaseid= 12600. 4 Department of Defense, Quadrennial Defense Review...locations, as calculated by the “How Far Is It?” online distance calculator available at http://www.indo.com/cgi-bin/dist. 10 Although the Navy states...portion of Norfolk itself. 14 This is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online distance

  18. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2010-09-29

    Review To Determine Aircraft Carrier Homeporting In Mayport,” available online at http://www.defenselink.mil/releases/release.aspx?releaseid= 12600...Ocean. The figure of about 32 nautical miles is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online ...distance between the two locations, as calculated by the “How Far Is It?” online distance calculator available at http://www.indo.com/cgi-bin/dist

  19. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2010-04-23

    Release No. 233-09 of April 10, 2009, entitled “Quadrennial Defense Review To Determine Aircraft Carrier Homeporting In Mayport,” available online at...The figure of about 32 nautical miles is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online ...distance between the two locations, as calculated by the “How Far Is It?” online distance calculator available at http://www.indo.com/cgi-bin/dist

  20. Electronic Coupling Calculations for Bridge-Mediated Charge Transfer Using Constrained Density Functional Theory (CDFT) and Effective Hamiltonian Approaches at the Density Functional Theory (DFT) and Fragment-Orbital Density Functional Tight Binding (FODFTB) Level

    DOE PAGES

    Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; ...

    2016-09-09

    In this paper, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. Finally, these four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.

  1. Electronic Coupling Calculations for Bridge-Mediated Charge Transfer Using Constrained Density Functional Theory (CDFT) and Effective Hamiltonian Approaches at the Density Functional Theory (DFT) and Fragment-Orbital Density Functional Tight Binding (FODFTB) Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillet, Natacha; Berstis, Laura; Wu, Xiaojing

    In this paper, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. Finally, these four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.

  2. Electronic Coupling Calculations for Bridge-Mediated Charge Transfer Using Constrained Density Functional Theory (CDFT) and Effective Hamiltonian Approaches at the Density Functional Theory (DFT) and Fragment-Orbital Density Functional Tight Binding (FODFTB) Level.

    PubMed

    Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; Gajdos, Fruzsina; Heck, Alexander; de la Lande, Aurélien; Blumberger, Jochen; Elstner, Marcus

    2016-10-11

    In this article, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. These four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.

  3. New t-gap insertion-deletion-like metrics for DNA hybridization thermodynamic modeling.

    PubMed

    D'yachkov, Arkadii G; Macula, Anthony J; Pogozelski, Wendy K; Renz, Thomas E; Rykov, Vyacheslav V; Torney, David C

    2006-05-01

    We discuss the concept of t-gap block isomorphic subsequences and use it to describe new abstract string metrics that are similar to the Levenshtein insertion-deletion metric. Some of the metrics that we define can be used to model a thermodynamic distance function on single-stranded DNA sequences. Our model captures a key aspect of the nearest neighbor thermodynamic model for hybridized DNA duplexes. One version of our metric gives the maximum number of stacked pairs of hydrogen-bonded nucleotide base pairs that can be present in any secondary structure in a hybridized DNA duplex without pseudoknots. Thermodynamic distance functions are important components in the construction of DNA codes, and DNA codes are important components in biomolecular computing, nanotechnology, and other biotechnical applications that employ DNA hybridization assays. We show how our new distances can be calculated by using a dynamic programming method, and we derive a Varshamov-Gilbert-like lower bound on the size of some codes that use these distance functions as constraints. We also discuss software implementation of our DNA code design methods.
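    The metrics above generalize the Levenshtein insertion-deletion distance, and the same dynamic-programming pattern the authors mention applies. A sketch of the base case only, the plain indel distance (no substitutions, no t-gap structure), on short DNA strings:

```python
def levenshtein_indel(s, t):
    """Insertion-deletion Levenshtein distance by dynamic programming.
    dp[i][j] = distance between s[:i] and t[:j]; matches are free,
    every insertion or deletion costs 1.  The paper's t-gap metrics
    generalize this style of recurrence."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

d = levenshtein_indel("ACGT", "AGT")   # one deletion apart
```

    The indel-only distance equals len(s) + len(t) minus twice the longest common subsequence, which is why it can serve as a combinatorial proxy for shared hybridizing structure.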

  4. A single scan skeletonization algorithm: application to medical imaging of trabecular bone

    NASA Astrophysics Data System (ADS)

    Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre

    2010-03-01

    Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points as one goes along, preserving the topology of the shape. Alternatively, the maxima of the local distance transform identify the skeleton and are an equivalent way to calculate the medial axis. However, with this method the skeleton obtained is disconnected, so all the points of the medial axis must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization with a single scan, thus allowing the processing of unbounded images. This method is applied, in our study, to micro scanner images of trabecular bones. We wish to characterize the bone micro architecture in order to quantify bone integrity.
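    For contrast with the single-scan approach the paper proposes, the classical distance transform it builds on is a two-pass chamfer scan; its local maxima then mark medial-axis candidates. A minimal sketch with the city-block metric:

```python
import numpy as np

def cityblock_distance_transform(mask):
    """Two-pass chamfer distance transform (city-block metric):
    for each foreground pixel, the distance to the nearest background
    pixel.  This is the multi-scan baseline; the paper's contribution
    is doing the equivalent work in a single scan."""
    h, w = mask.shape
    inf = h + w                            # larger than any true distance
    d = np.where(mask, inf, 0).astype(int)
    for i in range(h):                     # forward raster scan
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):         # backward raster scan
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

shape = np.zeros((7, 7), dtype=bool)
shape[1:6, 1:6] = True                     # a 5x5 square
dist = cityblock_distance_transform(shape)
```

    The square's centre pixel ends up with the largest value, a local maximum of the transform, which is exactly where its medial axis lies.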

  5. Gauge-invariance and infrared divergences in the luminosity distance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biern, Sang Gyu; Yoo, Jaiyul, E-mail: sgbiern@physik.uzh.ch, E-mail: jyoo@physik.uzh.ch

    2017-04-01

    Measurements of the luminosity distance have played a key role in discovering the late-time cosmic acceleration. However, when accounting for inhomogeneities in the Universe, its interpretation has been plagued with infrared divergences in its theoretical predictions, which are in some cases used to explain the cosmic acceleration without dark energy. The infrared divergences in most calculations are artificially removed by imposing an infrared cut-off scale. We show that a gauge-invariant calculation of the luminosity distance is devoid of such divergences and consistent with the equivalence principle, eliminating the need to impose a cut-off scale. We present proper numerical calculations of the luminosity distance using the gauge-invariant expression and demonstrate that the numerical results with an ad hoc cut-off scale in previous calculations have negligible systematic errors as long as the cut-off scale is larger than the horizon scale. We discuss the origin of infrared divergences and their cancellation in the luminosity distance.
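    The perturbative corrections discussed above are computed on top of the background (homogeneous FLRW) luminosity distance. That zeroth-order quantity has a standard closed form as an integral, sketched here for a flat universe with assumed fiducial parameters (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7; not values from the paper):

```python
import math

def luminosity_distance(z, h0=70.0, omega_m=0.3, omega_l=0.7, steps=10000):
    """Flat-FLRW background luminosity distance in Mpc:
    d_L = (1 + z) * (c / H0) * Integral_0^z dz' / E(z'),
    with E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda),
    evaluated with a midpoint rule."""
    c = 299792.458                    # speed of light, km/s
    dz = z / steps
    integral = 0.0
    for k in range(steps):
        zk = (k + 0.5) * dz
        integral += dz / math.sqrt(omega_m * (1.0 + zk)**3 + omega_l)
    return (1.0 + z) * (c / h0) * integral

d = luminosity_distance(0.01)
```

    At small redshift this reduces to the Hubble law d_L of roughly c z / H0 (about 43 Mpc at z = 0.01), which is a useful sanity check on the integration.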

  6. Association between neighborhood need and spatial access to food stores and fast food restaurants in neighborhoods of Colonias

    PubMed Central

    Sharkey, Joseph R; Horel, Scott; Han, Daikwon; Huber, John C

    2009-01-01

    Objective To determine the extent to which neighborhood needs (socioeconomic deprivation and vehicle availability) are associated with two criteria of food environment access: 1) distance to the nearest food store and fast food restaurant and 2) coverage (number) of food stores and fast food restaurants within a specified network distance of neighborhood areas of colonias, using ground-truthed methods. Methods Data included locational points for 315 food stores and 204 fast food restaurants, and neighborhood characteristics from the 2000 U.S. Census for the 197 census block group (CBG) study area. Neighborhood deprivation and vehicle availability were calculated for each CBG. Minimum distance was determined by calculating network distance from the population-weighted center of each CBG to the nearest supercenter, supermarket, grocery, convenience store, dollar store, mass merchandiser, and fast food restaurant. Coverage was determined by calculating the number of each type of food store and fast food restaurant within a network distance of 1, 3, and 5 miles of each population-weighted CBG center. Neighborhood need and access were examined using Spearman ranked correlations, spatial autocorrelation, and multivariate regression models that adjusted for population density. Results Overall, neighborhoods had best access to convenience stores, fast food restaurants, and dollar stores. After adjusting for population density, residents in neighborhoods with increased deprivation had to travel a significantly greater distance to the nearest supercenter or supermarket, grocery store, mass merchandiser, dollar store, and pharmacy for food items. The results were quite different for association of need with the number of stores within 1 mile. Deprivation was only associated with fast food restaurants; greater deprivation was associated with fewer fast food restaurants within 1 mile. 
    CBGs with lower vehicle availability had slightly better access to supercenters or supermarkets, grocery stores, and fast food restaurants. Increasing deprivation was associated with decreasing numbers of grocery stores, mass merchandisers, dollar stores, and fast food restaurants within 3 miles. Conclusion It is important to understand not only the distance that people must travel to the nearest store to make a purchase, but also how many shopping opportunities they have in order to compare price, quality, and selection. Future research should examine how spatial access to the food environment influences the utilization of food stores and fast food restaurants, and the strategies used by low-income families to obtain food for the household. PMID:19220879
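    The "network distance" in this study is a shortest path through the road network rather than a straight line; in practice it is computed by GIS tooling, but the underlying calculation is ordinary shortest-path search. A sketch on a toy road graph (all nodes and edge lengths hypothetical):

```python
import heapq

def network_distance(graph, source, target):
    """Shortest road-network distance via Dijkstra's algorithm.
    `graph` maps node -> list of (neighbor, edge_length_m)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy network: from a block-group center 'cbg' to the nearest 'store'.
roads = {
    "cbg": [("a", 400.0), ("b", 900.0)],
    "a": [("store", 700.0)],
    "b": [("store", 100.0)],
}
d = network_distance(roads, "cbg", "store")
```

    Note the shortest route here runs through the longer first edge (via "b", 1000 m total), which is exactly the kind of detour that makes network distance a better access measure than straight-line distance.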

  7. Interstellar Travel

    NASA Astrophysics Data System (ADS)

    Rabayda, Adam; Keller, Luke

    Interstellar space travel is a topic that is often dismissed as highly unlikely due to the vast distances involved and to considerable engineering and socioeconomic challenges. Some are left believing that it may be far from possible for us, as a species, to go anywhere beyond our solar system. We demonstrate not only the possibility of covering interstellar distances in decades or less, but also that interstellar travel is possible (in principle) with existing technology. For example: using only special relativity and calculus, we calculated that an interstellar spacecraft could reach the Andromeda Galaxy (2.5 million light-years from Earth) in just over 28 years of onboard time at an acceleration of 9.81 m/s², which would emulate Earth gravity. We also calculated that the energy required for interstellar space travel, often deemed impossible with current technology, is, in fact, possible through certain methods such as nuclear fusion.
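    The 28-year figure follows from the standard special-relativistic result for constant proper acceleration. Assuming a profile that accelerates to the midpoint and decelerates afterward (the abstract does not state the profile, but this assumption reproduces its number), the elapsed onboard (proper) time is tau = 2 (c/a) arcosh(1 + a d / (2 c^2)):

```python
import math

def proper_time_years(distance_ly, accel=9.81):
    """Onboard (proper) time, in years, for a one-way relativistic trip
    at constant proper acceleration `accel` to the midpoint followed by
    constant deceleration:  tau = 2 (c/a) * acosh(1 + a*d / (2 c^2))."""
    c = 299792458.0                      # speed of light, m/s
    metres_per_ly = 9.4607e15            # metres in one light-year
    d = distance_ly * metres_per_ly
    tau_s = 2.0 * (c / accel) * math.acosh(1.0 + accel * d / (2.0 * c**2))
    return tau_s / (365.25 * 24 * 3600)

tau = proper_time_years(2.5e6)           # Andromeda at 1 g
```

    Because the argument of arcosh grows only logarithmically in the ship-time result, even a thousandfold longer journey adds only a couple of decades of onboard time; Earth-frame time, of course, still exceeds 2.5 million years.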

  8. A revised moving cluster distance to the Pleiades open cluster

    NASA Astrophysics Data System (ADS)

    Galli, P. A. B.; Moraux, E.; Bouy, H.; Bouvier, J.; Olivares, J.; Teixeira, R.

    2017-02-01

    Context. The distance to the Pleiades open cluster has been extensively debated in the literature over several decades. Although different methods point to a discrepancy in the trigonometric parallaxes produced by the Hipparcos mission, the number of individual stars with known distances is still small compared to the number of cluster members that could help solve this problem. Aims: We provide a new distance estimate for the Pleiades based on the moving cluster method, which will be useful to further discuss the so-called Pleiades distance controversy and compare it with the very precise parallaxes from the Gaia space mission. Methods: We apply a refurbished implementation of the convergent point search method to an updated census of Pleiades stars to calculate the convergent point position of the cluster from stellar proper motions. Then, we derive individual parallaxes for 64 cluster members using radial velocities compiled from the literature, and approximate parallaxes for another 1146 stars based on the spatial velocity of the cluster. This represents the largest sample of Pleiades stars with individual distances to date. Results: The parallaxes derived in this work are in good agreement with previous results obtained in different studies (excluding Hipparcos) for individual stars in the cluster. We report a mean parallax of 7.44 ± 0.08 mas and a corresponding mean distance that is consistent with the weighted mean of 135.0 ± 0.6 pc obtained from the non-Hipparcos results in the literature. Conclusions: Our result for the distance to the Pleiades open cluster is not consistent with the Hipparcos catalog, but favors the recent and more precise distance determination of 136.2 ± 1.2 pc obtained from Very Long Baseline Interferometry observations. It is also in good agreement with the mean distance of 133 ± 5 pc obtained from the first trigonometric parallaxes delivered by the Gaia satellite for the brightest cluster members in common with our sample.
Full Table B.2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A48
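    The moving cluster method rests on one textbook relation: the tangential velocity is v_t = 4.74 mu / pi (km/s, with mu in arcsec/yr and pi in arcsec), and pointing toward the convergent point gives v_t = v_r tan(lambda), where lambda is the star's angular distance from the convergent point. Combining the two yields a secular parallax. A sketch with made-up but Pleiades-like numbers (not values from the paper):

```python
import math

def moving_cluster_parallax(mu_mas_yr, v_r_km_s, lambda_deg):
    """Secular parallax from the convergent point method:
    pi [mas] = 4.74 * mu [mas/yr] / (v_r [km/s] * tan(lambda)),
    where lambda is the angular distance between the star and
    the cluster's convergent point."""
    lam = math.radians(lambda_deg)
    return 4.74 * mu_mas_yr / (v_r_km_s * math.tan(lam))

# Round-trip check: a hypothetical member at pi = 7.41 mas with
# v_r = 5.0 km/s seen 30 degrees from the convergent point would show
# mu = pi * v_r * tan(lambda) / 4.74; the method must recover pi.
mu = 7.41 * 5.0 * math.tan(math.radians(30.0)) / 4.74
pi_mas = moving_cluster_parallax(mu, 5.0, 30.0)
```

    The constant 4.74 converts 1 arcsec/yr of proper motion at 1 pc into km/s, which is why radial-velocity and proper-motion catalogs can be combined directly into individual parallaxes as the paper does for its 64 members.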

  9. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
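    The geometric core of the inverse perspective mapping (IPM) step can be sketched for the flat-floor case: a pixel below the horizon defines a ray tilted below horizontal by the camera pitch plus the pixel's angular offset, and that ray meets the floor at d = h / tan(angle). The camera parameters below are entirely hypothetical, not values from the paper:

```python
import math

def ground_distance(pixel_row, camera_height, pitch_deg, focal_px, cy):
    """Flat-floor distance for a pixel, the basic IPM relation.
    The ray through `pixel_row` is tilted below horizontal by
    pitch + atan((row - cy) / f); it meets the floor at
    d = camera_height / tan(tilt).  Rows increase downward."""
    tilt = math.radians(pitch_deg) + math.atan((pixel_row - cy) / focal_px)
    if tilt <= 0.0:
        return float("inf")      # ray at or above the horizon: no floor hit
    return camera_height / math.tan(tilt)

# Camera 0.3 m off the floor (a low robot), pitched 15 deg down,
# focal length 500 px, principal point row cy = 240.
d_near = ground_distance(400, 0.3, 15.0, 500.0, 240.0)   # low in the image
d_far = ground_distance(260, 0.3, 15.0, 500.0, 240.0)    # near the horizon
```

    Pixels lower in the image map to nearer floor points, which is how the segmented obstacle boundary translates into the shortest robot-to-obstacle distance the paper reports.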

  10. Theoretical study of actinide monocarbides (ThC, UC, PuC, and AmC)

    NASA Astrophysics Data System (ADS)

    Pogány, Peter; Kovács, Attila; Visscher, Lucas; Konings, Rudy J. M.

    2016-12-01

    A study of four representative actinide monocarbides, ThC, UC, PuC, and AmC, has been performed with relativistic quantum chemical calculations. The two applied methods were multireference complete active space second-order perturbation theory (CASPT2) including the Douglas-Kroll-Hess Hamiltonian with all-electron basis sets and density functional theory with the B3LYP exchange-correlation functional in conjunction with relativistic pseudopotentials. Besides the ground electronic states, the excited states up to 17 000 cm-1 have been determined. The molecular properties explored included the ground-state geometries, bonding properties, and the electronic absorption spectra. According to the occupation of the bonding orbitals, the calculated electronic states were classified into three groups, each leading to a characteristic bond distance range for the equilibrium geometry. The ground states of ThC, UC, and PuC have two doubly occupied π orbitals resulting in short bond distances between 1.8 and 2.0 Å, whereas the ground state of AmC has significant occupation of the antibonding orbitals, causing a bond distance of 2.15 Å.

  11. Image alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowell, Larry Jonathan

    Disclosed is a method and device for aligning at least two digital images. An embodiment may use frequency-domain transforms of small tiles created from each image to identify substantially similar, "distinguishing" features within each of the images, and then align the images together based on the location of the distinguishing features. To accomplish this, an embodiment may create equal-sized tile sub-images for each image. A "key" for each tile may be created by performing a frequency-domain transform calculation on each tile. An information-distance difference between each possible pair of tiles on each image may be calculated to identify distinguishing features. From analysis of the information-distance differences of the pairs of tiles, a subset of tiles with high discrimination metrics in relation to other tiles may be located for each image. The subset of distinguishing tiles for each image may then be compared to locate tiles with substantially similar keys and/or information-distance metrics to other tiles of other images. Once similar tiles are located for each image, the images may be aligned in relation to the identified similar tiles.

  12. Analysis of Franck-Condon factors for CO+ molecule using the Fourier Grid Hamiltonian method

    NASA Astrophysics Data System (ADS)

    Syiemiong, Arnestar; Swer, Shailes; Jha, Ashok Kumar; Saxena, Atul

    2018-04-01

    Franck-Condon factors (FCFs) are important parameters that play a central role in determining the intensities of the vibrational bands in electronic transitions. In this paper, we illustrate the Fourier Grid Hamiltonian (FGH) method, a relatively simple method to calculate the FCFs. The FGH method calculates the vibrational eigenvalues and eigenfunctions of bound electronic states of diatomic molecules. The obtained vibrational wave functions for the ground and the excited states are used to calculate the vibrational overlap integrals and then the FCFs. In this computation, we used the Morse potential and a bi-exponential potential model for constructing and diagonalizing the molecular Hamiltonians. The effects of a change in equilibrium internuclear distance (xe), dissociation energy (De), and the nature of the excited-state electronic energy curve on the FCFs have been determined. Here we present our work on the qualitative analysis of Franck-Condon factors using the Fourier Grid Hamiltonian method.
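
    The overlap-integral step can be illustrated without the full FGH diagonalization. The sketch below evaluates an FCF on a grid for two analytic harmonic-oscillator v=0 wavefunctions whose minima are displaced by a change in equilibrium distance; for that case the exact value is exp(-Δxe²/2) in dimensionless units, which the grid quadrature reproduces. This illustrates only the overlap calculation, not the FGH method itself.

```python
import numpy as np

def franck_condon_factor(x, psi_lower, psi_upper):
    # FCF = squared vibrational overlap integral, evaluated on a uniform grid.
    dx = x[1] - x[0]
    return (np.sum(psi_lower * psi_upper) * dx) ** 2

# Harmonic v=0 wavefunctions (unit frequency, dimensionless coordinate);
# the upper state is displaced by the change in equilibrium distance dxe.
x = np.linspace(-10.0, 10.0, 4001)
psi_ground = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

def psi_excited(dxe):
    return np.pi ** -0.25 * np.exp(-(x - dxe) ** 2 / 2)
```

    With no displacement the FCF is 1; a displacement of one dimensionless unit gives exp(-1/2) ≈ 0.607, showing how the FCF decays as xe shifts.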

  13. A vision-based automated guided vehicle system with marker recognition for indoor use.

    PubMed

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer-grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers containing a capital letter or a triangle indicating direction. The markers are very easy to produce, manipulate, and maintain, and the marker information is used to guide the vehicle. We use hue and saturation values in the image to extract marker candidates. When a fiduciary marker of known size is detected by using a bird's-eye view and the Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used: the probability of feature matching is calculated with the distance transform, and the feature with the highest probability is selected as the captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary markers show that the proposed method is a viable solution for an indoor AGV system.
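
    Chamfer-style matching is the standard way to score templates against a distance transform, and it is a plausible reading of the recognition step described above (the paper's exact scoring rule is not given here). The sketch below uses SciPy's Euclidean distance transform; `chamfer_score` is a hypothetical name.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(scene_edges, template_points):
    # The distance transform of the non-edge region gives, at every pixel,
    # the distance to the nearest scene edge. Averaging it over a character
    # template's edge points yields a matching score (lower = better match).
    dist = distance_transform_edt(~scene_edges)
    return float(np.mean([dist[r, c] for r, c in template_points]))
```

    The best-matching template among the 4 directional and 10 alphabet features would then be the one with the lowest score, mapping naturally onto "the feature with the highest matching probability".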

  14. Reducible dictionaries for single image super-resolution based on patch matching and mean shifting

    NASA Astrophysics Data System (ADS)

    Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza

    2017-03-01

    A single-image super-resolution (SR) method is proposed. The proposed method uses a generated dictionary from pairs of high resolution (HR) images and their corresponding low resolution (LR) representations. First, HR images and the corresponding LR ones are divided into HR and LR patches, respectively, and then they are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and those of available LR patches in the LR dictionary is calculated. The minimum distance between the input LR patch and those in the LR dictionary is taken, and its counterpart from the HR dictionary is passed through an illumination enhancement process. By this technique, the noticeable change of illumination between neighbor patches in the super-resolved image is significantly reduced. The enhanced HR patch represents the HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, an average of the obtained HR image and the interpolated image obtained using bicubic interpolation is calculated. The quantitative and qualitative analyses show the superiority of the proposed technique over the conventional and state-of-the-art methods.
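
    The dictionary lookup at the heart of the method can be sketched in a few lines. This is a minimal illustration assuming a Euclidean patch distance (the paper does not commit to a metric here), with hypothetical names and toy-sized "patches":

```python
import numpy as np

def nearest_hr_patch(lr_patch, lr_dict, hr_dict):
    # Core lookup of the SR method: find the LR dictionary entry closest to
    # the input LR patch and return its paired HR counterpart.
    dists = np.linalg.norm(lr_dict - lr_patch.ravel(), axis=1)
    return hr_dict[int(np.argmin(dists))]
```

    In the full pipeline the returned HR patch would then go through the illumination-enhancement step before being merged into the output image.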

  15. Probing sunspots with two-skip time-distance helioseismology

    NASA Astrophysics Data System (ADS)

    Duvall, Thomas L., Jr.; Cally, Paul S.; Przybylski, Damien; Nagashima, Kaori; Gizon, Laurent

    2018-06-01

    Context. Previous helioseismology of sunspots has been sensitive to both the structural and magnetic aspects of sunspot structure. Aims: We aim to develop a technique that is insensitive to the magnetic component so the two aspects can be more readily separated. Methods: We study waves reflected almost vertically from the underside of a sunspot. Time-distance helioseismology was used to measure travel times for the waves. Ray theory and a detailed sunspot model were used to calculate travel times for comparison. Results: It is shown that these large distance waves are insensitive to the magnetic field in the sunspot. The largest travel time differences for any solar phenomena are observed. Conclusions: With sufficient modeling effort, these should lead to better understanding of sunspot structure.

  16. [The epidemiological validation of the MPEL for grain dust in the atmosphere].

    PubMed

    Pinigin, M A; Cherepov, E M; Safiulin, A A; Petrova, I V; Mukhambetova, L Kh; Osipova, E M; Veselov, A P

    1998-01-01

    The use of calculating and gravimetric methods for examining the grain dust pollution of the ambient air at the site of an elevator determined the maximum single, mean daily, and mean annual concentrations at different distances from the source of dust emission. The mean ratio of these concentrations was 12.1:4.3:1, respectively. The calculated concentration-effect and concentration-time relationships provided evidence for the maximum single, mean daily, and mean annual allowable concentrations for grain dust in the ambient air.

  17. Do breeding phase and detection distance influence the effective area surveyed for northern goshawks?

    USGS Publications Warehouse

    Roberson, A.M.; Andersen, D.E.; Kennedy, P.L.

    2005-01-01

    Broadcast surveys using conspecific calls are currently the most effective method for detecting northern goshawks (Accipiter gentilis) during the breeding season. These surveys typically use alarm calls during the nestling phase and juvenile food-begging calls during the fledgling-dependency phase. Because goshawks are most vocal during the courtship phase, we hypothesized that this phase would be an effective time to detect goshawks. Our objective was to improve current survey methodology by evaluating the probability of detecting goshawks at active nests in northern Minnesota in 3 breeding phases and at 4 broadcast distances and to determine the effective area surveyed per broadcast station. Unlike previous studies, we broadcast calls at only 1 distance per trial. This approach better quantifies (1) the relationship between distance and probability of detection, and (2) the effective area surveyed (EAS) per broadcast station. We conducted 99 broadcast trials at 14 active breeding areas. When pooled over all distances, detection rates were highest during the courtship (70%) and fledgling-dependency phases (68%). Detection rates were lowest during the nestling phase (28%), when there appeared to be higher variation in likelihood of detecting individuals. EAS per broadcast station was 39.8 ha during courtship and 24.8 ha during fledgling-dependency. Consequently, in northern Minnesota, broadcast stations may be spaced 712 m and 562 m apart when conducting systematic surveys during courtship and fledgling-dependency, respectively. We could not calculate EAS for the nestling phase because probability of detection was not a simple function of distance from nest. Calculation of EAS could be applied to other areas where the probability of detection is a known function of distance.
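
    The EAS arithmetic can be checked directly: integrating the detection probability over annuli gives the effective area, and the quoted station spacings follow from treating each station as covering a circle of that area. The sketch below assumes a simple sharp-cutoff detection function purely for illustration; the study's fitted detection curve would replace it.

```python
import numpy as np

def effective_area_ha(p, r_max=2000.0, n=20001):
    # EAS = integral of 2*pi*r*p(r) dr, by the trapezoidal rule (m^2 -> ha).
    r = np.linspace(0.0, r_max, n)
    f = 2.0 * np.pi * r * p(r)
    return float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(r)) / 1e4)

def station_spacing_m(eas_ha):
    # Spacing = diameter of a circle whose area equals the EAS.
    return 2.0 * np.sqrt(eas_ha * 1e4 / np.pi)
```

    With EAS = 39.8 ha this gives a spacing of about 712 m, and 24.8 ha gives about 562 m, matching the figures in the abstract.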

  18. A new time domain random walk method for solute transport in 1-D heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banton, O.; Delay, F.; Porel, G.

    A new method to simulate solute transport in 1-D heterogeneous media is presented. This time domain random walk method (TDRW), similar in concept to the classical random walk method, calculates the arrival time of a particle cloud at a given location (directly providing the solute breakthrough curve). The main advantage of the method is that the restrictions on the space increments and the time steps which exist with the finite differences and random walk methods are avoided. In a homogeneous zone, the breakthrough curve (BTC) can be calculated directly at a given distance using a few hundred particles, or directly at the boundary of the zone. Comparisons with analytical solutions and with the classical random walk method show the reliability of this method. The velocity and dispersivity calculated from the simulated results agree within two percent with the values used as input in the model. For contrasted heterogeneous media, the random walk can generate high numerical dispersion, while the time domain approach does not.
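
    A minimal TDRW-style sketch (not the authors' scheme) conveys the idea: each particle's arrival time at distance x is drawn as the advective transit x/v plus a Gaussian dispersive perturbation whose variance 2Dx/v³ follows from the small-dispersion limit of the advection-dispersion equation. The histogram of arrival times is then the breakthrough curve.

```python
import numpy as np

def tdrw_arrival_times(x, v, D, n_particles=20000, seed=1):
    # Arrival time of each particle of the cloud at location x:
    # advective transit x/v plus a random dispersive delay/advance.
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * D * x / v ** 3)
    return x / v + rng.normal(0.0, sigma, n_particles)
```

    Because the BTC is obtained directly at x, no spatial grid is involved, which is why the approach avoids the grid-based numerical dispersion the abstract highlights.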

  19. Determination of matrix composition based on solute-solute nearest-neighbor distances in atom probe tomography.

    PubMed

    De Geuser, F; Lefebvre, W

    2011-03-01

    In this study, we propose a fast automatic method providing the matrix concentration in an atom probe tomography (APT) data set containing two phases or more. The principle of this method relies on the calculation of the relative amount of isolated solute atoms (i.e., not surrounded by a similar solute atom) as a function of a distance d in the APT reconstruction. Simulated data sets have been generated to test the robustness of this new tool and demonstrate that rapid and reproducible results can be obtained without the need of any user input parameter. The method has then been successfully applied to a ternary Al-Zn-Mg alloy containing a fine dispersion of hardening precipitates. The relevance of this method for direct estimation of matrix concentration is discussed and compared with the existing methodologies. Copyright © 2010 Wiley-Liss, Inc.
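
    The central quantity, the fraction of solute atoms with no similar neighbor within a distance d, is cheap to compute with a k-d tree. The sketch below is an illustration of that counting step only, not the authors' full composition-estimation procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def isolated_fraction(solute_xyz, d):
    # Fraction of solute atoms whose nearest other solute atom lies farther
    # than d -- the curve the APT method evaluates as a function of d.
    tree = cKDTree(solute_xyz)
    nn = tree.query(solute_xyz, k=2)[0][:, 1]  # k=1 is the atom itself
    return float(np.mean(nn > d))
```

    Sweeping d from small to large traces out the curve from which the matrix concentration is read off without user-tuned parameters.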

  20. Determining health-care facility catchment areas in Uganda using data on malaria-related visits

    PubMed Central

    Charland, Katia; Kigozi, Ruth; Dorsey, Grant; Kamya, Moses R; Buckeridge, David L

    2014-01-01

    Abstract Objective To illustrate the use of a new method for defining the catchment areas of health-care facilities based on their utilization. Methods The catchment areas of six health-care facilities in Uganda were determined using the cumulative case ratio: the ratio of the observed to expected utilization of a facility for a particular condition by patients from small administrative areas. The cumulative case ratio for malaria-related visits to these facilities was determined using data from the Uganda Malaria Surveillance Project. Catchment areas were also derived using various straight line and road network distances from the facility. Subsequently, the 1-year cumulative malaria case rate was calculated for each catchment area, as determined using the three methods. Findings The 1-year cumulative malaria case rate varied considerably with the method used to define the catchment areas. With the cumulative case ratio approach, the catchment area could include noncontiguous areas. With the distance approaches, the denominator increased substantially with distance, whereas the numerator increased only slightly. The largest cumulative case rate per 1000 population was for the Kamwezi facility: 234.9 (95% confidence interval, CI: 226.2–243.8) for a straight-line distance of 5 km, 193.1 (95% CI: 186.8–199.6) for the cumulative case ratio approach and 156.1 (95% CI: 150.9–161.4) for a road network distance of 5 km. Conclusion Use of the cumulative case ratio for malaria-related visits to determine health-care facility catchment areas was feasible. Moreover, this approach took into account patients’ actual addresses, whereas using distance from the facility did not. PMID:24700977
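
    The cumulative case ratio is simple to express in code. The sketch below assumes a ratio cutoff of 1.0 for inclusion, which is an illustrative assumption (the paper's exact inclusion rule may differ), and uses hypothetical names:

```python
def cumulative_case_ratio(observed, expected):
    # Ratio of observed to expected facility visits per administrative area.
    return {area: observed[area] / expected[area] for area in observed}

def catchment_area(observed, expected, cutoff=1.0):
    # Areas whose ratio meets the cutoff form the catchment. Note the result
    # need not be spatially contiguous, unlike distance-based definitions.
    ratios = cumulative_case_ratio(observed, expected)
    return {area for area, r in ratios.items() if r >= cutoff}
```

    A distance-based catchment would instead include every area within some straight-line or road-network radius, which is exactly where the denominators diverge in the comparison above.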

  1. Genetic Algorithms and Their Application to the Protein Folding Problem

    DTIC Science & Technology

    1993-12-01

    and symbolic methods, random methods such as Monte Carlo simulation and simulated annealing, distance geometry, and molecular dynamics. Many of these...calculated energies with those obtained using the molecular simulation software package called CHARMm. 10 9) Test both the simple and parallel simple genetic...homology-based, and simplification techniques. 3.21 Molecular Dynamics. Perhaps the most natural approach is to actually simulate the folding process. This

  2. Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2015-01-01

    The paper provides simulation data of previous work by the author in developing a model for estimating detectability of crack-like flaws in radiography. The methodology is being developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model are also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  3. Correlation between discrete probability and reaction front propagation rate in heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Naine, Tarun Bharath; Gundawar, Manoj Kumar

    2017-09-01

    We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate, for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. Because no analytical solution exists and the ignition-time differences of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulation. However, such simulations, based on Monte Carlo methods, require many iterations of calculations over different realizations of the distribution of adjacent cells. For several one-dimensional systems differing in the shape parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from a discrete probability between two limits agree excellently with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations: the thermal wave propagation rates can now be calculated based only on the macroscopic quantity of discrete probability.
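
    The "discrete probability between two limits" is just the probability mass a gamma-distributed neighbor spacing places in an interval. A Monte-Carlo sketch of that quantity (an illustration with hypothetical names, not the authors' code) is:

```python
import numpy as np

def spacing_probability(shape, scale, lo, hi, n=200000, seed=0):
    # Monte-Carlo estimate of the probability that a neighbor-cell spacing
    # drawn from a gamma(shape, scale) distribution lies in [lo, hi].
    rng = np.random.default_rng(seed)
    d = rng.gamma(shape, scale, n)
    return float(np.mean((d >= lo) & (d <= hi)))
```

    As a sanity check, for shape = 1 the gamma reduces to the exponential distribution, so the probability of a spacing below the upper limit 1.3 is 1 - exp(-1.3).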

  4. Shape and structure of N=Z 64Ge: electromagnetic transition rates from the application of the recoil distance method to a knockout reaction.

    PubMed

    Starosta, K; Dewald, A; Dunomes, A; Adrich, P; Amthor, A M; Baumann, T; Bazin, D; Bowen, M; Brown, B A; Chester, A; Gade, A; Galaviz, D; Glasmacher, T; Ginter, T; Hausmann, M; Horoi, M; Jolie, J; Melon, B; Miller, D; Moeller, V; Norris, R P; Pissulla, T; Portillo, M; Rother, W; Shimbara, Y; Stolz, A; Vaman, C; Voss, P; Weisshaar, D; Zelevinsky, V

    2007-07-27

    Transition rate measurements are reported for the 2(1)+ and 2(2)+ states in N=Z 64Ge. The experimental results are in excellent agreement with large-scale shell-model calculations applying the recently developed GXPF1A interactions. The measurement was done using the recoil distance method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knockout reaction. RDM studies of knockout and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for excited states in a wide range of nuclei.

  5. Shape and Structure of N=Z Ge64: Electromagnetic Transition Rates from the Application of the Recoil Distance Method to a Knockout Reaction

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Dewald, A.; Dunomes, A.; Adrich, P.; Amthor, A. M.; Baumann, T.; Bazin, D.; Bowen, M.; Brown, B. A.; Chester, A.; Gade, A.; Galaviz, D.; Glasmacher, T.; Ginter, T.; Hausmann, M.; Horoi, M.; Jolie, J.; Melon, B.; Miller, D.; Moeller, V.; Norris, R. P.; Pissulla, T.; Portillo, M.; Rother, W.; Shimbara, Y.; Stolz, A.; Vaman, C.; Voss, P.; Weisshaar, D.; Zelevinsky, V.

    2007-07-01

    Transition rate measurements are reported for the 21+ and 22+ states in N=Z Ge64. The experimental results are in excellent agreement with large-scale shell-model calculations applying the recently developed GXPF1A interactions. The measurement was done using the recoil distance method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knockout reaction. RDM studies of knockout and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for excited states in a wide range of nuclei.

  6. Generalized Mulliken-Hush analysis of electronic coupling interactions in compressed pi-stacked porphyrin-bridge-quinone systems.

    PubMed

    Zheng, Jieru; Kang, Youn K; Therien, Michael J; Beratan, David N

    2005-08-17

    Donor-acceptor interactions were investigated in a series of unusually rigid, cofacially compressed pi-stacked porphyrin-bridge-quinone systems. The two-state generalized Mulliken-Hush (GMH) approach was used to compute the coupling matrix elements. The theoretical coupling values evaluated with the GMH method were obtained from configuration interaction calculations using the INDO/S method. The results of this analysis are consistent with the comparatively soft distance dependences observed for both the charge separation and charge recombination reactions. Theoretical studies of model structures indicate that the phenyl units dominate the mediation of the donor-acceptor coupling and that the relatively weak exponential decay of rate with distance arises from the compression of this pi-electron stack.

  7. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box regarded as the target region is calculated. Extremal regions are obtained with MSER; after being fitted to ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are accepted when the ratio of the closest descriptor distance to the second-closest is below a threshold, and outlier points are eliminated with RANSAC. Experimental results indicate that the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale, and illumination changes.
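
    The ratio criterion for initial matching is the classic Lowe-style test and can be sketched directly (the 0.8 threshold is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    # Keep a tentative match only when the closest descriptor distance is
    # below `ratio` times the second-closest; ambiguous matches are dropped,
    # and RANSAC would then prune the surviving outliers.
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```
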

  8. The Barnes-Evans color-surface brightness relation: A preliminary theoretical interpretation

    NASA Technical Reports Server (NTRS)

    Shipman, H. L.

    1980-01-01

    Model atmosphere calculations are used to assess whether an empirically derived relation between V-R and surface brightness is independent of a variety of stellar parameters, including surface gravity. This relationship is used in a variety of applications, including the determination of the distances of Cepheid variables using a method based on the Baade-Wesselink method. It is concluded that the use of a main-sequence relation between V-R color and surface brightness in determining radii of giant stars is subject to systematic errors that are smaller than 10% in the determination of a radius or distance for temperatures cooler than 12,000 K. The error in white dwarf radii determined from a main-sequence color-surface brightness relation is roughly 10%.

  9. Using distances between Top-n-gram and residue pairs for protein remote homology detection.

    PubMed

    Liu, Bin; Xu, Jinghao; Zou, Quan; Xu, Ruifeng; Wang, Xiaolong; Chen, Qingcai

    2014-01-01

    Protein remote homology detection is one of the central problems in bioinformatics, important for both basic research and practical application. Currently, discriminative methods based on Support Vector Machines (SVMs) achieve state-of-the-art performance. Exploring feature vectors incorporating the position information of amino acids or other protein building blocks is a key step to improve the performance of the SVM-based methods. Two new methods for protein remote homology detection were proposed, called SVM-DR and SVM-DT. SVM-DR is a sequence-based method, in which the feature vector representation for a protein is based on the distances between residue pairs. SVM-DT is a profile-based method, which considers the distances between Top-n-gram pairs. A Top-n-gram can be viewed as a profile-based building block of proteins, calculated from the frequency profiles. These two methods are position-dependent approaches incorporating the sequence-order information of protein sequences. Various experiments were conducted on a benchmark dataset containing 54 families and 23 superfamilies. Experimental results showed that these two new methods are very promising: compared with position-independent methods, the performance improvement is obvious. Furthermore, the proposed methods can also provide useful insights for studying the features of protein families. The better performance of the proposed methods demonstrates that position-dependent approaches are effective for protein remote homology detection. Another advantage of our methods arises from the explicit feature space representation, which can be used to analyze the characteristic features of protein families. The source code of SVM-DT and SVM-DR is available at http://bioinformatics.hitsz.edu.cn/DistanceSVM/index.jsp.
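
    The distance-pair idea can be sketched generically: count building-block pairs at each separation and flatten the counts into a feature vector. This is an illustration in the spirit of SVM-DR (residues as blocks) and SVM-DT (Top-n-grams as blocks), not the authors' exact encoding:

```python
from collections import Counter

def distance_pair_features(sequence, max_distance=3):
    # Count (block, block, separation) triples for separations 1..max_distance.
    # Flattening these counts yields a position-aware feature vector, in
    # contrast to position-independent bag-of-words style composition features.
    features = Counter()
    for d in range(1, max_distance + 1):
        for i in range(len(sequence) - d):
            features[(sequence[i], sequence[i + d], d)] += 1
    return features
```

    Because every feature dimension names an explicit pair-at-distance, large SVM weights point directly at the characteristic pairs of a protein family, which is the interpretability advantage the abstract mentions.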

  10. Combining geostatistics with Moran's I analysis for mapping soil heavy metals in Beijing, China.

    PubMed

    Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo

    2012-03-01

    Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation characteristics information obtained from Moran's I analysis was used to supplement the traditional geostatistics. According to Moran's I analysis, four characteristic distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of semivariance demonstrated that using the two distances where the Moran's I and the standardized Moran's I, Z(I) reached a maximum as the active lag distance can improve the fitting accuracy of semivariance. Then, spatial interpolation was produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and the measured and predicted pollution status showed that the method combining geostatistics with Moran's I analysis was better than traditional geostatistics. Thus, Moran's I analysis is a useful complement for geostatistics to improve the spatial interpolation accuracy of heavy metals.

  11. Combining Geostatistics with Moran’s I Analysis for Mapping Soil Heavy Metals in Beijing, China

    PubMed Central

    Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo

    2012-01-01

    Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation characteristics information obtained from Moran’s I analysis was used to supplement the traditional geostatistics. According to Moran’s I analysis, four characteristic distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of semivariance demonstrated that using the two distances where the Moran’s I and the standardized Moran’s I, Z(I) reached a maximum as the active lag distance can improve the fitting accuracy of semivariance. Then, spatial interpolation was produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and the measured and predicted pollution status showed that the method combining geostatistics with Moran’s I analysis was better than traditional geostatistics. Thus, Moran’s I analysis is a useful complement for geostatistics to improve the spatial interpolation accuracy of heavy metals. PMID:22690179

  12. Distance-Dependent Multimodal Image Registration for Agriculture Tasks

    PubMed Central

    Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad

    2015-01-01

    Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000

  13. Development and Validation of a Path Length Calculation for Carotid-Femoral Pulse Wave Velocity Measurement: A TASCFORCE, SUMMIT, and Caerphilly Collaborative Venture.

    PubMed

    Weir-McCall, Jonathan R; Brown, Liam; Summersgill, Jennifer; Talarczyk, Piotr; Bonnici-Mallia, Michael; Chin, Sook C; Khan, Faisel; Struthers, Allan D; Sullivan, Frank; Colhoun, Helen M; Shore, Angela C; Aizawa, Kunihiko; Groop, Leif; Nilsson, Jan; Cockcroft, John R; McEniery, Carmel M; Wilkinson, Ian B; Ben-Shlomo, Yoav; Houston, J Graeme

    2018-05-01

    Current distance measurement techniques for pulse wave velocity (PWV) calculation are susceptible to intercenter variability. The aim of this study was to derive and validate a formula for this distance measurement. Based on the carotid-femoral distance in 1183 whole-body magnetic resonance angiograms, a formula was derived for calculating distance. This was compared with distance measurements in 128 whole-body magnetic resonance angiograms from a second study. The effects of recalculating PWV using the new formula on association with risk factors, disease discrimination, and prediction of major adverse cardiovascular events were examined within 1242 participants from the multicenter SUMMIT study (Surrogate Markers of Micro- and Macrovascular Hard End-Points for Innovative Diabetes Tools) and 825 participants from the Caerphilly Prospective Study. The distance formula yielded a mean error of 7.8 mm (limits of agreement = -41.1 to 56.7 mm; P < 0.001) compared with the second whole-body magnetic resonance angiogram group. Compared with an external distance measurement, the distance formula did not change associations between PWV and age, blood pressure, or creatinine (P < 0.01) but did remove significant associations between PWV and body mass index. After accounting for differences in age, sex, and mean arterial pressure, intercenter differences in PWV persisted using the external distance measurement (F = 4.6; P = 0.004), whereas there was a loss of between-center difference using the distance formula (F = 1.4; P = 0.24). PWV odds ratios for cardiovascular mortality remained the same using both the external distance measurement (1.14; 95% confidence interval, 1.06-1.24; P = 0.001) and the distance formula (1.17; 95% confidence interval, 1.08-1.28; P < 0.001). A population-derived automatic distance calculation for PWV obtained from routinely collected clinical information is accurate and removes intercenter measurement variability without impacting the diagnostic utility of carotid-femoral PWV. © 2018 The Authors.
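
    The quantity at stake is simple: PWV is path length divided by pulse transit time, so any intercenter disagreement in the distance term scales PWV directly. A one-line sketch makes the sensitivity explicit (the study's derived distance formula itself is not reproduced here):

```python
def pulse_wave_velocity(path_length_m, transit_time_s):
    # Carotid-femoral PWV in m/s. The path-length numerator is the term the
    # study's population-derived formula standardizes across centers.
    return path_length_m / transit_time_s
```

    For example, the reported 7.8 mm mean path-length error on a 0.5 m path shifts a 10 m/s PWV by only about 0.16 m/s, consistent with the formula leaving diagnostic utility intact.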

  14. Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress

    DTIC Science & Technology

    2012-02-21

    Pacific Ocean. The figure of about 32 nautical miles is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online ...itself. This is the straight-line distance between the two locations, as calculated by the “How Far Is It?” online distance calculator, available at...
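
    "Straight-line" figures like the one quoted are great-circle distances over the Earth's surface. A standard haversine implementation (an illustration, not the cited calculator's actual code) reproduces such figures from coordinates:

```python
import math

EARTH_RADIUS_NM = 3440.1  # mean Earth radius in nautical miles

def great_circle_nm(lat1, lon1, lat2, lon2):
    # Haversine formula on a spherical Earth; returns nautical miles.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2.0 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))
```

    A quick check: one degree of latitude spans about 60 nautical miles, since the nautical mile was historically defined as one minute of arc.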

  15. Determination of facial symmetry in unilateral cleft lip and palate patients from three-dimensional data: technical report and assessment of measurement errors.

    PubMed

    Nkenke, Emeka; Lehner, Bernhard; Kramer, Manuel; Haeusler, Gerd; Benz, Stefanie; Schuster, Maria; Neukam, Friedrich W; Vairaktaris, Eleftherios G; Wurm, Jochen

    2006-03-01

    To assess measurement errors of a novel technique for the three-dimensional determination of the degree of facial symmetry in patients suffering from unilateral cleft lip and palate malformations. Technical report, reliability study. Cleft Lip and Palate Center of the University of Erlangen-Nuremberg, Erlangen, Germany. The three-dimensional facial surface data of five 10-year-old unilateral cleft lip and palate patients were subjected to the analysis. Distances, angles, surface areas, and volumes were assessed twice. Calculations were made for method error, intraclass correlation coefficient, and repeatability of the measurements of distances, angles, surface areas, and volumes. The method errors were less than 1 mm for distances and less than 1.5 degrees for angles. The intraclass correlation coefficients showed values greater than .90 for all parameters. The repeatability values were comparable for cleft and noncleft sides. The small method errors, high intraclass correlation coefficients, and comparable repeatability values for cleft and noncleft sides reveal that the new technique is appropriate for clinical use.
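
    Dahlberg's formula is the usual choice for double-determination studies of this kind (the abstract does not state which formula the authors used, so this is an assumption). It computes the method error from paired repeat measurements:

```python
import math

def dahlberg_method_error(first, second):
    # ME = sqrt(sum(d_i^2) / (2n)) over n duplicated measurements; values
    # under ~1 mm for distances and ~1.5 deg for angles would meet the
    # acceptance levels reported in the abstract.
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
```
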

  16. Efficient visualization of urban spaces

    NASA Astrophysics Data System (ADS)

    Stamps, A. E.

    2012-10-01

    This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.
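
    The pooling step behind such a meta-analysis can be sketched with the common fixed-effect (inverse-variance) scheme; this is a generic formulation, and the effect sizes and variances below are hypothetical, not the chapter's data.

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic pooling: each study's effect size is
    weighted by the inverse of its variance; the pooled variance is the
    reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    return mean, 1.0 / total

# Hypothetical effect sizes from four experiments.
effects = [0.42, 0.55, 0.48, 0.30]
variances = [0.010, 0.020, 0.015, 0.025]
mean, var = pooled_effect(effects, variances)
print(round(mean, 3), round(var, 4))
```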

  17. Autonomous navigation method for substation inspection robot based on travelling deviation

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting

    2017-06-01

    A new method of edge detection in the substation environment is proposed, which realizes autonomous navigation of the substation inspection robot. First, the road image and information are obtained with an image acquisition device. Second, the noise in the region of interest, which is selected from the road image, is removed with a digital image processing algorithm; the road edge is extracted by the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travelling deviation is obtained. The robot's walking route is controlled according to the travelling deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
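
    The final control step can be sketched as follows. This is a minimal illustration of deviation-threshold steering under assumed distances and an assumed threshold, not the paper's actual controller; the Canny/Hough edge-extraction stages are omitted.

```python
def travel_deviation(d_left, d_right):
    """Deviation of the robot from the lane centre, taken here as half the
    difference between its distances to the left and right road boundaries
    (positive = drifted toward the right boundary)."""
    return (d_left - d_right) / 2.0

def steering_command(d_left, d_right, threshold=0.05):
    """Simple bang-bang correction against a preset deviation threshold
    (threshold in the same units as the distances, e.g. metres)."""
    dev = travel_deviation(d_left, d_right)
    if dev > threshold:
        return "steer_left"
    if dev < -threshold:
        return "steer_right"
    return "straight"

print(steering_command(1.30, 1.10))  # drifted toward the right boundary
print(steering_command(1.00, 1.02))  # within the deadband: keep straight
```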

  18. SLOPE STABILITY EVALUATION AND EQUIPMENT SETBACK DISTANCES FOR BURIAL GROUND EXCAVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCSHANE DS

    2010-03-25

    After 1970, transuranic (TRU) and suspect-TRU waste was buried in the ground with the intention that at some later date the waste would be retrieved and processed into a configuration for long-term storage. To retrieve this waste the soil must be removed (excavated). Sloping the bank of the excavation is the method used to keep the excavation from collapsing and to provide protection for workers retrieving the waste. The purpose of this paper is to document the minimum distance (setback) that equipment must stay from the edge of the excavation to maintain a stable slope. This evaluation examines the equipment setback distance by dividing the equipment into two categories: (1) equipment used for excavation and (2) equipment used for retrieval. The section on excavation equipment also discusses techniques used for excavation, including the process of benching. Calculations 122633-C-004, 'Slope Stability Analysis' (Attachment A), and 300013-C-001, 'Crane Stability Analysis' (Attachment B), have been prepared to support this evaluation. As shown in the calculations, the soil has the following properties: unit weight 110 pounds per cubic foot, and friction angle (natural angle of repose) 38°, i.e., 1.28 horizontal to 1 vertical. Setback distances are measured from the top edge of the slope to the wheels/tracks of the vehicles and heavy equipment being utilized. The computer program utilized in the calculation uses the center of the wheel or track load for the analysis, and this difference is accounted for in this evaluation.
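
    The quoted slope geometry can be reproduced directly: a friction angle of 38° implies a horizontal run of depth / tan(38°), which is about 1.28 per unit of depth. A minimal sketch of just this geometric relation; the actual setback in the evaluation also accounts for equipment loads, which this ignores.

```python
import math

def slope_run(depth, friction_angle_deg=38.0):
    """Horizontal run needed for a slope cut at the soil's natural angle
    of repose: run = depth / tan(angle). At 38 degrees this reproduces
    the 1.28 horizontal to 1 vertical ratio quoted in the evaluation."""
    return depth / math.tan(math.radians(friction_angle_deg))

print(round(slope_run(1.0), 2))   # horizontal run per unit of depth
print(round(slope_run(20.0), 1))  # run for a 20 ft deep excavation
```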

  19. Accessibility to primary health care in Belgium: an evaluation of policies awarding financial assistance in shortage areas

    PubMed Central

    2013-01-01

    Background: In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and to explore how they perform relative to more advanced GIS-based methods. Methods: Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. Results: The official method used by policy makers in Belgium (calculating the PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. Conclusions: The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is their aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of the conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g., census tracts), takes interaction between physicians into account, and considers distance decay. 
    While methodological differences and modifiable areal unit problems have so far remained largely overlooked in health care research, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used. PMID:23964751
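
    The catchment idea can be sketched with the basic two-step floating catchment area (2SFCA); the E2SFCA of the study additionally applies distance-decay weights inside each catchment, which is omitted here for brevity. All sites, supplies, and distances below are hypothetical.

```python
def two_step_fca(physicians, populations, dist, d0):
    """Minimal two-step floating catchment area (2SFCA) sketch.
    Step 1: each physician site j gets a supply ratio R_j = S_j divided by
    the population within catchment radius d0.
    Step 2: each population site i sums the R_j of all physician sites
    within d0, giving an accessibility score (physicians per resident)."""
    R = {}
    for j, S in physicians.items():
        demand = sum(P for i, P in populations.items() if dist[(i, j)] <= d0)
        R[j] = S / demand if demand else 0.0
    return {i: sum(R[j] for j in physicians if dist[(i, j)] <= d0)
            for i in populations}

physicians = {"J1": 2, "J2": 1}       # physicians per site (hypothetical)
populations = {"A": 1000, "B": 500}   # residents per tract (hypothetical)
dist = {("A", "J1"): 5, ("A", "J2"): 12, ("B", "J1"): 8, ("B", "J2"): 4}
result = two_step_fca(physicians, populations, dist, d0=10)
print(result)
```

Tract B scores higher than tract A because both physician sites fall within its catchment.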

  20. [Bloodstain pattern analysis on examples from practice: Are calculations with application parabolic trajectory usable?].

    PubMed

    Makovický, Peter; Matlach, Radek; Pokorná, Olga; Mošna, František; Makovický, Pavol

    2015-01-01

    Bloodstain pattern analysis (BPA) is useful in forensic medicine, but the method is not commonly used in Czech and Slovak criminology. The objective of this work is to calculate the impact length, height, and splashing distance of blood drops, and to compare the results with the real values from specific cases. Calculating the angle of incidence of blood drops using sin α is also compared with a form using tan α. For this purpose we used two cases from practice of differing character, with well-preserved and readable bloodstains. Selected bloodstains were documented in order to calculate the angle of incidence of the blood drops and the origin of the splashes. For each drop of blood, the distance of impact (x), the height of the sprayed drop (y), and the length of its flight path (l) were determined. The obtained data were retrospectively analysed with two models: the first, a straight-line model represented by a triangle (M1), and the second, a parabolic model (M2). The formulae were derived using the Euler substitution. The results show that the angle of incidence of a blood drop can be calculated with both sin α and tan α: the triangle model is appropriate for the application of sin α, while the parabolic model requires calculating the angle of incidence of the blood drop with tan α. The parabola is useful for BPA. Workplace training seminars on BPA, primarily intended for forensic investigators, should be provided in the Czech and Slovak Republics. We recommend the use of this method during investigations and for the verification of acts in forensic practice.
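
    A minimal sketch of the underlying trigonometry, assuming the classic BPA relations (sin α = stain width / stain length, and the straight-line triangle model z = x·tan α); the paper's parabolic model corrects the triangle model for gravity and is not reproduced here. The stain dimensions are hypothetical.

```python
import math

def impact_angle(width, length):
    """Angle of incidence of a blood drop from its elliptical stain,
    via the classic BPA relation sin(alpha) = width / length."""
    return math.degrees(math.asin(width / length))

def origin_height(horizontal_distance, alpha_deg):
    """Straight-line (triangle) model: height of the origin above the
    surface follows from z = x * tan(alpha), with x the horizontal
    distance from the stain to the area of convergence."""
    return horizontal_distance * math.tan(math.radians(alpha_deg))

alpha = impact_angle(4.0, 8.0)            # 4 mm wide, 8 mm long stain
print(round(alpha, 1))                    # degrees
print(round(origin_height(50.0, alpha), 1))  # stain 50 cm from convergence
```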

  1. Automated Determination of Magnitude and Source Length of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.

    2017-12-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave trains, including P, S, and surface phases, which take tens of minutes to reach stations at tele-seismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically derived relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude by considering P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information of large shallow seismic events based on real-time data of a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. 
    This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus the source duration time), depending on the epicentral distances. It may be a promising aid for disaster mitigation right after a damaging earthquake, especially when dealing with tsunami evacuation and emergency rescue.
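
    The regression idea can be sketched as below. The functional form (magnitude from maximum P-wave displacement, source duration, and distance) follows the style of Hara [2007], but the coefficients are illustrative placeholders, not the published values.

```python
import math

def empirical_magnitude(p_disp, duration, distance, a=0.8, b=0.7, c=0.8, const=6.5):
    """Hara-style empirical magnitude regression:
    M = a*log10(P-displacement) + b*log10(duration) + c*log10(distance) + const,
    with displacement in metres, duration in seconds, distance in km.
    The coefficients are illustrative placeholders, not Hara's [2007]
    published regression values."""
    return (a * math.log10(p_disp) + b * math.log10(duration)
            + c * math.log10(distance) + const)

m_small = empirical_magnitude(1e-4, 10.0, 6000.0)   # short, weak P arrival
m_large = empirical_magnitude(1e-3, 100.0, 6000.0)  # long, strong P arrival
print(round(m_small, 2), round(m_large, 2))
```

Larger displacements and longer durations map monotonically onto larger magnitudes, which is why a fast, array-based duration estimate directly speeds up the magnitude estimate.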

  2. Automated Determination of Magnitude and Source Extent of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun

    2017-04-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave trains, including P, S, and surface phases, which take tens of minutes to reach stations at tele-seismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically derived relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude by considering P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information of large shallow seismic events based on real-time data of a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. 
    This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus the source duration time), depending on the epicentral distances. It may be a promising aid for disaster mitigation right after a damaging earthquake, especially when dealing with tsunami evacuation and emergency rescue.

  3. Evaluation of radio-tracking and strip transect methods for determining foraging ranges of Black-Legged Kittiwakes

    USGS Publications Warehouse

    Ostrand, William D.; Drew, G.S.; Suryan, R.M.; McDonald, L.L.

    1998-01-01

    We compared strip transect and radio-tracking methods of determining foraging range of Black-legged Kittiwakes (Rissa tridactyla). The mean distance birds were observed from their colony determined by radio-tracking was significantly greater than the mean value calculated from strip transects. We determined that this difference was due to two sources of bias: (1) as distance from the colony increased, the area of available habitat also increased resulting in decreasing bird densities (bird spreading). Consequently, the probability of detecting birds during transect surveys also would decrease as distance from the colony increased, and (2) the maximum distance birds were observed from the colony during radio-tracking exceeded the extent of the strip transect survey. We compared the observed number of birds seen on the strip transect survey to the predictions of a model of the decreasing probability of detection due to bird spreading. Strip transect data were significantly different from modeled data; however, the field data were consistently equal to or below the model predictions, indicating a general conformity to the concept of declining detection at increasing distance. We conclude that radio-tracking data gave a more representative indication of foraging distances than did strip transect sampling. Previous studies of seabirds that have used strip transect sampling without accounting for bird spreading or the effects of study-area limitations probably underestimated foraging range.
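
    The "bird spreading" bias can be made concrete: birds foraging uniformly in all directions at range r are spread over an annulus whose area grows in proportion to r, so observed density falls off as 1/r even with perfect detection. A minimal sketch with hypothetical numbers.

```python
import math

def expected_density(n_birds, r, band_width=1.0):
    """If n_birds forage uniformly over all directions at range r from the
    colony, they occupy an annulus of area 2*pi*r*band_width, so observed
    density falls as 1/r with no change in detectability at all."""
    return n_birds / (2 * math.pi * r * band_width)

d5 = expected_density(1000, 5.0)    # density at 5 km from the colony
d20 = expected_density(1000, 20.0)  # density at 20 km
print(round(d5 / d20, 1))           # fourfold dilution purely from geometry
```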

  4. An adaptive Fuzzy C-means method utilizing neighboring information for breast tumor segmentation in ultrasound images.

    PubMed

    Feng, Yuan; Dong, Fenglin; Xia, Xiaolong; Hu, Chun-Hong; Fan, Qianmin; Hu, Yanle; Gao, Mingyuan; Mutic, Sasa

    2017-07-01

    Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for computer-aided diagnosis (CAD) and US-guided breast procedures. However, the intrinsic properties of US images, such as low contrast and blurry boundaries, pose challenges to automatic segmentation of the breast tumor. Therefore, the purpose of this study was to propose a segmentation algorithm that can contour the breast tumor in US images. To utilize the neighbor information of each pixel, a Hausdorff distance based fuzzy c-means (FCM) method was adopted. The size of the neighbor region was adaptively updated by comparing the mutual information between regions. The objective function of the clustering process was updated by a combination of the Euclidean distance and the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparison with three experts' manual segmentations. The results were also compared with a kernel-induced distance based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. Results from segmenting 30 patient images showed that the adaptive method had sensitivity, specificity, Jaccard similarity, and Dice coefficient values of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All the metrics except sensitivity were better than those of the non-adaptive algorithm and the conventional FCM. Only the three region-based metrics were better than those of the kernel-induced distance based FCM with spatial constraints. Adaptive inclusion of pixel neighbor information in segmenting US images improved the segmentation performance. 
The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures. © 2017 American Association of Physicists in Medicine.
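
    A minimal sketch of the overlap metrics named above (Jaccard, Dice), together with a symmetric Hausdorff distance between point sets, the kind of set distance the modified FCM builds on. The pixel sets are toy data, and the adaptive FCM itself is not reproduced.

```python
import math

def overlap_metrics(seg, ref):
    """Jaccard similarity and Dice coefficient between a segmentation and
    a reference mask, each given as a set of pixel coordinates."""
    inter = len(seg & ref)
    jaccard = inter / len(seg | ref)
    dice = 2 * inter / (len(seg) + len(ref))
    return jaccard, dice

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point of one set to its nearest point in the other."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

seg = {(0, 0), (0, 1), (1, 0), (1, 1)}  # toy segmentation mask
ref = {(0, 1), (1, 0), (1, 1), (2, 1)}  # toy reference mask
j, d = overlap_metrics(seg, ref)
print(round(j, 2), round(d, 2), hausdorff(seg, ref))
```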

  5. A study of polaritonic transparency in couplers made from excitonic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Mahi R.; Racknor, Chris

    2015-03-14

    We have studied light-matter interaction in hybrid systems of quantum dots and an exciton-polaritonic coupler. The coupler is made by embedding two slabs of an excitonic material (CdS) into a host excitonic material (ZnO). An ensemble of non-interacting quantum dots is doped into the coupler. The bound exciton-polariton states in the coupler are calculated using the transfer matrix method in the presence of the coupling between the external light (photons) and excitons. These bound exciton-polaritons interact with the excitons present in the quantum dots, and the coupler acts as a reservoir. The Schrödinger equation method has been used to calculate the absorption coefficient in the quantum dots. It is found that when the distance between the two slabs (CdS) is greater than the decay length of the evanescent waves, the absorption spectrum has two peaks and one minimum. The minimum corresponds to a transparent state in the system. However, when the distance between the slabs is smaller than the decay length of the evanescent waves, the absorption spectrum has three peaks and two transparent states. In other words, one transparent state can be switched to two transparent states when the distance between the two layers is modified. This could be achieved by applying stress and strain fields. It is also found that the transparent states can be switched on and off by applying an external control laser field.

  6. Numerical Calculation of Non-uniform Magnetization Using Experimental Magnetic Field Data

    NASA Astrophysics Data System (ADS)

    Jhun, Bukyoung; Jhun, Youngseok; Kim, Seung-wook; Han, JungHyun

    2018-05-01

    A relation is derived between the distance from the surface of a magnet and the number of cells required in a numerical calculation to keep the error below a certain threshold. We also developed a method to obtain the magnetization of each part of the magnet from the experimentally measured magnetic field. This method is applied to three magnets with distinct patterns on magnetic-field-viewing film. Each magnet showed a unique pattern of magnetization. We found that the magnet that shows symmetric magnetization on the magnetic-field-viewing film is not uniformly magnetized. This method can be useful for comparing the magnetization of magnets that yield typical magnetic fields with that of magnets that yield atypical ones.

  7. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining the spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied for determining the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
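
    The G-statistic as a histogram distance can be sketched as a 2×K contingency-table G-test; this is a standard formulation, assumed here since the paper's exact variant is not given. The histograms below are toy data.

```python
import math

def g_statistic(h1, h2):
    """G-statistic between two histograms, treated as a 2xK contingency
    table: G = 2 * sum(O * ln(O/E)) over all non-empty cells, where the
    expected counts E come from the marginal totals. Identical
    distributions give G = 0; larger G means larger distance."""
    n1, n2 = sum(h1), sum(h2)
    total = n1 + n2
    g = 0.0
    for a, b in zip(h1, h2):
        col = a + b
        for obs, rowsum in ((a, n1), (b, n2)):
            if obs:
                expected = rowsum * col / total
                g += obs * math.log(obs / expected)
    return 2 * g

same = g_statistic([10, 20, 30], [10, 20, 30])  # identical histograms
diff = g_statistic([10, 20, 30], [30, 20, 10])  # reversed histogram
print(round(same, 3), round(diff, 3))
```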

  8. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    PubMed Central

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but these methods have computational performance issues. Although several new methods using high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel UPGMA approach on multiple graphics processing units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over implementations of UPGMA on a modern CPU and a single GPU, respectively. PMID:29051701

  9. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    PubMed

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but these methods have computational performance issues. Although several new methods using high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel UPGMA approach on multiple graphics processing units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over implementations of UPGMA on a modern CPU and a single GPU, respectively.
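
    The arithmetic-mean merging rule that MGUPGMA parallelizes can be sketched with a naive O(n³) serial UPGMA on a toy distance matrix; the GPU version distributes exactly this merge loop across devices.

```python
def upgma(dist, labels):
    """Naive O(n^3) UPGMA: repeatedly merge the two closest clusters; the
    distance from a merged cluster to any other cluster is the size-weighted
    arithmetic mean of its members' distances (the UPGMA update rule)."""
    clusters = {i: (lab, 1) for i, lab in enumerate(labels)}  # id -> (name, size)
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    nxt = len(labels)
    while len(clusters) > 1:
        i, j = min(d, key=d.get)               # closest pair of clusters
        (ni, si), (nj, sj) = clusters[i], clusters[j]
        del clusters[i], clusters[j]
        for k in clusters:                     # weighted-average update
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            d[(min(nxt, k), max(nxt, k))] = (si * dik + sj * djk) / (si + sj)
        d.pop((i, j))
        clusters[nxt] = ("(%s,%s)" % (ni, nj), si + sj)
        nxt += 1
    return next(iter(clusters.values()))[0]    # Newick-like topology string

# Toy distance matrix over four taxa.
labels = ["A", "B", "C", "D"]
dist = [[0, 2, 8, 8],
        [2, 0, 8, 8],
        [8, 8, 0, 4],
        [8, 8, 4, 0]]
print(upgma(dist, labels))
```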

  10. Accessibility to primary health care in Belgium: an evaluation of policies awarding financial assistance in shortage areas.

    PubMed

    Dewulf, Bart; Neutens, Tijs; De Weerdt, Yves; Van de Weghe, Nico

    2013-08-22

    In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and to explore how they perform relative to more advanced GIS-based methods. Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. The official method used by policy makers in Belgium (calculating the PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is their aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of the conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g., census tracts), takes interaction between physicians into account, and considers distance decay. 
    While methodological differences and modifiable areal unit problems have so far remained largely overlooked in health care research, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used.

  11. Pair-correlation function of a metastable helium Bose-Einstein condensate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zin, Pawel; Trippenbach, Marek; Gajda, Mariusz

    2004-02-01

    The pair-correlation function is one of the basic quantities to characterize the coherence properties of a Bose-Einstein condensate. We calculate this function in the experimentally important case of a zero temperature Bose-Einstein condensate in a metastable triplet helium state using the variational method with a pair-excitation ansatz. We compare our result with a pair-correlation function obtained for the hard-sphere potential with the same scattering length. Both functions are practically indistinguishable for distances greater than the scattering length. At smaller distances, due to interatomic interactions, the helium condensate shows strong correlations.
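
    For the hard-sphere comparison, the zero-energy scattering wavefunction psi(r) = 1 - a/r gives g(r) proportional to (1 - a/r)^2 outside the core. The sketch below assumes this standard low-energy form; the paper's variational result for helium differs from it at short range.

```python
def hard_sphere_g(r, a):
    """Pair-correlation of a dilute hard-sphere gas at zero energy:
    g(r) = 0 inside the core (r <= a) and (1 - a/r)^2 outside, which
    approaches 1 for r >> a, i.e. the two functions become
    indistinguishable at distances much larger than the scattering
    length a."""
    return 0.0 if r <= a else (1.0 - a / r) ** 2

a = 1.0  # scattering length in arbitrary units
print(hard_sphere_g(0.5, a),
      round(hard_sphere_g(2.0, a), 2),
      round(hard_sphere_g(100.0, a), 2))
```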

  12. Performance evaluation of the zero-multipole summation method in modern molecular dynamics software.

    PubMed

    Sakuraba, Shun; Fukuda, Ikuo

    2018-05-04

    The zero-multipole summation method (ZMM) is a cutoff-based method for calculating electrostatic interactions in molecular dynamics simulations, utilizing an electrostatic neutralization principle as its physical basis. Since the accuracy of the ZMM has been shown to be sufficient in previous studies, it is highly desirable to clarify its practical performance. In this paper, the performance of the ZMM is compared with that of the smooth particle mesh Ewald method (SPME), where both methods are implemented in the molecular dynamics software package GROMACS. Extensive performance comparisons against a highly optimized, parameter-tuned SPME implementation are performed for water systems of various sizes and two protein-water systems. We analyze in detail the dependence of the performance on the potential parameters and the number of CPU cores. Even though the ZMM uses a larger cutoff distance than the SPME does, the performance of the ZMM is comparable to or better than that of the SPME. This is because the ZMM does not require a time-consuming electrostatic convolution and because it gains short neighbor-list distances due to the smooth damping feature of the pairwise potential function near the cutoff length. We found, in particular, that the ZMM with quadrupole or octupole cancellation and no damping factor is an excellent candidate for the fast calculation of electrostatic interactions. © 2018 Wiley Periodicals, Inc.
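
    The "smooth damping near the cutoff" idea can be illustrated with a generic shifted-force Coulomb pair potential, in which both the potential and its first derivative vanish at the cutoff. Note this is a stand-in illustration of smooth cutoff schemes, not the actual ZMM pairwise function.

```python
def shifted_force_coulomb(r, q1, q2, rc):
    """Generic smoothly damped cutoff electrostatics (illustration only,
    not the ZMM pair potential): V(r) = q1*q2*(1/r - 1/rc + (r - rc)/rc^2).
    Both V and dV/dr go to zero at r = rc, so pair interactions switch
    off at the neighbour-list cutoff without any discontinuity."""
    if r >= rc:
        return 0.0
    return q1 * q2 * (1.0 / r - 1.0 / rc + (r - rc) / rc ** 2)

print(shifted_force_coulomb(12.0, 1.0, -1.0, 12.0))   # exactly zero at rc
print(shifted_force_coulomb(6.0, 1.0, -1.0, 12.0))    # attractive inside rc
```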

  13. VOFTools - A software package of calculation tools for volume of fluid methods using general convex grids

    NASA Astrophysics Data System (ADS)

    López, J.; Hernández, J.; Gómez, P.; Faura, F.

    2018-02-01

    The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction, and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
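
    The face-decomposition idea can be illustrated with the standard divergence-theorem volume formula, fanning each face into triangles; this is a generic sketch, not VOFTools' actual (projection-based) routine.

```python
def polyhedron_volume(vertices, faces):
    """Volume of a closed polyhedron by the divergence theorem: each face
    (a tuple of vertex indices, counter-clockwise seen from outside) is
    fanned into triangles, and each triangle contributes the signed volume
    of the tetrahedron it forms with the origin (triple product / 6)."""
    vol = 0.0
    for face in faces:
        x0, y0, z0 = vertices[face[0]]
        for i in range(1, len(face) - 1):
            x1, y1, z1 = vertices[face[i]]
            x2, y2, z2 = vertices[face[i + 1]]
            # scalar triple product v0 . (v1 x v2)
            vol += (x0 * (y1 * z2 - z1 * y2)
                    - y0 * (x1 * z2 - z1 * x2)
                    + z0 * (x1 * y2 - y1 * x2))
    return vol / 6.0

# Unit cube with outward-oriented faces; expected volume is 1.
V = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
     (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
F = [(0, 3, 2, 1), (4, 5, 6, 7), (0, 1, 5, 4),
     (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
print(polyhedron_volume(V, F))
```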

  14. Investigation of thermodynamic and mechanical properties of AlyIn1-yP alloys by statistical moment method

    NASA Astrophysics Data System (ADS)

    Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Tuyen, Nguyen Viet; Hai, Tran Thi; Hieu, Ho Khac

    2018-03-01

    The thermodynamic and mechanical properties of III-V zinc-blende AlP and InP semiconductors and their alloys have been studied in detail with the statistical moment method, taking into account the anharmonicity effects of the lattice vibrations. The nearest-neighbor distance, thermal expansion coefficient, bulk moduli, and specific heats at constant volume and constant pressure of zinc-blende AlP, InP and AlyIn1-yP alloys are calculated as functions of temperature. The statistical moment method calculations are performed using the many-body Stillinger-Weber potential. The concentration dependences of the thermodynamic quantities of zinc-blende AlyIn1-yP crystals have also been discussed and compared with experimental results. Our results are in reasonable agreement with earlier density functional theory calculations and can provide useful qualitative information for future experiments. The moment method can then be developed further for studying the atomistic structure and thermodynamic properties of nanoscale materials as well.

  15. Measurements and analyses of the distribution of the radioactivity induced by the secondary neutrons produced by 17-MeV protons in compact cyclotron facility

    NASA Astrophysics Data System (ADS)

    Matsuda, Norihiro; Izumi, Yuichi; Yamanaka, Yoshiyuki; Gandou, Toshiyuki; Yamada, Masaaki; Oishi, Koji

    2017-09-01

    Measurements of the reaction rates induced by secondary neutrons produced from beam losses of 17-MeV protons were conducted at a compact cyclotron facility with the foil activation method. The experimentally obtained distribution of the 197Au(n,γ)198Au reaction rates on the concrete walls suggests that a target and an electrostatic deflector, machine components for beam extraction of the compact cyclotron, are the principal beam-loss points. The measurements are compared with calculations by the Monte Carlo code PHITS. The calculated results based on the beam losses are in good agreement with the measured ones, within 21%. In this compact cyclotron facility, an exponential attenuation with distance from the electrostatic deflector was observed in the distributions of the measured reaction rates, which is looser than the fall-off given by the inverse square of the distance.
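
    The contrast between exponential attenuation and inverse-square fall-off can be sketched with hypothetical normalised rates; the relaxation length below is an invented illustration value, not a fitted result from the paper.

```python
import math

def inverse_square(rate0, r0, r):
    """Reaction rate falling with the inverse square of distance from a
    point-like loss point, normalised to rate0 at reference distance r0."""
    return rate0 * (r0 / r) ** 2

def exponential(rate0, r0, r, length):
    """Exponential attenuation with relaxation length `length`: the looser
    fall-off, since scattered neutrons keep feeding the field at range."""
    return rate0 * math.exp(-(r - r0) / length)

# Hypothetical: rates normalised to 1 at 2 m; relaxation length 4 m.
for r in (2.0, 4.0, 8.0):
    print(r, round(inverse_square(1.0, 2.0, r), 3),
          round(exponential(1.0, 2.0, r, 4.0), 3))
```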

  16. Dissolution comparisons using a Multivariate Statistical Distance (MSD) test and a comparison of various approaches for calculating the measurements of dissolution profile comparison.

    PubMed

    Cardot, J-M; Roudier, B; Schütz, H

    2017-07-01

    The f2 test is generally used for comparing dissolution profiles. In cases of high variability, the f2 test is not applicable, and the Multivariate Statistical Distance (MSD) test is frequently proposed as an alternative by the FDA and EMA. The guidelines provide only general recommendations. MSD tests can be performed either on raw data, with or without time as a variable, or on model parameters. In addition, data can be limited, as in the case of the f2 test, to dissolution values of up to 85%, or all available data can be used. In the context of the present paper, the recommended calculation includes all raw dissolution data up to the first point greater than 85% as a variable, without the various times as parameters. The proposed MSD overcomes several drawbacks found in other methods.
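    The f2 similarity factor that the abstract contrasts with the MSD test has a standard closed form in the FDA/EMA guidance; a minimal sketch, with illustrative dissolution values:

```python
import math

def f2_similarity(ref, test):
    """Standard f2 similarity factor for two dissolution profiles sampled
    at the same time points; f2 >= 50 is conventionally read as 'similar'."""
    if len(ref) != len(test) or not ref:
        raise ValueError("profiles must be non-empty and equal length")
    # mean squared difference between reference and test percent dissolved
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

# Identical profiles give the maximum f2 of 100.
print(round(f2_similarity([20, 45, 75, 90], [20, 45, 75, 90]), 1))  # 100.0
```

    A uniform 10% offset between the profiles lands almost exactly on the f2 = 50 similarity boundary, which is why the test becomes unreliable when variability is high.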

  17. Measurement of molecular length of self-assembled monolayer probed by localized surface plasmon resonance

    NASA Astrophysics Data System (ADS)

    Ito, Juri; Kajikawa, Kotaro

    2016-02-01

    We propose a method to measure the variation of the molecular length of self-assembled monolayers (SAMs) when they are exposed to solutions at different pH. Surface-immobilized gold nanospheres (SIGNs) show a strong absorption peak at wavelengths of 600-800 nm when illuminated with p-polarized light. The peak wavelength depends on the gap distance between the SIGNs and the substrate; the gap is supported by the SAM molecules. From an analytical calculation based on a multipole expansion, the relation between the peak wavelength of the SIGN structures and the gap distance is obtained, allowing the molecular length of the SAM to be evaluated through optical absorption spectroscopy of the SIGN structures. The molecular length of the SIGN structure was measured in air, water, and acidic and basic solutions. The molecular lengths were found to be longer in acidic solutions.

  18. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework has been developed to enable the assessment and comparison of both dynamical and statistical downscaling methods. In the first validation experiment, the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997 - 2008. This allows the isolated skill of the downscaling methods to be investigated without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation, we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one European-wide set of 86 stations and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i.) correlation matrices, ii.) pairwise joint threshold exceedances, and iii.) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related, for instance, to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable when the dependency of variability at different locations is relevant for the user.
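    A decorrelation length of the kind derived here can be estimated from sampled correlation-distance pairs; a minimal sketch assuming a simple 1/e threshold and linear interpolation (the VALUE analysis may define it differently):

```python
import math

def decorrelation_length(distances, correlations, threshold=1 / math.e):
    """Distance at which correlation first drops below the threshold,
    estimated by linear interpolation between adjacent samples."""
    pairs = sorted(zip(distances, correlations))
    for (d0, c0), (d1, c1) in zip(pairs, pairs[1:]):
        if c0 >= threshold > c1:
            # interpolate the crossing point between the two samples
            return d0 + (c0 - threshold) * (d1 - d0) / (c0 - c1)
    return None  # correlation never drops below threshold in the sample

# Made-up station correlations decaying with separation distance (km).
print(round(decorrelation_length([0, 100, 200, 300],
                                 [1.0, 0.6, 0.3, 0.1]), 1))  # 177.4
```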

  19. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of the mobile Internet has brought WiFi indoor positioning under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm based on weighted fusion. The proposed algorithm builds on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select candidate fingerprints and shorten the positioning time. After that, it uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and then calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of the WiFi signal strength to smooth WiFi signal fluctuations, and the improved joint probability introduces a logarithmic calculation to reduce the differences between probability values. Comparing the proposed algorithm with the Euclidean-distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
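    One plausible reading of the "improved Euclidean distance" described above is to down-weight each access point's signal difference by the standard deviation observed during offline acquisition; the exact formula is not given in the abstract, so this is a sketch, not the paper's method:

```python
import math

def weighted_euclidean(rss_online, fingerprint_mean, fingerprint_std):
    """Euclidean distance between an online RSS vector and a fingerprint,
    with each access point's difference scaled by its offline standard
    deviation so that noisy access points contribute less (an assumed form)."""
    total = 0.0
    for x, mu, sigma in zip(rss_online, fingerprint_mean, fingerprint_std):
        total += ((x - mu) / (sigma + 1e-9)) ** 2  # guard against zero std
    return math.sqrt(total)

# The second AP is noisy (std = 5 dBm), so its 5 dBm deviation counts
# the same as a 1 dBm deviation on the stable first AP.
d = weighted_euclidean([-60, -75], [-62, -70], [1.0, 5.0])
print(round(d, 3))  # 2.236
```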

  20. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-08-31

    The rapid development of the mobile Internet has brought WiFi indoor positioning under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm based on weighted fusion. The proposed algorithm builds on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select candidate fingerprints and shorten the positioning time. After that, it uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and then calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of the WiFi signal strength to smooth WiFi signal fluctuations, and the improved joint probability introduces a logarithmic calculation to reduce the differences between probability values. Comparing the proposed algorithm with the Euclidean-distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy.

  1. On the vibrational spectra and structural parameters of methyl, silyl, and germyl azide from theoretical predictions and experimental data.

    PubMed

    Durig, Douglas T; Durig, M S; Durig, James R

    2005-05-01

    The infrared and Raman spectra of methyl, silyl, and germyl azide (XN3, where X = CH3, SiH3 and GeH3) have been predicted from ab initio calculations with full electron correlation by second-order perturbation theory (MP2) and hybrid density functional theory (DFT) by the B3LYP method with a variety of basis sets. These predicted data are compared to previously reported experimental data, and complete vibrational assignments are provided for all three molecules. It is shown that several of the assignments recently proposed [J. Mol. Struct. (Theochem.) 434 (1998) 1] for methyl azide are not correct. Structural parameters for CH3N3 and GeH3N3 have been obtained by combining the previously reported microwave rotational constants with the ab initio MP2/6-311+G(d,p) predicted values. These "adjusted r0" parameters have very small uncertainties of +/-0.003 A for the XH distances, a maximum of +/-0.005 A for the heavy-atom distances, and +/-0.5 degrees for the angles. The distance of the terminal NN bond, which is nearly a triple bond, is much better predicted by the B3LYP calculations, whereas the fundamental frequencies are better predicted by the scaled ab initio calculations. The results are discussed and compared to those obtained for some similar molecules.

  2. Magnetic ordering in intermetallic La1-xTbxMn2Si2 compounds

    NASA Astrophysics Data System (ADS)

    Korotin, Dm. M.; Streltsov, S. V.; Gerasimov, E. G.; Mushnikov, N. V.; Zhidkov, I. S.; Kukharenko, A. I.; Finkelstein, L. D.; Cholakh, S. O.; Kurmaev, E. Z.

    2018-05-01

    The magnetic structures and magnetic phase transitions in intermetallic layered La1-xTbxMn2Si2 compounds (ThCr2Si2-type structure) are investigated using first-principles methods and XPS measurements. The experimentally observed transition from ferromagnetic (FM) to antiferromagnetic (AFM) ordering of the Mn sublattice with increasing terbium concentration is successfully reproduced in calculations with a collinear magnetic moments model. The FM → AFM change of interplane magnetic ordering at small x is irrelevant to the number of f-electrons of the rare-earth ion; instead, it was shown to be related to the Mn-Mn in-plane distance. The calculated critical Tb concentration for this transition, x ≈ 0.14, corresponds to a Mn-Mn in-plane distance of 0.289 nm, very close to the experimentally observed transition distance of 0.287 nm. The crystal cell compression due to substitution increases the overlap between the Mn dxz,yz and rare-earth d orbitals. The resulting hybridized states manifest themselves as an additional peak in the density of states. We suggest that the corresponding interlayer Mn-R-Mn superexchange interaction stabilizes AFM ordering in these compounds at Tb doping levels x > 0.2. The results of the DFT calculations are in agreement with X-ray photoemission spectra for La1-xTbxMn2Si2.

  3. A Tactical Database for the Low Cost Combat Direction System

    DTIC Science & Technology

    1990-12-01

    another object. Track is a representation of some environmental phenomena converted into accurate estimates of geographical position with respect to...by the method CALCULATE RELATIVE POSITION. In order to obtain a better similarity of methods, the methods OWNSHIP DISTANCE TO PIM, ESTIMATED TIME OF...this mechanism entails the risk that the user will lose all of the work that was done if conflicts are detected and the transaction cannot be committed

  4. Passage of a star by a massive black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noltenius, R.A.; Katz, J.I.

    1982-12-01

    We have calculated the effects on a 1 M/sub sun/ star of passage by a 10/sup 4/ M/sub sun/ point mass (black hole) in an initially parabolic orbit with a variety of pericenter distances. Because this problem is three-dimensional, we use the gridless smoothed particle hydrodynamic method of Lucy, Gingold, and Monaghan. The tidal forces are found to induce rotation and radial and nonradial pulsations of the star. The loss of orbital energy to these internal motions leads to capture of the star by the black hole. For small pericenter distances, the outer layers of the star are disrupted, while at still smaller distances, the entire star is destroyed. These results may be applied to some X-ray sources, active galactic nuclei, and quasars.

  5. [A Study of the Relationship Among Genetic Distances, NIR Spectra Distances, and NIR-Based Identification Model Performance of the Seeds of Maize Inbred Lines].

    PubMed

    Liu, Xu; Jia, Shi-qiang; Wang, Chun-ying; Liu, Zhe; Gu, Jian-cheng; Zhai, Wei; Li, Shao-ming; Zhang, Xiao-dong; Zhu, De-hai; Huang, Hua-jun; An, Dong

    2015-09-01

    This paper explores the relationship among genetic distances, NIR spectral distances, and the performance of NIR-based identification models for the seeds of maize inbred lines. Using 3 groups (15 pairs in total) of maize inbred lines with differing genetic distances as experimental materials, we calculated the genetic distance between these seeds with SSR markers and used the Euclidean distance between the distributed center points of the maize NIR spectra in PCA space as the spectral distance. The BPR method is used to build the identification model of inbred lines, and the identification accuracy is used as a measure of model performance. The results showed that the correlation between genetic distance and spectral distance is 0.9868, and genetic distance has a correlation of 0.9110 with the identification accuracy; both are highly correlated. This means the near-infrared spectra of seeds can reflect the genetic relationships of maize inbred lines. The smaller the genetic distance, the smaller the spectral distance and the poorer the model's ability to discriminate. In practical applications, near-infrared spectral analysis has the potential to be used to analyze the genetic relations of maize inbred lines, contributing to genetic breeding, species identification, purity sorting and so on. Moreover, when creating a NIR-based identification model, the impact of maize inbred lines with closer genetic relationships should be fully considered.
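    The reported correlations (0.9868 and 0.9110) are of the kind a standard Pearson coefficient produces; a minimal sketch with made-up distance values:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. genetic distances vs. NIR spectral distances (values here are
    illustrative, not from the paper)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Perfectly linearly related distances correlate at 1.0.
print(round(pearson([0.1, 0.2, 0.4, 0.8], [1.0, 2.0, 4.0, 8.0]), 6))  # 1.0
```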

  6. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    PubMed Central

    Liu, Jingxian; Wu, Kefeng

    2017-01-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety; the capacities of navigation safety and maritime traffic monitoring could thereby be enhanced. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix. In particular, the top k principal components with a cumulative contribution rate above 95% are extracted by PCA, determining the number of centers k. The k centers are found by the improved automatic center-selection algorithm. In the last step, the improved center clustering algorithm with k clusters is applied to the distance matrix to obtain the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. 
Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with traditional spectral clustering and fast affinity propagation clustering. Experimental results have illustrated its superior performance in terms of quantitative and qualitative evaluations. PMID:28777353
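    The DTW distance used in the first clustering step can be sketched with the classic dynamic-programming recurrence; this 1-D version is illustrative (real AIS points are 2-D positions):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences using the
    standard DP recurrence: each cell adds the local cost to the cheapest
    of the three predecessor alignments (match, insertion, deletion)."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Identical trajectories are at distance 0.
print(dtw_distance([1, 2, 3], [1, 2, 3]))  # 0.0
```

    Pairwise DTW distances over all trajectories fill the distance matrix that PCA then decomposes in the second step.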

  7. Transverse Densities of Octet Baryons from Chiral Effective Field Theory

    DOE PAGES

    Alarcón, Jose Manuel; Hiller Blin, Astrid N.; Weiss, Christian

    2017-03-24

    Transverse densities describe the distribution of charge and current at fixed light-front time and provide a frame-independent spatial representation of hadrons as relativistic systems. In this paper, we calculate the transverse densities of the octet baryons at peripheral distances b = O(Mπ^-1) in an approach that combines chiral effective field theory (χEFT) and dispersion analysis. The densities are represented as dispersive integrals of the imaginary parts of the baryon electromagnetic form factors in the timelike region (spectral functions). The spectral functions on the two-pion cut at t > 4Mπ^2 are computed using relativistic χEFT with octet and decuplet baryons in the extended on-mass-shell renormalization scheme. The calculations are extended into the ρ-meson mass region using a dispersive method that incorporates the timelike pion form-factor data. The approach allows us to construct densities at distances b > 1 fm with controlled uncertainties. Finally, our results provide insight into the peripheral structure of nucleons and hyperons and can be compared with empirical densities and lattice-QCD calculations.

  8. Google Maps offers a new way to evaluate claudication.

    PubMed

    Khambati, Husain; Boles, Kim; Jetty, Prasad

    2017-05-01

    Accurate determination of walking capacity is important for the clinical diagnosis and management plan for patients with peripheral arterial disease. The current "gold standard" of measurement is walking distance on a treadmill. However, treadmill testing is not always reflective of the patient's natural walking conditions, and it may not be fully accessible in every vascular clinic. The objective of this study was to determine whether Google Maps, the readily available GPS-based mapping tool, offers an accurate and accessible method of evaluating walking distances in vascular claudication patients. Patients presenting to the outpatient vascular surgery clinic between November 2013 and April 2014 at the Ottawa Hospital with vasculogenic calf, buttock, and thigh claudication symptoms were identified and prospectively enrolled in our study. Onset of claudication symptoms and maximal walking distance (MWD) were evaluated using four tools: history; Walking Impairment Questionnaire (WIQ), a validated claudication survey; Google Maps distance calculator (patients were asked to report their daily walking routes on the Google Maps-based tool runningmap.com, and walking distances were calculated accordingly); and treadmill testing for onset of symptoms and MWD, recorded in a double-blinded fashion. Fifteen patients were recruited for the study. Determination of walking distances using Google Maps proved to be more accurate than by both clinical history and WIQ, correlating highly with the gold standard of treadmill testing for both claudication onset (r = .805; P < .001) and MWD (r = .928; P < .0001). In addition, distances were generally under-reported on history and WIQ. The Google Maps tool was also efficient, with reporting times averaging below 4 minutes. For vascular claudicants with no other walking limitations, Google Maps is a promising new tool that combines the objective strengths of the treadmill test and incorporates real-world walking environments. 
It offers an accurate, efficient, inexpensive, and readily accessible way to assess walking distances in patients with peripheral vascular disease.

  9. Feature selection from a facial image for distinction of sasang constitution.

    PubMed

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho

    2009-09-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the nine distances, 10 angles and 10 rates of distance features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.

  10. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    PubMed Central

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun

    2009-01-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the nine distances, 10 angles and 10 rates of distance features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here. PMID:19745013

  11. A Wall-Distance-Free k-ω SST Turbulence Model

    NASA Astrophysics Data System (ADS)

    Gleize, Vincent; Burnley, Victor

    2001-11-01

    In the calculation of flows around aircraft and aerodynamic bodies, the Shear-Stress Transport (SST) model by Menter has been used extensively due to its good prediction of flows with adverse pressure gradients. One main drawback of this model is the need to calculate the distance from the wall. While this is not a serious drawback for steady state calculations on non-moving grids, this calculation can become very cumbersome and expensive for unsteady simulations, especially when using unstructured grids. In this case, the wall-distance needs to be determined after each iteration. To avoid this problem, a new model is proposed which provides the benefits of the SST correction and avoids the freestream dependency of the solution, while not requiring the wall-distance. The first results for a wide range of test cases show that this model produces very good agreement with experimental data for flows with adverse pressure gradients, separation zones and shock-boundary layer interactions, closely matching the results obtained with the original SST model. This model should be very useful for unsteady calculations, such as store separation, grid adaptation, and other practical flows.

  12. Determination of GMPE functional form for an active region with limited strong motion data: application to the Himalayan region

    NASA Astrophysics Data System (ADS)

    Bajaj, Ketan; Anbazhagan, P.

    2018-01-01

    Advances in seismic networks have led to different functional forms for developing new ground motion prediction equations (GMPEs) for a region. To date, various guidelines and tools are available for selecting a suitable GMPE for a seismic study area. However, these methods are efficient in quantifying a GMPE but not in determining a proper functional form or in capturing the epistemic uncertainty associated with GMPE selection. In this study, the compatibility of recently available functional forms for active regions is tested for distance and magnitude scaling. The analysis is carried out by determining the residuals between the recorded and predicted spectral acceleration values at different periods. Mixed-effects regressions are performed on the calculated residuals to determine the intra- and interevent residuals. Additionally, spatial correlation is incorporated into the mixed-effects regression by modifying its likelihood function. Distance scaling and magnitude scaling are examined by studying the trends of the intraevent residuals with distance and of the event terms with magnitude, respectively. These trends are then studied statistically for each functional form. Additionally, a genetic algorithm and the Monte Carlo method are used to calculate, respectively, the hinge points and the standard errors of the magnitude and distance scaling for a newly determined functional form. The whole procedure is applied and tested on the available strong-motion data for the Himalayan region. The functional forms tested comprise five Himalayan GMPEs, five GMPEs developed under the NGA-West2 project, two from the Pan-European region, and one from Japan. It is observed that a bilinear functional form with magnitude and distance hinged at Mw 6.5 and 300 km, respectively, is suitable for the Himalayan region. Finally, new regression coefficients for peak ground acceleration are derived for a suitable functional form that governs the attenuation characteristics of the Himalayan region.
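    The bilinear functional form hinged at Mw 6.5 (or at 300 km for distance) can be sketched as a piecewise-linear term; the slopes below are illustrative placeholders, not fitted coefficients from the study:

```python
def bilinear_term(x, hinge, slope_low, slope_high):
    """Bilinear scaling term: a line of one slope below the hinge and
    another slope above it, continuous at the hinge (x = magnitude or
    log-distance in a GMPE functional form)."""
    if x <= hinge:
        return slope_low * (x - hinge)
    return slope_high * (x - hinge)

# Continuous at the hinge: the term is zero at Mw 6.5 on both branches.
print(bilinear_term(6.5, 6.5, 1.2, 0.3))  # 0.0
```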

  13. Measuring geographical accessibility to palliative and end of life (PEoLC) related facilities: a comparative study in an area with well-developed specialist palliative care (SPC) provision.

    PubMed

    Pearson, Clare; Verne, Julia; Wells, Claudia; Polato, Giovanna M; Higginson, Irene J; Gao, Wei

    2017-01-26

    Geographical accessibility is important in accessing healthcare services. Methods of measuring it have evolved alongside advances in technology and data analysis. High correlations between different methods have been detected, but no comparisons exist in the context of palliative and end of life care (PEoLC) studies. To assess how geographical accessibility can affect PEoLC, selection of an appropriate method to capture it is crucial. We therefore aimed to compare methods of measuring the geographical accessibility of decedents to PEoLC-related facilities in South London, an area with well-developed SPC provision. Individual-level death registration data in 2012 (n = 18,165) from the Office for National Statistics (ONS) were linked to area-level PEoLC-related facilities from various sources. Simple and more complex measures of geographical accessibility were calculated using the residential postcodes of the decedents and the postcodes of the nearest hospital, care home and hospice. Distance measures (straight-line, travel network) and travel times along the road network were compared using geographic information system (GIS) mapping and correlation analysis (Spearman rho). Borough-level maps demonstrate similarities between the geographical accessibility measures. Strong positive correlations exist between straight-line and travel distances to the nearest hospital (rho = 0.97), care home (rho = 0.94) and hospice (rho = 0.99). Travel times were also highly correlated with distance measures to the nearest hospital (rho range = 0.84-0.88), care home (rho = 0.88-0.95) and hospice (rho = 0.93-0.95). All correlations were significant at the p < 0.001 level. Distance-based and travel-time measures of geographical accessibility to PEoLC-related facilities in South London are similar, suggesting the choice of measure can be based on ease of calculation.
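    The straight-line measure compared above is typically a great-circle distance between coordinate pairs; a minimal haversine sketch (the study computed its measures from postcodes within a GIS, so this is only the simplest analogue):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('straight-line') distance in kilometres between two
    latitude/longitude points via the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Identical coordinates are at distance zero.
print(haversine_km(51.5, -0.12, 51.5, -0.12))  # 0.0
```

    One degree of latitude spans roughly 111 km, which gives a quick sanity check on the implementation.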

  14. OpenACC directive-based GPU acceleration of an implicit reconstructed discontinuous Galerkin method for compressible flows on 3D unstructured grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lou, Jialin; Xia, Yidong; Luo, Lixiang

    2016-09-01

    In this study, we use a combination of modeling techniques to describe the relationship between the fracture radius that might be achieved in a hypothetical enhanced geothermal system (EGS) and the drilling distance required to create and access those fractures. We use a combination of commonly applied analytical solutions for heat transport in parallel fractures and 3D finite-element method models of more realistic heat extraction geometries. For a conceptual model involving multiple parallel fractures developed perpendicular to an inclined or horizontal borehole, calculations demonstrate that EGS will likely require very large fractures, of greater than 300 m radius, to keep interfracture drilling distances to ~10 km or less. As drilling distances are generally inversely proportional to the square of the fracture radius, drilling costs quickly escalate as the fracture radius decreases. It is important to know, however, whether fracture spacing will be dictated by thermal or mechanical considerations, as the relationship between drilling distance and number of fractures is quite different in each case. Information about the likelihood of hydraulically creating very large fractures comes primarily from petroleum recovery industry data describing hydraulic fractures in shale. Those data suggest that fractures with radii on the order of several hundred meters may, indeed, be possible. The results of this study demonstrate that relatively simple calculations can be used to estimate primary design constraints on a system, particularly regarding the relationship between generated fracture radius and the total length of drilling needed in the fracture creation zone. Comparison of the numerical simulations of more realistic geometries than addressed in the analytical solutions suggests that simple proportionalities can readily be derived to relate a particular flow field.

  15. GENOME-WIDE COMPARATIVE ANALYSIS OF PHYLOGENETIC TREES: THE PROKARYOTIC FOREST OF LIFE

    PubMed Central

    Puigbò, Pere; Wolf, Yuri I.; Koonin, Eugene V.

    2013-01-01

    Genome-wide comparison of phylogenetic trees is becoming an increasingly common approach in evolutionary genomics, and a variety of approaches for such comparison have been developed. In this article we present several methods for comparative analysis of large numbers of phylogenetic trees. To compare phylogenetic trees taking into account the bootstrap support for each internal branch, the Boot-Split Distance (BSD) method is introduced as an extension of the previously developed Split Distance (SD) method for tree comparison. The BSD method implements the straightforward idea that comparison of phylogenetic trees can be made more robust by treating tree splits differentially depending on the bootstrap support. Approaches are also introduced for detecting tree-like and net-like evolutionary trends in the phylogenetic Forest of Life (FOL), i.e., the entirety of the phylogenetic trees for conserved genes of prokaryotes. The principal method employed for this purpose includes mapping quartets of species onto trees to calculate the support of each quartet topology and so to quantify the tree and net contributions to the distances between species. We describe the application of these methods to the analysis of the FOL and the results obtained with them. These results support the concept of the Tree of Life (TOL) as a central evolutionary trend in the FOL as opposed to the traditional view of the TOL as a ‘species tree’. PMID:22399455

  16. Genome-wide comparative analysis of phylogenetic trees: the prokaryotic forest of life.

    PubMed

    Puigbò, Pere; Wolf, Yuri I; Koonin, Eugene V

    2012-01-01

    Genome-wide comparison of phylogenetic trees is becoming an increasingly common approach in evolutionary genomics, and a variety of approaches for such comparison have been developed. In this article, we present several methods for comparative analysis of large numbers of phylogenetic trees. To compare phylogenetic trees taking into account the bootstrap support for each internal branch, the Boot-Split Distance (BSD) method is introduced as an extension of the previously developed Split Distance method for tree comparison. The BSD method implements the straightforward idea that comparison of phylogenetic trees can be made more robust by treating tree splits differentially depending on the bootstrap support. Approaches are also introduced for detecting tree-like and net-like evolutionary trends in the phylogenetic Forest of Life (FOL), i.e., the entirety of the phylogenetic trees for conserved genes of prokaryotes. The principal method employed for this purpose includes mapping quartets of species onto trees to calculate the support of each quartet topology and so to quantify the tree and net contributions to the distances between species. We describe the application of these methods to analyze the FOL and the results obtained with these methods. These results support the concept of the Tree of Life (TOL) as a central evolutionary trend in the FOL as opposed to the traditional view of the TOL as a "species tree."
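    The core idea of a support-weighted split distance can be sketched in a few lines. This is an illustration of the principle described in the abstract (unshared splits penalized in proportion to their bootstrap support), not the authors' exact BSD formula; the data structure and normalisation are assumptions:

    ```python
    def boot_split_distance(splits_a, splits_b):
        """Support-weighted split distance between two trees. Each argument
        maps a split (frozenset of taxa on one side of an internal branch)
        to its bootstrap support in [0, 1]. Splits present in both trees
        contribute nothing; unshared splits contribute their support,
        normalised by the total support in both trees."""
        total = sum(splits_a.values()) + sum(splits_b.values())
        if total == 0:
            return 0.0
        mismatch = sum(s for sp, s in splits_a.items() if sp not in splits_b)
        mismatch += sum(s for sp, s in splits_b.items() if sp not in splits_a)
        return mismatch / total

    # Identical trees score 0; weakly supported conflicting splits cost little:
    a = {frozenset({"A", "B"}): 0.95, frozenset({"C", "D"}): 0.30}
    b = {frozenset({"A", "B"}): 0.90, frozenset({"C", "E"}): 0.20}
    print(boot_split_distance(a, a))  # -> 0.0
    print(round(boot_split_distance(a, b), 3))  # -> 0.213
    ```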

  17. Size and shape measurement in contemporary cephalometrics.

    PubMed

    McIntyre, Grant T; Mossey, Peter A

    2003-06-01

    The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse the craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.

  18. Apparatus for in-situ nondestructive measurement of Young's modulus of plate structures

    NASA Technical Reports Server (NTRS)

    Huang, Jerry Qixin (Inventor); Perez, Robert J. (Inventor); DeLangis, Leo M. (Inventor)

    2005-01-01

    A method and apparatus for determining stiffness of a plate-like structure including a monolithic or composite laminate plate entails disposing a device for generating an acoustical pulse against a surface of the plate and disposing a detecting device against the same surface spaced a known distance from the pulse-generating device, and using the pulse-generating device to emit a pulse so as to create an extensional wave in the plate. The detecting device is used to determine a time of flight of the wave over the known distance, and the wave velocity is calculated. A Young's modulus of the plate is determined by a processor based on the wave velocity. Methods and apparatus for evaluating both isotropic plates and anisotropic laminates are disclosed.
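    The velocity-to-modulus step can be sketched as follows. For a thin plate, the low-frequency extensional wave speed is v = sqrt(E / (rho*(1 - nu^2))), which inverts to give E. This is a generic textbook relation under an assumed Poisson ratio, not the patented apparatus's algorithm, and the material values are illustrative:

    ```python
    def youngs_modulus_from_tof(distance_m, time_s, density_kg_m3, poisson=0.3):
        """Estimate Young's modulus from the extensional-wave time of flight
        over a known sensor separation, using the thin-plate relation
        v = sqrt(E / (rho * (1 - nu^2))). Poisson's ratio is assumed."""
        v = distance_m / time_s                      # wave velocity, m/s
        return density_kg_m3 * v ** 2 * (1.0 - poisson ** 2)

    # Aluminium-like plate: ~5400 m/s over 0.1 m, arriving in ~18.4 microseconds
    E = youngs_modulus_from_tof(0.1, 18.4e-6, 2700.0)
    print(f"{E / 1e9:.1f} GPa")
    ```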

  19. Calculated dipole moment and energy in collision of a hydrogen molecule and a hydrogen atom

    NASA Technical Reports Server (NTRS)

    Patch, R. W.

    1973-01-01

    Calculations were carried out using three Slater-type 1s orbitals in the orthogonalized valence-bond theory of McWeeny. Each orbital exponent was optimized, and the H2 internuclear distance was varied from 7.416 x 10^-11 to 7.673 x 10^-11 m (1.401 to 1.450 bohrs). The intermolecular distance was varied from 1 to 4 bohrs (0.5292 x 10^-10 to 2.117 x 10^-10 m). Linear, scalene, and isosceles configurations were used. A weighted average of the interaction energies was taken for each intermolecular distance. Although energies are tabulated, the principal purpose was to calculate the electric dipole moment and its derivative with respect to the H2 internuclear distance.

  20. Ground-state energy of HeH+

    NASA Astrophysics Data System (ADS)

    Zhou, Bing-Lu; Zhu, Jiong-Ming; Yan, Zong-Chao

    2006-06-01

    The nonrelativistic ground-state energy of 4HeH+ is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10^10 is achieved, which improves on the best previous result of Pavanello [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for 3HeH+ are also presented.

  1. Dropout Rates, Student Momentum, and Course Walls: A New Tool for Distance Education Designers

    ERIC Educational Resources Information Center

    Christensen, Steven S.; Spackman, Jonathan S.

    2017-01-01

    This paper explores a new tool for instructional designers. By calculating and graphing the Student Momentum Indicator (M) for 196 university-level online courses and by employing the constant comparative method within the grounded theory framework, eight distinct graph shapes emerged as meaningful categories of dropout behavior. Several of the…

  2. An improved light microscopical histoquantitative method for the stereological analysis of the rat ventral prostate lobe.

    PubMed

    Romppanen, T; Huttunen, E; Helminen, H J

    1980-07-01

    An improved light microscopical histoquantitative method for the analysis of the stereologic structure of the ventral lobe of the rat prostate is introduced. From paraffin-embedded tissue sections, volumetric fractions of the acinar parenchyma, the glandular epithelium, the glandular lumen, and the interacinar tissue were determined. The surface density of the glandular epithelium and the length density of the glandular tubules per cubic millimeter of tissue were also calculated. The corresponding total quantity of each tissue compartment was computed for the whole ventral lobe based on the weight of the lobe. Using established stereologic laws, the height of the epithelium, the diameter of the glandular tubules, the free distance between the glandular tubules, and the mean distance between the glandular centers were determined. The suitability of the method was tested by analyzing, in addition to normal prostates, the ventral prostates of rats castrated 30 days before sacrifice.

  3. Horizontal fields generated by return strokes

    NASA Technical Reports Server (NTRS)

    Cooray, Vernon

    1991-01-01

    Horizontal fields generated by return strokes play an important role in the interaction of lightning-generated electric fields with power lines. In many recent investigations of the interaction of lightning electromagnetic fields with power lines, the horizontal field was calculated by employing the expression for the tilt of the electric field of a plane wave propagating over finitely conducting earth. Whether this method is suitable for calculating horizontal fields generated by return strokes at distances as close as 200 m is examined; at these close ranges, the use of the wavetilt expression can cause large errors.
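    The wavetilt expression referred to above relates the horizontal and vertical field components of a plane wave over a lossy ground. A minimal sketch of that standard approximation (not Cooray's full field calculation) follows; the ground parameters chosen are illustrative:

    ```python
    import math
    import cmath

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def wave_tilt(freq_hz, eps_r, sigma_s_per_m):
        """Plane-wave tilt ratio E_horizontal / E_vertical over a finitely
        conducting half-space: W = 1 / sqrt(eps_r - j*sigma/(omega*eps0)).
        Standard wavetilt approximation, valid for a plane wave only."""
        omega = 2 * math.pi * freq_hz
        eps_c = eps_r - 1j * sigma_s_per_m / (omega * EPS0)
        return 1.0 / cmath.sqrt(eps_c)

    # Typical ground (eps_r = 10, sigma = 0.01 S/m) at 100 kHz:
    w = wave_tilt(1e5, 10.0, 0.01)
    print(abs(w))   # small tilt: the horizontal field is a few percent of E_v
    ```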

  4. Visualizing Similarity of Appearance by Arrangement of Cards

    PubMed Central

    Nakatsuji, Nao; Ihara, Hisayasu; Seno, Takeharu; Ito, Hiroshi

    2016-01-01

    This study proposes a novel method to extract the configuration of a psychological space by directly measuring subjects' similarity ratings without computational work. Although multidimensional scaling (MDS) is well known as a conventional method for extracting the psychological space, the method requires many pairwise evaluations; the time taken for evaluations increases in proportion to the square of the number of objects in MDS. The proposed method asks subjects to arrange cards on a poster sheet according to the degree of similarity of the objects. To compare the performance of the proposed method with the conventional one, we developed similarity maps of typefaces through the proposed method and through non-metric MDS. We calculated the trace correlation coefficient among all combinations of the configurations for both methods to evaluate the degree of similarity in the obtained configurations. The threshold value of the trace correlation coefficient for statistically discriminating similar configurations was determined based on random data. The ratio of trace correlation coefficients exceeding the threshold value was 62.0%, so the configurations of the typefaces obtained by the proposed method closely resembled those obtained by non-metric MDS. The required duration for the proposed method was approximately one third of the non-metric MDS's duration. In addition, all distances between objects in all the data for both methods were calculated. The frequency of short distances in the proposed method was lower than that of the non-metric MDS, so a relatively small difference was likely to be emphasized among objects in the configuration produced by the proposed method. The card arrangement method we propose here thus serves as an easier and time-saving tool to obtain psychological structures in fields related to similarity of appearance. PMID:27242611

  5. Evaluation of Effective Factors on Travel Time in Optimization of Bus Stops Placement Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Bargegol, Iraj; Ghorbanzadeh, Mahyar; Ghasedi, Meisam; Rastbod, Mohammad

    2017-10-01

    In congested cities, the location and proper design of bus stops in accordance with the unequal distribution of passengers is a crucial issue, both economically and functionally, since it plays an important role in passengers' use of the bus system. Locating bus stops is a complicated problem: reducing the distances between stops decreases walking time, but the total travel time may increase. In this paper, a specified corridor in the city of Rasht in northern Iran is studied. Firstly, a new formula is presented to calculate the travel time, by which the number of stops and, consequently, the travel time can be optimized. A corridor with a specified number of stops and distances between them is addressed, the formulas relating to travel time are created, and its travel time is calculated. The corridor is then modelled using a meta-heuristic method so that the optimal placement of and distances between its bus stops are determined. It was found that alighting and boarding time, along with bus capacity, are the factors that most strongly affect travel time. Consequently, it is better to concentrate on these factors to improve the efficiency of the bus system.
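    The walk-time versus in-vehicle-time tradeoff described above can be made concrete with a generic textbook-style model. This is not the paper's formula; the model structure and every parameter value are assumptions chosen for illustration:

    ```python
    def corridor_travel_time(length_m, spacing_m, walk_speed=1.2,
                             bus_speed=8.0, dwell_s=20.0):
        """Illustrative door-to-door time for one corridor trip: average
        access + egress walk to the nearest stop, plus in-vehicle time
        including a dwell delay at every stop along the corridor."""
        walk = (spacing_m / 2.0) / walk_speed      # average total walk
        n_stops = length_m / spacing_m
        ride = length_m / bus_speed + n_stops * dwell_s
        return walk + ride

    # Closer stops cut walking but add dwell delay; an optimum lies between:
    times = {s: corridor_travel_time(5000.0, s) for s in (200.0, 400.0, 800.0)}
    print(min(times, key=times.get))  # -> 400.0
    ```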

  6. Spiral arms and disc stability in the Andromeda galaxy

    NASA Astrophysics Data System (ADS)

    Tenjes, P.; Tuvikene, T.; Tamm, A.; Kipper, R.; Tempel, E.

    2017-04-01

    Aims: Density waves are often considered as the triggering mechanism of star formation in spiral galaxies. Our aim is to study relations between different star formation tracers (stellar UV and near-IR radiation and emission from H I, CO, and cold dust) in the spiral arms of M 31, to calculate stability conditions in the galaxy disc, and to draw conclusions about possible star formation triggering mechanisms. Methods: We selected fourteen spiral arm segments from the de-projected data maps and compared emission distributions along the cross sections of the segments in different datasets to each other, in order to detect spatial offsets between young stellar populations and the star-forming medium. By using the disc stability condition as a function of perturbation wavelength and distance from the galaxy centre, we calculated the effective disc stability parameters and the least stable wavelengths at different distances. For this we used a mass distribution model of M 31 with four disc components (old and young stellar discs, cold and warm gaseous discs) embedded within the external potential of the bulge, the stellar halo, and the dark matter halo. Each component is considered to have a realistic finite thickness. Results: No systematic offsets between the observed UV and CO/far-IR emission across the spiral segments are detected. The calculated effective stability parameter has a lowest value of Qeff ≃ 1.8 at galactocentric distances of 12-13 kpc. The least stable wavelengths are rather long, with the lowest values starting from ≃ 3 kpc at distances R > 11 kpc. Conclusions: The classical density wave theory is not a realistic explanation for the spiral structure of M 31. Instead, external causes should be considered, such as interactions with massive gas clouds or dwarf companions of M 31.
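    The disc stability calculation above builds on the classical Toomre criterion. A sketch of the single-component gaseous form, Q = c_s * kappa / (pi * G * Sigma), is given below; the paper's effective Q_eff for a multi-component, finite-thickness disc is considerably more involved, and the input values here are rough illustrative numbers, not the paper's data:

    ```python
    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def toomre_q_gas(sound_speed_m_s, kappa_s, surface_density_kg_m2):
        """Classical Toomre stability parameter for a gaseous disc:
        Q > 1 indicates local stability against axisymmetric collapse."""
        return sound_speed_m_s * kappa_s / (math.pi * G * surface_density_kg_m2)

    # Rough disc-galaxy numbers: c_s ~ 8 km/s, kappa ~ 1e-15 s^-1,
    # Sigma ~ 10 solar masses per square parsec, converted to SI:
    sigma_si = 10 * 1.989e30 / (3.086e16 ** 2)   # kg m^-2
    q = toomre_q_gas(8e3, 1e-15, sigma_si)
    print(round(q, 2))
    ```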

  7. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. 
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
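    The adaptive scheme described above (a per-earthquake bandwidth set by the distance to the n-th nearest neighbor) can be sketched as follows. This is an illustration of the Helmstetter-style idea, not the USGS implementation; coordinates are planar, in km, and the Gaussian kernel is one common choice:

    ```python
    import numpy as np

    def adaptive_rates(epicenters, grid, n_neighbor=3):
        """Adaptive-kernel seismicity rate: each epicenter gets its own
        Gaussian bandwidth equal to the distance to its n-th nearest
        neighbor, so dense clusters are smoothed tightly and sparse
        regions broadly."""
        epicenters = np.asarray(epicenters, float)
        grid = np.asarray(grid, float)
        # pairwise distances among epicenters; column 0 after sorting is
        # the zero self-distance, so column n_neighbor is the n-th neighbor
        d = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :],
                           axis=2)
        d.sort(axis=1)
        bw = d[:, n_neighbor]
        rates = np.zeros(len(grid))
        for (x, y), h in zip(epicenters, bw):
            r2 = (grid[:, 0] - x) ** 2 + (grid[:, 1] - y) ** 2
            rates += np.exp(-r2 / (2 * h * h)) / (2 * np.pi * h * h)
        return rates

    # A tight cluster keeps a sharp peak; an isolated event is smeared out:
    eps = [(0, 0), (1, 0), (0, 1), (1, 1), (50, 50)]
    r = adaptive_rates(eps, [(0.5, 0.5), (50, 50)])
    print(r[0] > r[1])  # -> True
    ```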

  8. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. 
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  9. Calculation of heat transfer on shuttle type configurations including the effects of variable entropy at boundary layer edge

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1972-01-01

    A relatively simple method is presented for including the effect of variable entropy at the boundary-layer edge in a heat transfer method developed previously. For each inviscid surface streamline, an approximate shock-wave shape is calculated using a modified form of Maslen's method for inviscid axisymmetric flows. The entropy for the streamline at the edge of the boundary layer is determined by equating the mass flux through the shock wave to that inside the boundary layer. Approximations used in this technique allow the heating rates along each inviscid surface streamline to be calculated independently of the other streamlines. The shock standoff distances computed by the present method are found to compare well with those computed by Maslen's asymmetric method. Heating rates are presented for blunted circular and elliptical cones and a typical space shuttle orbiter at angles of attack. Variable entropy effects are found to increase heating rates downstream of the nose significantly above those computed using normal-shock entropy, with turbulent heating rates increasing more than laminar rates. Effects of Reynolds number and angle of attack are also shown.

  10. SU-F-T-273: Using a Diode Array to Explore the Weakness of TPS DoseCalculation Algorithm for VMAT and Sliding Window Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J; Lu, B; Yan, G

    Purpose: To identify the weaknesses of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: The VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc had the same segments and the original monitor units. Arc angles were less than ±30°. Multiple arcs were delivered through consecutive and repetitive gantry operation, clockwise and counterclockwise. A source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To identify dose errors caused in the delivery of VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using the different delivery techniques of static and step-and-shoot. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of the multi-leaf collimator leaves. Doses calculated using the adaptive convolution algorithm were compared with measured ones using a distance-to-agreement of 3 mm and a dose difference of 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial arc delivery of the VMAT fields yielded a passing rate of 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved up to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The doses calculated using the SW technique showed a dose difference of over 7% at the final arrival point of the moving leaves. Conclusion: Error components in the dynamic delivery of modulated beams were distinguished by using the suggested QA method. This partial arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate estimated doses.
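    The 3%/3 mm pass/fail criterion named above is conventionally evaluated with the gamma index. A minimal one-dimensional sketch of that metric follows; it is a generic illustration, not the clinical QA software, and the global normalisation to the reference maximum is one common convention:

    ```python
    import numpy as np

    def gamma_1d(x, dose_ref, dose_eval, dta_mm=3.0, dd_frac=0.03):
        """1-D gamma index (3%/3 mm by default) comparing an evaluated dose
        profile against a reference sampled on the same grid x (mm). Dose
        difference is taken relative to the reference maximum (global)."""
        x = np.asarray(x, float)
        ref = np.asarray(dose_ref, float)
        ev = np.asarray(dose_eval, float)
        dmax = ref.max()
        gam = np.empty_like(ref)
        for i, (xi, di) in enumerate(zip(x, ref)):
            dist2 = ((x - xi) / dta_mm) ** 2
            dd2 = ((ev - di) / (dd_frac * dmax)) ** 2
            gam[i] = np.sqrt((dist2 + dd2).min())   # search all eval points
        return gam

    # A 1 mm shift of a smooth profile passes easily at 3%/3 mm:
    x = np.arange(0.0, 50.0, 1.0)
    ref = np.exp(-((x - 25.0) / 10.0) ** 2)
    ev = np.exp(-((x - 26.0) / 10.0) ** 2)
    rate = (gamma_1d(x, ref, ev) <= 1.0).mean() * 100
    print(rate)  # -> 100.0
    ```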

  11. Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2014-07-01

    The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Haplotype frequency estimation using the discrete Laplace method has previously been validated. In this paper, we investigate how the discrete Laplace method can be used for cluster analysis, thereby further validating the method. A practically important fact is that the calculations can be performed on an ordinary computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes, similar to the results of previous studies. We also compared pairwise distances between geographically separated samples with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of homogeneity in two different ways and calculation of marginal STR distributions. We found that the Y-STR haplotypes from e.g. Finland were relatively homogeneous, as opposed to the relatively heterogeneous Y-STR haplotypes from e.g. Lublin, Eastern Poland and Berlin, Germany. We demonstrated that the observed distributions of alleles at each locus were similar to the expected ones. We also compared pairwise distances between geographically separated samples from Africa with those obtained using the AMOVA method and found good agreement. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
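    The building block of the method above is the discrete Laplace distribution on the integers. A sketch of its probability mass function follows; the paper fits mixtures of such distributions across loci, which is beyond this illustration:

    ```python
    def discrete_laplace_pmf(x, p, mu=0):
        """PMF of the discrete Laplace distribution on the integers,
        P(X = x) = (1 - p) / (1 + p) * p**|x - mu|, with 0 < p < 1.
        The distribution is symmetric about the integer location mu."""
        return (1.0 - p) / (1.0 + p) * p ** abs(x - mu)

    # Symmetric around mu, and the mass sums to ~1 over a wide window:
    total = sum(discrete_laplace_pmf(x, 0.3, mu=15) for x in range(0, 31))
    print(round(total, 6))  # -> 1.0
    ```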

  12. Effect of soldering techniques and gap distance on tensile strength of soldered Ni-Cr alloy joint

    PubMed Central

    Lee, Sang-Yeob

    2010-01-01

    PURPOSE The present study was intended to evaluate the effect of soldering techniques with infrared ray and gas torch under different gap distances (0.3 mm and 0.5 mm) on the tensile strength and surface porosity formation in Ni-Cr base metal alloy. MATERIALS AND METHODS Thirty-five dumbbell-shaped Ni-Cr alloy specimens were prepared and assigned to 5 groups according to the soldering method and the gap distance. For the soldering methods, gas torch (G group) and infrared ray (IR group) were compared, and each group was subdivided by the corresponding gap distance (0.3 mm: G3 and IR3; 0.5 mm: G5 and IR5). Specimens of the experimental groups were sectioned in the middle with a diamond disk and embedded in solder blocks according to the predetermined distance. As a control group, 7 specimens were prepared without sectioning or soldering. After the soldering procedure, a tensile strength test was performed using a universal testing machine at a crosshead speed of 1 mm/min. The proportions of porosity on the fractured surfaces were calculated from the images acquired through the scanning electron microscope. RESULTS Every specimen of G3, G5, IR3 and IR5 fractured in the solder joint area. However, there was no significant difference between the test groups (P > .05). There was a negative correlation between porosity formation and tensile strength in all the specimens in the test groups (P < .05). CONCLUSION There was no significant difference in ultimate tensile strength of joints or porosity formation between the gas-oxygen torch soldering and infrared ray soldering techniques, or between the gap distances of 0.3 mm and 0.5 mm. PMID:21264189

  13. Plasma characteristics in the discharge region of a 20 A emission current hollow cathode

    NASA Astrophysics Data System (ADS)

    Mingming, SUN; Tianping, ZHANG; Xiaodong, WEN; Weilong, GUO; Jiayao, SONG

    2018-02-01

    Numerical calculation and fluid simulation methods were used to obtain the plasma characteristics in the discharge region of the LIPS-300 ion thruster’s 20 A emission current hollow cathode and to verify the structural design of the emitter. The results of the two methods indicated that the highest plasma density and electron temperature, which improved significantly in the orifice region, were located in the discharge region of the hollow cathode. The magnitude of the plasma density was about 10^21 m^-3 in the emitter and orifice regions, as obtained by numerical calculations, but decreased exponentially in the plume region with distance from the orifice exit. Meanwhile, compared to the emitter region, the electron temperature and current improved by about 36% in the orifice region. The hollow cathode performance test results were in good agreement with the numerical calculation results, which proved that the structural design of the emitter and the orifice met the requirements of a 20 A emission current. The numerical calculation method can be used to estimate plasma characteristics in the preliminary design stage of hollow cathodes.

  14. Measuring genetic distances between breeds: use of some distances in various short term evolution models

    PubMed Central

    Laval, Guillaume; SanCristobal, Magali; Chevalet, Claude

    2002-01-01

    Many works demonstrate the benefits of using highly polymorphic markers such as microsatellites in order to measure the genetic diversity between closely related breeds. But it is sometimes difficult to decide which genetic distance should be used. In this paper we review the behaviour of the main distances encountered in the literature in various divergence models. In the first part, we consider that breeds are populations in which the assumption of equilibrium between drift and mutation is verified. In this case some interesting distances can be expressed as a function of divergence time, t, and therefore can be used to construct phylogenies. Distances based on allele size distribution (such as (δμ)² and derived distances), which specifically take into account a mutation model of microsatellites, the Stepwise Mutation Model, exhibit large variance and therefore should not be used to accurately infer the phylogeny of closely related breeds. In the last section, we consider that breeds are small populations and that the divergence times between them are too small to consider that the observed diversity is due to mutations: divergence is mainly due to genetic drift. Expectation and variance of distances were calculated as a function of the Wright-Malécot inbreeding coefficient, F. Computer simulations performed under this divergence model show that the Reynolds distance [57] is the best method for very closely related breeds. PMID:12270106
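    The (δμ)² distance mentioned above is simple to compute: it is the squared difference in mean microsatellite allele size, averaged over loci. A sketch follows, with a made-up two-locus example; the data layout is an assumption for illustration:

    ```python
    def delta_mu_squared(pop_a, pop_b):
        """(delta-mu)^2 genetic distance: squared difference in mean allele
        (repeat) size per locus, averaged over loci. Each population is a
        list of loci; each locus is a list of integer allele sizes."""
        total = 0.0
        for alleles_a, alleles_b in zip(pop_a, pop_b):
            mu_a = sum(alleles_a) / len(alleles_a)
            mu_b = sum(alleles_b) / len(alleles_b)
            total += (mu_a - mu_b) ** 2
        return total / len(pop_a)

    # Two hypothetical populations typed at two microsatellite loci:
    a = [[10, 10, 11, 13], [20, 21, 21, 22]]
    b = [[10, 12, 12, 14], [20, 20, 21, 21]]
    print(delta_mu_squared(a, b))  # -> 0.625
    ```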

  15. Automatic measurement for dimensional changes of woven fabrics based on texture

    NASA Astrophysics Data System (ADS)

    Liu, Jihong; Jiang, Hongxia; Liu, X.; Chai, Zhilei

    2014-01-01

    Dimensional change, or shrinkage, is an important functional attribute of woven fabrics that affects their basic function and price in the market. This paper presents a machine vision system that evaluates the shrinkage of woven fabrics by analyzing the change in fabric construction. The proposed measurement method has three features. (i) Unlike the existing measurement method, no shrinkage markers are stained onto the fabric specimen. (ii) The system can be used on fabric of reduced area. (iii) The system can be installed and used as a laboratory or industrial application system. The image-processing method is divided into four steps: acquiring an image from the sample of the woven fabric; obtaining a gray image and then segmenting the warp and weft from the fabric based on the fast Fourier transform and inverse fast Fourier transform; calculating the distance of the warp or weft sets by the gray projection method; and characterizing the shrinkage of the woven fabric by the average distance, the coefficient of variation of distance, and so on. Experimental results on virtual and physical woven fabrics indicated that the method can obtain detailed shrinkage information for woven fabric. The method was programmed in Matlab, and a graphical user interface was built in Delphi. The program has potential for practical use in the textile industry.
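    The gray projection step can be sketched as follows: project the image onto one axis and estimate the dominant period of the projection, here via the FFT. This is an illustration of the general technique on a synthetic image, not the paper's Matlab pipeline:

    ```python
    import numpy as np

    def yarn_spacing(gray_image, axis=0):
        """Average yarn spacing in pixels by gray projection: sum the image
        along one axis, then take the dominant spatial frequency of the
        resulting profile from its FFT magnitude spectrum."""
        profile = np.asarray(gray_image, float).sum(axis=axis)
        profile -= profile.mean()              # remove the DC offset
        spectrum = np.abs(np.fft.rfft(profile))
        spectrum[0] = 0.0
        k = spectrum.argmax()                  # dominant frequency bin
        return len(profile) / k

    # Synthetic "fabric": vertical yarns every 8 pixels across a 64-px image
    x = np.arange(64)
    img = np.tile(128 + 100 * np.cos(2 * np.pi * x / 8), (64, 1))
    print(yarn_spacing(img))  # -> 8.0
    ```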

  16. Realizing privacy preserving genome-wide association studies.

    PubMed

    Simmons, Sean; Berger, Bonnie

    2016-05-01

    As genomics moves into the clinic, there has been much interest in using this medical data for research. At the same time the use of such data raises many privacy concerns. These circumstances have led to the development of various methods to perform genome-wide association studies (GWAS) on patient records while ensuring privacy. In particular, there has been growing interest in applying differentially private techniques to this challenge. Unfortunately, up until now all methods for finding high scoring SNPs in a differentially private manner have had major drawbacks in terms of either accuracy or computational efficiency. Here we overcome these limitations with a substantially modified version of the neighbor distance method for performing differentially private GWAS, and thus are able to produce a more viable mechanism. Specifically, we use input perturbation and an adaptive boundary method to overcome accuracy issues. We also design and implement a convex analysis based algorithm to calculate the neighbor distance for each SNP in constant time, overcoming the major computational bottleneck in the neighbor distance method. It is our hope that methods such as ours will pave the way for more widespread use of patient data in biomedical research. A Python implementation is available at http://groups.csail.mit.edu/cb/DiffPriv/ (contact: bab@csail.mit.edu). Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  17. Characterization of Diffusion Metric Map Similarity in Data From a Clinical Data Repository Using Histogram Distances

    PubMed Central

    Warner, Graham C.; Helmer, Karl G.

    2018-01-01

    As the sharing of data is mandated by funding agencies and journals, reuse of data has become more prevalent. It becomes imperative, therefore, to develop methods to characterize the similarity of data. While users can group data based on the acquisition parameters stored in the file headers, these give no indication whether a file can be combined with other data without increasing the variance in the data set. Methods have been implemented that characterize the signal-to-noise ratio or identify signal drop-outs in the raw image files, but potential users of data often have access to calculated metric maps and these are more difficult to characterize and compare. Here we describe a histogram-distance-based method applied to diffusion metric maps of fractional anisotropy and mean diffusivity that were generated using data extracted from a repository of clinically-acquired MRI data. We describe the generation of the data set, the pitfalls specific to diffusion MRI data, and the results of the histogram distance analysis. We find that, in general, data from GE scanners are less similar than are data from Siemens scanners. We also find that the distribution of distance metric values is not Gaussian at any selection of the acquisition parameters considered here (field strength, number of gradient directions, b-value, and vendor). PMID:29568257
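
    Histogram-distance comparison of two metric maps can be illustrated simply: bin the voxel values of each map, normalize, and compute a distance between the histograms. The two distances below (L1 and Jensen-Shannon) are illustrative choices, not necessarily those evaluated in the paper, and the bin count is an assumption:

```python
import numpy as np

def histogram_distance(x, y, bins=64, rng=(0.0, 1.0)):
    """Compare two metric maps (e.g. FA values in [0, 1]) via the
    distance between their normalized histograms.

    Returns (L1 distance, Jensen-Shannon divergence in bits).
    """
    p, _ = np.histogram(x, bins=bins, range=rng)
    q, _ = np.histogram(y, bins=bins, range=rng)
    p = p / p.sum()
    q = q / q.sum()
    l1 = np.abs(p - q).sum()                 # 0 (identical) .. 2 (disjoint)
    m = 0.5 * (p + q)
    def kl(a, b):                            # KL divergence on the support of a
        mask = a > 0
        return (a[mask] * np.log2(a[mask] / b[mask])).sum()
    js = 0.5 * kl(p, m) + 0.5 * kl(q, m)     # 0 (identical) .. 1 (disjoint)
    return l1, js

# toy "maps": constant FA-like and MD-like value arrays
fa_like = np.full(100, 0.2)
md_like = np.full(100, 0.8)
l1, js = histogram_distance(fa_like, md_like)
```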

  18. Learning a Mahalanobis Distance-Based Dynamic Time Warping Measure for Multivariate Time Series Classification.

    PubMed

    Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun

    2016-06-01

    Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
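
    The structure of the measure — a Mahalanobis local distance plugged into the standard DTW recursion — can be sketched as follows. This illustrates the distance computation only, not the paper's LogDet-divergence metric learning; with M equal to the identity it reduces to ordinary Euclidean DTW:

```python
import numpy as np

def dtw_mahalanobis(X, Y, M):
    """DTW alignment cost between two multivariate series using the
    Mahalanobis local distance d(x, y) = sqrt((x-y)^T M (x-y)).

    X: (n, d) series, Y: (m, d) series, M: (d, d) PSD matrix.
    """
    n, m = len(X), len(Y)
    diff = X[:, None, :] - Y[None, :, :]                 # (n, m, d) pairwise diffs
    local = np.sqrt(np.einsum('ijd,de,ije->ij', diff, M, diff))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):                            # standard DTW recursion
        for j in range(1, m + 1):
            D[i, j] = local[i - 1, j - 1] + min(D[i - 1, j],
                                                D[i, j - 1],
                                                D[i - 1, j - 1])
    return D[n, m]

# toy 2-D series of different lengths (made-up data)
X = np.array([[0., 0.], [1., 0.], [2., 0.]])
Y = np.array([[0., 0.], [2., 0.]])
cost = dtw_mahalanobis(X, Y, np.eye(2))
```

    Learning M (rather than fixing it to the identity) is what lets the measure weight each variable by its relevance to the class.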

  19. Web page sorting algorithm based on query keyword distance relation

    NASA Astrophysics Data System (ADS)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    To optimize web page ranking, we observe that the positions at which the search keywords occur within a page are related, and we propose clustering the query keywords accordingly. This relationship is converted into a degree of aggregation of the search keywords in the web page. Based on the PageRank algorithm, a clustering-degree factor for the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relation between search keywords. The experimental results show the feasibility and effectiveness of the method.
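
    The idea of folding a keyword-aggregation factor into a PageRank score might be sketched as below. Both formulas are hypothetical illustrations — the abstract does not specify them: aggregation is taken as the inverse mean gap between keyword occurrences, and the final score as a linear blend:

```python
import numpy as np

def keyword_aggregation(positions):
    """Aggregation degree of query keywords in a page: the closer the
    keyword occurrences sit to one another, the higher the score.

    positions: word offsets of query-keyword hits in the page.
    Hypothetical formula: inverse of the mean pairwise gap.
    """
    pos = np.sort(np.asarray(positions, float))
    if len(pos) < 2:
        return 0.0
    return 1.0 / np.diff(pos).mean()

def rescored_pagerank(pr, agg, alpha=0.5):
    """Blend the base PageRank value with the aggregation factor."""
    return (1 - alpha) * pr + alpha * agg

# page with base PageRank 0.2 and tightly clustered keyword hits
score = rescored_pagerank(0.2, keyword_aggregation([10, 12, 14]))
```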

  20. A low-cost method for estimating energy expenditure during soccer refereeing.

    PubMed

    Ardigò, Luca Paolo; Padulo, Johnny; Zuliani, Andrea; Capelli, Carlo

    2015-01-01

    This study aimed to apply a validated bioenergetics model of sprint running to recordings obtained from commercial basic high-sensitivity global positioning system receivers to estimate energy expenditure and physical activity variables during soccer refereeing. We studied five Italian fifth division referees during 20 official matches while carrying the receivers. By applying the model to the recorded speed and acceleration data, we calculated energy consumption during activity, mass-normalised total energy consumption, total distance, metabolically equivalent distance and their ratio over the entire match and the two halves. Main results were as follows: (match) energy consumption = 4729 ± 608 kJ, mass-normalised total energy consumption = 74 ± 8 kJ · kg(-1), total distance = 13,112 ± 1225 m, metabolically equivalent distance = 13,788 ± 1151 m and metabolically equivalent/total distance = 1.05 ± 0.05. By using a very low-cost device, it is possible to estimate the energy expenditure of soccer refereeing. The provided equation predicting mass-normalised total energy consumption from total distance can supply information about the energy demand of soccer refereeing.

  1. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and unknown samples; then the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples was calculated and utilized as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on the local calibration set of each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
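
    The local calibration-set selection step can be sketched in a few lines: compute the Euclidean distance between the unknown sample's NAS and each calibration sample's NAS, and keep the k nearest. The function name and the value of k are illustrative assumptions; the paper then fits a PLS model on the selected subset:

```python
import numpy as np

def local_calibration_set(nas_cal, nas_unknown, k=30):
    """Pick the k calibration samples whose net analyte signal (NAS)
    lies closest, in Euclidean distance, to that of the unknown sample.

    nas_cal: (n_samples, n_vars) NAS of the calibration set.
    nas_unknown: (n_vars,) NAS of one unknown sample.
    Returns the indices of the selected calibration samples.
    """
    d = np.linalg.norm(nas_cal - nas_unknown, axis=1)   # similarity index
    return np.argsort(d)[:k]

# toy NAS vectors: sample 1 is nearest to the unknown, sample 0 next
nas_cal = np.array([[0., 0.], [1., 0.], [5., 5.]])
idx = local_calibration_set(nas_cal, np.array([0.9, 0.]), k=2)
```

    A separate local PLS model would then be trained on `nas_cal[idx]` (and the corresponding reference values) for each unknown sample.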

  2. SU-F-P-44: A Direct Estimate of Peak Skin Dose for Interventional Fluoroscopy Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weir, V; Zhang, J

    Purpose: There is an increasing demand for medical physicists to calculate peak skin dose (PSD) for interventional fluoroscopy procedures. The dose information (Dose-Area-Product and Air Kerma) displayed in the console cannot directly be used for this purpose. Our clinical experience shows that the use of the existing methods may overestimate or underestimate PSD. This study attempts to develop a direct estimate of PSD from the displayed dose metrics. Methods: An anthropomorphic torso phantom was used for dose measurements for a common fluoroscopic procedure. Entrance skin doses were measured with a Piranha solid state point detector placed on the table surface below the torso phantom. An initial “reference dose rate” (RE) measurement was conducted by comparing the displayed dose rate (mGy/min) to the dose rate measured. The distance from table top to focal spot was taken as the reference distance (RD) at the RE. Table height was then adjusted. The displayed air kerma and DAP were recorded and sent to three physicists to estimate PSD. An inverse square correction was applied to correct the displayed air kerma at various table heights. The PSD estimated by the physicists and the PSD by the proposed method were then compared with the measurements. The estimated DAPs were compared to the displayed DAP readings (mGycm2). Results: The difference between the PSD estimated by the proposed method and direct measurements was less than 5%. For the same set of data, the PSD estimated by each of the three physicists differed from the measurements by up to ±52%. The difference between the DAP calculated by the proposed method and the displayed DAP readings in the console is less than 20% at various table heights. Conclusion: PSD may be simply estimated from the displayed air kerma or DAP if the distance between the table top and the tube focal spot, or the x-ray beam area on the table top, is available.
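
    The inverse-square correction at the heart of the proposed estimate can be sketched as follows. The calibration factor `cal` (the measured-to-displayed dose-rate ratio at the reference geometry) and the function signature are our assumptions, not the study's exact formula:

```python
def corrected_skin_dose(displayed_ak, ref_dist_cm, actual_dist_cm, cal=1.0):
    """Inverse-square correction of the displayed air kerma from the
    reference focal-spot-to-table-top distance to the actual one,
    scaled by a measured calibration factor.

    displayed_ak:   console air kerma reading (mGy)
    ref_dist_cm:    reference distance RD at which RE was measured
    actual_dist_cm: actual focal-spot-to-table-top distance
    """
    return cal * displayed_ak * (ref_dist_cm / actual_dist_cm) ** 2

# raising the table from 60 cm to 80 cm reduces the skin-dose estimate
psd = corrected_skin_dose(displayed_ak=100.0, ref_dist_cm=60.0,
                          actual_dist_cm=80.0)
```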

  3. Research of image retrieval technology based on color feature

    NASA Astrophysics Data System (ADS)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of communication and computer technology and improvements in storage technology and digital imaging equipment, more image resources are available to us than ever, so a way to locate the desired image quickly and accurately is needed. The early approach was to search on keywords stored in a database, but this becomes very difficult as the number of images grows. To overcome the limitations of the traditional searching method, content-based image retrieval technology arose and is now an active research subject. Color image retrieval is an important part of it, and color is the most important feature for it. Three key questions on how to make use of the color characteristic are discussed in the paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of the color histogram feature is discussed in detail. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. Its basic idea is to divide the image space according to a certain strategy and then calculate the color histogram of each block as that block's color feature. Users choose the blocks that contain important spatial information and confirm their weights. The system calculates the distances between the corresponding blocks chosen by the users; the remaining blocks are merged into partial overall histograms, whose distances are also calculated. All the distances are then accumulated to give the real distance between two pictures.
The partition-overall histogram combines the advantages of the two methods above: choosing blocks makes the feature carry more spatial information, which improves performance, while the distances between partial overall histograms are invariant to rotation and translation. The HSV color space, which suits the visual characteristics of humans, is used to represent the color of the image. Taking advantage of human color perception, it quantizes the color sectors with unequal intervals to obtain the feature vector. Finally, image similarity is matched with the histogram intersection algorithm on the partition-overall histogram. Users can choose a demonstration image to express the query visually, and can also adjust weights through relevance feedback to obtain the best search result. An image retrieval system based on these approaches is presented. Experimental results show that image retrieval based on the partition-overall histogram keeps spatial distribution information while extracting the color feature efficiently, and it is superior to normal color histograms in precision: the query precision is more than 95%. In addition, efficient block representation lowers the complexity of the images to be searched, increasing search efficiency. The image retrieval algorithm based on the partition-overall histogram proposed in the paper is efficient and effective.
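
    The accumulation of block-wise histogram-intersection distances can be sketched as below. This is a simplified gray-level version under our own assumptions (uniform block weights, fixed grid); the paper uses quantized HSV histograms and user-assigned weights:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity; 1 - value is a distance."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()

def partition_overall_distance(img1, img2, grid=(2, 2), bins=16, weights=None):
    """Distance between two gray images: accumulate weighted
    histogram-intersection distances over a grid of blocks."""
    H, W = img1.shape
    bh, bw = H // grid[0], W // grid[1]
    if weights is None:                       # uniform weights by default
        weights = np.ones(grid[0] * grid[1]) / (grid[0] * grid[1])
    dist, k = 0.0, 0
    for i in range(grid[0]):
        for j in range(grid[1]):
            b1 = img1[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            b2 = img2[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            h1, _ = np.histogram(b1, bins=bins, range=(0, 256))
            h2, _ = np.histogram(b2, bins=bins, range=(0, 256))
            dist += weights[k] * (1.0 - hist_intersection(h1, h2))
            k += 1
    return dist

# identical images are at distance 0; all-black vs all-white is maximal
a = np.zeros((8, 8))
b = np.full((8, 8), 255.0)
```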

  4. Association between neighborhood need and spatial access to food stores and fast food restaurants in neighborhoods of colonias.

    PubMed

    Sharkey, Joseph R; Horel, Scott; Han, Daikwon; Huber, John C

    2009-02-16

    To determine the extent to which neighborhood needs (socioeconomic deprivation and vehicle availability) are associated with two criteria of food environment access: 1) distance to the nearest food store and fast food restaurant and 2) coverage (number) of food stores and fast food restaurants within a specified network distance of neighborhood areas of colonias, using ground-truthed methods. Data included locational points for 315 food stores and 204 fast food restaurants, and neighborhood characteristics from the 2000 U.S. Census for the 197 census block group (CBG) study area. Neighborhood deprivation and vehicle availability were calculated for each CBG. Minimum distance was determined by calculating network distance from the population-weighted center of each CBG to the nearest supercenter, supermarket, grocery, convenience store, dollar store, mass merchandiser, and fast food restaurant. Coverage was determined by calculating the number of each type of food store and fast food restaurant within a network distance of 1, 3, and 5 miles of each population-weighted CBG center. Neighborhood need and access were examined using Spearman ranked correlations, spatial autocorrelation, and multivariate regression models that adjusted for population density. Overall, neighborhoods had best access to convenience stores, fast food restaurants, and dollar stores. After adjusting for population density, residents in neighborhoods with increased deprivation had to travel a significantly greater distance to the nearest supercenter or supermarket, grocery store, mass merchandiser, dollar store, and pharmacy for food items. The results were quite different for association of need with the number of stores within 1 mile. Deprivation was only associated with fast food restaurants; greater deprivation was associated with fewer fast food restaurants within 1 mile. 
CBGs with lower vehicle availability had slightly better access to more supercenters or supermarkets, grocery stores, or fast food restaurants. Increasing deprivation was associated with decreasing numbers of grocery stores, mass merchandisers, dollar stores, and fast food restaurants within 3 miles. It is important to understand not only the distance that people must travel to the nearest store to make a purchase, but also how many shopping opportunities they have in order to compare price, quality, and selection. Future research should examine how spatial access to the food environment influences the utilization of food stores and fast food restaurants, and the strategies used by low-income families to obtain food for the household.

  5. Ion-dipole interactions in concentrated organic electrolytes.

    PubMed

    Chagnes, Alexandre; Nicolis, Stamatios; Carré, Bernard; Willmann, Patrick; Lemordant, Daniel

    2003-06-16

    An algorithm is proposed for calculating the energy of ion-dipole interactions in concentrated organic electrolytes. The ion-dipole interactions increase with increasing salt concentration and must be taken into account when the activation energy for the conductivity is calculated. In this case, the contribution of ion-dipole interactions to the activation energy for this transport process is of the same order of magnitude as the contribution of ion-ion interactions. The ion-dipole interaction energy was calculated for a cell of eight ions, alternately anions and cations, placed on the vertices of an expanded cubic lattice whose parameter is related to the mean interionic distance (pseudolattice theory). The solvent dipoles were introduced randomly into the cell by assuming a randomness compacity of 0.58. The energy of the dipole assembly in the cell was minimized by using a Newton-Raphson numerical method. The dielectric field gradient around ions was taken into account by a distance parameter and a dielectric constant of epsilon = 3 at the surfaces of the ions. Fair agreement between experimental and calculated activation energies has been found for systems composed of gamma-butyrolactone (BL) as solvent and lithium perchlorate (LiClO4), lithium tetrafluoroborate (LiBF4), lithium hexafluorophosphate (LiPF6), lithium hexafluoroarsenate (LiAsF6), and lithium bis(trifluoromethylsulfonyl)imide (LiTFSI) as salts.

  6. Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.

    2017-06-01

    A method for calculating off-axis phase-only holograms of three-dimensional (3D) objects using an accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitudes of the object points on the z-axis in the hologram plane, called principal complex amplitudes (PCAs), are calculated using the Fresnel diffraction formula. The complex amplitudes of off-axis object points at the same depth can be obtained by 2D shifting of the PCAs. In order to improve the calculation speed of the PB-FDA, a convolution operation based on the fast Fourier transform (FFT) is used to calculate the holograms rather than point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in reconstructed images. The optimal recording distance of the PB-FDA is also analyzed to improve the quality of reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by the SLM in optically reconstructed images.

  7. Hot spots based gold nanostar@SiO2@CdSe/ZnS quantum dots complex with strong fluorescence enhancement

    NASA Astrophysics Data System (ADS)

    Shan, Feng; Su, Dan; Li, Wei; Hu, Wei; Zhang, Tong

    2018-02-01

    In this paper, a novel gold nanostar (NS)@SiO2@CdSe/ZnS quantum dots (QDs) complex with plasmon-enhanced fluorescence, synthesized using a step-by-step surface linkage method, is presented. The gold NS was synthesized by the seed growth method. The synthesized gold NS with apex structures has a hot-spot effect due to the strong electric field distributed at its sharp apexes, which leads to a plasmon resonance enhancement. Because the distance between the QDs and the metal nanostructures can be precisely controlled by this method, the relationship between enhancement and distance was revealed. The thickness of the SiO2 shell was also optimized, and an optimum distance of about 21 nm was obtained. The highest fluorescence enhancement, of 4.8-fold, was achieved, accompanied by a minimum fluorescence lifetime of 2.3 ns. This strong enhancement comes from the hot spots distributed at the sharp tips of our constructed nanostructure. Through the finite element method, we calculated the field distribution on the surface of the NS and found that the gold NS with the sharpest apexes exhibited the highest field enhancement, which matches well with our experimental results. This complex shows tremendous potential for applications in liquid-dependent biometric imaging systems.

  8. Detection of resting state functional connectivity using partial correlation analysis: A study using multi-distance and whole-head probe near-infrared spectroscopy.

    PubMed

    Sakakibara, Eisuke; Homae, Fumitaka; Kawasaki, Shingo; Nishimura, Yukika; Takizawa, Ryu; Koike, Shinsuke; Kinoshita, Akihide; Sakurada, Hanako; Yamagishi, Mika; Nishimura, Fumichika; Yoshikawa, Akane; Inai, Aya; Nishioka, Masaki; Eriguchi, Yosuke; Matsuoka, Jun; Satomura, Yoshihiro; Okada, Naohiro; Kakiuchi, Chihiro; Araki, Tsuyoshi; Kan, Chiemi; Umeda, Maki; Shimazu, Akihito; Uga, Minako; Dan, Ippeita; Hashimoto, Hideki; Kawakami, Norito; Kasai, Kiyoto

    2016-11-15

    Multichannel near-infrared spectroscopy (NIRS) is a functional neuroimaging modality that enables easy-to-use and noninvasive measurement of changes in blood oxygenation levels. We developed a clinically-applicable method for estimating resting state functional connectivity (RSFC) with NIRS using a partial correlation analysis to reduce the influence of extraneural components. Using a multi-distance probe arrangement NIRS, we measured resting state brain activity for 8min in 17 healthy participants. Independent component analysis was used to extract shallow and deep signals from the original NIRS data. Pearson's correlation calculated from original signals was significantly higher than that calculated from deep signals, while partial correlation calculated from original signals was comparable to that calculated from deep (cerebral-tissue) signals alone. To further test the validity of our method, we also measured 8min of resting state brain activity using a whole-head NIRS arrangement consisting of 17 cortical regions in 80 healthy participants. Significant RSFC between neighboring, interhemispheric homologous, and some distant ipsilateral brain region pairs was revealed. Additionally, females exhibited higher RSFC between interhemispheric occipital region-pairs, in addition to higher connectivity between some ipsilateral pairs in the left hemisphere, when compared to males. The combined results of the two component experiments indicate that partial correlation analysis is effective in reducing the influence of extracerebral signals, and that NIRS is able to detect well-described resting state networks and sex-related differences in RSFC. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 ( 131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  10. Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery

    NASA Astrophysics Data System (ADS)

    Voormolen, Eduard H. J.; van Stralen, Marijn; Woerdeman, Peter A.; Pluim, Josien P. W.; Noordmans, Herke J.; Regli, Luca; Berkelbach van der Sprenkel, Jan W.; Viergever, Max A.

    2011-03-01

    Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system that continuously calculates the distance to these structures and warns if the surgeon drills too close, will aid in making safe surgical approaches. Contemporary image guidance systems are lacking an automated method to segment the inhomogeneous and complexly curved facial nerve. Therefore, we developed a segmentation method to delineate the intra-temporal facial nerve centerline from clinically available temporal bone CT images semi-automatically. Our method requires the user to provide the start- and end-point of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model based on the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our segmentation method delineates facial nerve centerlines with a maximum error along its whole trajectory of 0.40+/-0.20 mm (mean+/-standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integration of this automated facial nerve delineation with a distance calculating neuronavigation interface results in a system that can adequately warn surgeons during temporal bone drilling, and effectively diminishes risks of iatrogenic facial nerve palsy.

  11. Application of agglomerative clustering for analyzing phylogenetically on bacterium of saliva

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Fitria, I.; Umam, K.

    2017-07-01

    Analyzing populations of Streptococcus bacteria is important since these species can cause dental caries, periodontal disease, halitosis (bad breath) and other problems. This paper discusses the phylogenetic relations between Streptococcus bacteria in saliva using a phylogenetic tree built by agglomerative clustering methods. Starting with Streptococcus DNA sequences obtained from GenBank, feature extraction was performed on the DNA sequences. The extraction result is a matrix, which was normalized using min-max normalization; genetic distances were then calculated using the Manhattan distance. Agglomerative clustering techniques include single linkage, complete linkage and average linkage. In this agglomerative algorithm the number of groups starts at the number of individual species; the most similar species are merged as the similarity decreases, until a single group is formed. The result of the grouping is a phylogenetic tree whose branches join at an established distance level, such that the smaller the distance, the greater the similarity between species. The implementation uses R, an open source program.
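
    The pipeline — pairwise Manhattan distances followed by agglomerative (here, average-linkage) clustering — can be sketched with SciPy (the abstract's implementation is in R). The feature vectors below are made-up illustrations, not real sequence features:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# rows are per-sequence feature vectors, assumed already
# min-max normalized as in the paper (values are illustrative)
features = np.array([[0.00, 0.10],
                     [0.05, 0.12],
                     [0.90, 0.95],
                     [0.88, 1.00]])

d = pdist(features, metric='cityblock')   # pairwise Manhattan distances
tree = linkage(d, method='average')       # average-linkage dendrogram
groups = fcluster(tree, t=2, criterion='maxclust')  # cut into 2 groups
```

    Swapping `method='average'` for `'single'` or `'complete'` gives the other two linkage variants named in the abstract; `tree` itself encodes the phylogenetic dendrogram.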

  12. A revised catalog of CfA galaxy groups in the Virgo/Great Attractor flow field

    NASA Technical Reports Server (NTRS)

    Nolthenius, Richard

    1993-01-01

    A new identification of groups and clusters in the CfA1 Catalog of Huchra, et al. (1983) is presented, using a percolation algorithm to identify density enhancements. The procedure differs from that of the original Geller and Huchra (1983; GH) catalog in several important respects: galaxy distances are calculated from the Virgo-Great Attractor flow model of Faber and Burstein (1988), the adopted distance linkage criterion is only approx. 1/4 as large as in the Geller and Huchra catalog, the sky link relation is taken from Nolthenius and White (1987), correction for interstellar extinction is included, and 'by-hand' adjustments to group memberships are made in the complex regions of Virgo/Coma I/Ursa Major and Coma/A1367 (to allow for varying group velocity dispersions and to trim unphysical 'spider arms'). Since flow model distances are poorly determined in these same regions, available distances from the IR Tully-Fisher, planetary nebula luminosity function, and surface brightness resolution methods are adopted if possible.

  13. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

    The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of fulfilling the non-negativity constraint on P(r). The regularization by increasing the distance discretization length may, in the case of overlapping broad and narrow distributions, be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from literature for doubly spin-labeled DNA and peptide antibiotics.
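
    The Monte Carlo trial idea can be sketched generically: propose random non-negative perturbations of P(r) on a coarse r-grid (the coarse discretization itself acts as the regularizer) and keep trials that reduce the misfit to the data. The kernel here is a generic matrix, not the DEER kernel, and the greedy acceptance rule is our simplification:

```python
import numpy as np

def mc_fit(K, v, n_trials=20000, step=0.05, rng=np.random.default_rng(0)):
    """Monte Carlo search for a non-negative, normalized P solving v = K @ P.

    K: (n_t, n_r) discretized kernel on a coarse r-grid.
    v: measured trace sampled at n_t points.
    """
    n_r = K.shape[1]
    P = np.full(n_r, 1.0 / n_r)                 # uniform starting guess
    best = np.sum((K @ P - v) ** 2)
    for _ in range(n_trials):
        trial = P + step * rng.standard_normal(n_r)
        trial = np.clip(trial, 0.0, None)       # non-negativity constraint
        s = trial.sum()
        if s == 0:
            continue
        trial /= s                              # keep P normalized
        err = np.sum((K @ trial - v) ** 2)
        if err < best:                          # accept improving trials only
            P, best = trial, err
    return P

# demo: try to recover a known distribution from noiseless synthetic data
rng_demo = np.random.default_rng(1)
K = rng_demo.standard_normal((6, 3))
true_P = np.array([0.2, 0.5, 0.3])
P_fit = mc_fit(K, K @ true_P)
```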

  14. Copernican Mathematics: Calculating Periods and Distances of the Planets

    ERIC Educational Resources Information Center

    Rosenkrantz, Kurt J.

    2004-01-01

    The heliocentric, or Sun-centered, model, one of the most important revolutions in scientific thinking, allowed Nicholas Copernicus to calculate the periods, relative distances, and approximate orbital shapes of all the known planets, thereby paving the way for Kepler's laws and Newton's formulation of gravitation. Recreating Copernicus's…
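
    The Copernican calculations referred to here reduce to two short formulas: the sidereal period follows from the observed synodic period via 1/P = 1/E ± 1/S, and an inferior planet's orbital radius follows from its greatest elongation. A sketch, using Venus as the worked example:

```python
import math

E = 365.25  # Earth's sidereal year, in days

def sidereal_period(synodic_days, inferior):
    """Sidereal period P from the synodic period S:
    1/P = 1/E + 1/S for an inferior planet, 1/E - 1/S for a superior one."""
    inv = 1.0 / E + (1.0 / synodic_days if inferior else -1.0 / synodic_days)
    return 1.0 / inv

def inferior_distance_au(max_elongation_deg):
    """Orbital radius of an inferior planet, in AU, from its greatest
    elongation angle: a = sin(theta_max), with Earth's orbit = 1 AU."""
    return math.sin(math.radians(max_elongation_deg))

venus_period = sidereal_period(583.9, inferior=True)  # about 225 days
venus_a = inferior_distance_au(46.0)                  # about 0.72 AU
```

    These are exactly the quantities (periods and relative distances) that the heliocentric model first made computable from naked-eye observations.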

  15. Mean lives of the 5.106 and 5.834 MeV levels of 14N

    NASA Astrophysics Data System (ADS)

    Bhalla, R. K.; Poletti, A. R.

    1982-12-01

    The recoil distance method (RDM) has been used to measure the mean lives of the 5.106 and 5.834 MeV levels of 14N as τ = 6.27±0.10 ps and τ = 11.88±0.24 ps respectively. The results are compared to previous measurements and to shell-model calculations.

  16. Evolution of the orbit of asteroid 4179 Toutatis over 11,550 years.

    NASA Astrophysics Data System (ADS)

    Zausaev, A. F.; Pushkarev, A. N.

    1994-05-01

    The Everhart method is used to study the evolution of the orbit of the asteroid 4179 Toutatis, a member of the Apollo group, over the period 9300 B.C. to 2250 A.D. Minimum asteroid-Earth distances during this evolution are calculated. It is shown that the asteroid presents no danger to the Earth over the interval studied.

  17. Non-LTE model calculations for SN 1987A and the extragalactic distance scale

    NASA Technical Reports Server (NTRS)

    Schmutz, W.; Abbott, D. C.; Russell, R. S.; Hamann, W.-R.; Wessolowski, U.

    1990-01-01

    This paper presents model atmospheres for the first week of SN 1987A, based on the luminosity and density/velocity structure from hydrodynamic models of Woosley (1988). The models account for line blanketing, expansion, sphericity, and departures from LTE in hydrogen and helium and differ from previously published efforts because they represent ab initio calculations, i.e., they contain essentially no free parameters. The formation of the UV spectrum is dominated by the effects of line blanketing. In the absorption troughs, the Balmer line profiles were fit well by these models, but the observed emissions are significantly stronger than predicted, perhaps due to clumping. The generally good agreement between the present synthetic spectra and observations provides independent support for the overall accuracy of the hydrodynamic models of Woosley. The question of the accuracy of the Baade-Wesselink method is addressed in a detailed discussion of its approximations. While the application of the standard method produces a distance within an uncertainty of 20 percent in the case of SN 1987A, systematic errors up to a factor of 2 are possible, particularly if the precursor was a red supergiant.

  18. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair using a threshold value in the YCbCr color model. By correlating this segmented face area with the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through the modified PID-based recursive controller. Also, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system is calculated through a triangulation method. From this calculated vertical distance and the pan and tilt angles, the target's real position in world space is acquired, and from it the height and stride values are finally extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
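
    The triangulation step can be sketched as follows; this assumes a standard pinhole-camera stereo geometry with illustrative numbers, not the authors' exact formulation:

```python
import math

def distance_from_disparity(baseline_m, focal_px, disparity_px):
    """Camera-to-target distance via stereo triangulation."""
    return baseline_m * focal_px / disparity_px

def world_position(distance_m, pan_deg, tilt_deg):
    """Target (x, y, z) in world space from distance and pan/tilt angles."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.sin(pan)
    y = distance_m * math.cos(tilt) * math.cos(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z

# Illustrative values: 12 cm baseline, 800 px focal length, 24 px disparity.
d = distance_from_disparity(0.12, 800.0, 24.0)  # 4.0 m
x, y, z = world_position(d, 30.0, 10.0)
```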

  19. Carbon footprint of patient journeys through primary care: a mixed methods approach

    PubMed Central

    Andrews, Elizabeth; Pearson, David; Kelly, Charlotte; Stroud, Laura; Rivas Perez, Martin

    2013-01-01

    Background The NHS has a target of cutting its carbon dioxide (CO2) emissions by 80% below 1990 levels by 2050. Travel comprises 17% of the NHS carbon footprint. This carbon footprint represents the total CO2 emissions caused directly or indirectly by the NHS. Patient journeys have previously been planned largely without regard to the environmental impact. The potential contribution of ‘avoidable’ journeys in primary care is significant. Aim To investigate the carbon footprint of patients travelling to and from a general practice surgery, the issues involved, and potential solutions for reducing patient travel. Design and setting A mixed methods study in a medium-sized practice in Yorkshire. Method During March 2012, 306 patients completed a travel survey. GIS maps of patients’ travel (modes and distances) were produced. Two focus groups (12 clinical and 13 non-clinical staff) were recorded, transcribed, and analysed using a thematic framework approach. Results The majority (61%) of patient journeys to and from the surgery were made by car or taxi; main reasons cited were ‘convenience’, ‘time saving’, and ‘no alternative’ for accessing the surgery. Using distances calculated via ArcGIS, the annual estimated CO2 equivalent carbon emissions for the practice totalled approximately 63 tonnes. Predominant themes from interviews related to issues with systems for booking appointments and repeat prescriptions; alternative travel modes; delivering health care; and solutions to reducing travel. Conclusion The modes and distances of patient travel can be accurately determined and allow appropriate carbon emission calculations for GP practices. Although challenging, there is scope for identifying potential solutions (for example, modifying administration systems and promoting walking) to reduce ‘avoidable’ journeys and cut carbon emissions while maintaining access to health care. PMID:23998839
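
    The underlying arithmetic of such carbon calculations is simple: journey distance times a per-mode emission factor. A minimal sketch with assumed factors (not the study's figures):

```python
# Assumed illustrative emission factors in kg CO2e per km; a real study
# would use published factors (e.g., national greenhouse-gas reporting).
EMISSION_KG_PER_KM = {"car": 0.17, "bus": 0.10, "walk": 0.0, "bike": 0.0}

def journey_co2e_kg(mode, round_trip_km):
    """CO2-equivalent emissions for one round-trip journey."""
    return EMISSION_KG_PER_KM[mode] * round_trip_km

journeys = [("car", 5.2), ("walk", 1.0), ("bus", 8.0)]
total = sum(journey_co2e_kg(mode, km) for mode, km in journeys)
```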

  20. Energy hyperspace for stacking interaction in AU/AU dinucleotide step: Dispersion-corrected density functional theory study.

    PubMed

    Mukherjee, Sanchita; Kailasam, Senthilkumar; Bansal, Manju; Bhattacharyya, Dhananjay

    2014-01-01

    Double helical structures of DNA and RNA are mostly determined by base pair stacking interactions, which give them their base sequence-directed features, such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fiber diffraction geometries or on optimized structures from ab initio calculations, lacking the variation in geometry needed to comment on the rather unusual large roll values observed in the AU/AU base pair step in crystal structures of RNA double helices. We have generated a stacking energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll and slide, which were chosen via statistical analysis as maximally sequence dependent. Corresponding energy contours were constructed by several quantum chemical methods including dispersion corrections. This analysis established the most suitable methods for stacked base pair systems, despite the limitation that the number of atoms in a base pair step imposes on employing a very high level of theory. All the methods predict a negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric-clash-based rule. Successive base pairs in RNA are always linked by a sugar-phosphate backbone with C3'-endo sugars, and this demands a C1'-C1' distance of about 5.4 Å along the chains. Adding an energy penalty term for deviation of the C1'-C1' distance from the mean value to the recent DFT-D functionals, specifically ωB97X-D, appears to yield a reliable energy contour for the AU/AU step. Such a distance-based penalty improves the energy contours for the other purine-pyrimidine sequences as well. © 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014.

  1. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    PubMed

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers worldwide and the most frequent in women. Early detection of breast cancer offers the possibility of a cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to determine the effects of the k-means clustering algorithm with different computation measures, such as centroid, distance, split method, epoch, attribute, and iteration, and to carefully identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization: for the foggy centroid, the first centroid was calculated from random values, while for the random centroid the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance, and the results achieved using the Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means and indicate that k-means has the potential to classify the BCW dataset.
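
    The three distance measures compared in the study have standard definitions; a sketch (Pearson is expressed here as a distance, 1 - r, so that perfectly correlated profiles sit at distance 0):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def pearson_distance(a, b):
    """1 - Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

a, b = [1.0, 2.0, 4.0], [1.0, 2.0, 3.0]
```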

  2. Prognostics Health Management Model for LED Package Failure Under Contaminated Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Zhang, Hao; Davis, J Lynn

    2015-06-06

    The reliability of LED products involves both luminous flux drop and color shift. Previous research has addressed either luminous maintenance or color shift, because luminous flux degradation usually takes a very long time to observe. In this paper, the impact of a VOC (volatile organic compound) contaminated environment on luminous flux and color stability is examined; as a result, both luminous degradation and color shift were recorded in a short time. Test samples are white, phosphor-converted, high-power LED packages. Absolute radiant flux is measured with an integrating sphere system to calculate the luminous flux. Luminous flux degradation and color shift distance were plotted versus aging time to show the degradation pattern. A prognostics health management (PHM) method based on state variables and a state estimator is proposed in this paper. In this PHM framework, an unscented Kalman filter (UKF) was deployed as the carrier of all states, and a third-order dynamic transfer function was used during the estimation process. Both the luminous flux and the color shift distance were used as the state variable within the same PHM framework to examine the robustness of the method. The predicted remaining useful life is calculated at every measurement point and compared with the tested remaining useful life. The results show that the state estimator can serve as a method for PHM of LED degradation with respect to both luminous flux and color shift distance: the prediction of the remaining useful life of the LED package, made by the state estimator and a data-driven approach, falls within acceptable error bounds (20%) after a short training of the estimator.

  3. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. 
Relative to the MOID method, which is the only one that involves no simplifying assumptions or approximations, the Wetherill-averaged impact probability diverges toward infinity, while the Hill sphere method severely underestimates the probability. We provide a discussion of the reasons for these differences, and we finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for the occurrence of two ridges found at small inclinations and for coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org
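
    The Monte Carlo flavor of these methods can be illustrated with a deliberately crude toy: circular, coplanar orbits and an enlarged capture radius stand in for the full 3-D elliptic-orbit machinery of the paper.

```python
import math
import random

def collision_fraction(r_proj, r_target, capture_radius,
                       trials=100_000, seed=1):
    """Fraction of random phase pairs with separation below the capture radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a1 = rng.uniform(0.0, 2.0 * math.pi)  # projectile orbital phase
        a2 = rng.uniform(0.0, 2.0 * math.pi)  # target orbital phase
        dx = r_proj * math.cos(a1) - r_target * math.cos(a2)
        dy = r_proj * math.sin(a1) - r_target * math.sin(a2)
        if math.hypot(dx, dy) < capture_radius:
            hits += 1
    return hits / trials

# Coincident circular orbits of radius 1 with capture radius 0.05:
# the exact answer is 2*asin(0.025)/pi, roughly 0.0159.
p = collision_fraction(1.0, 1.0, 0.05)
```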

  4. On the orbit calculation of visual binaries with a very short arc: application to the PMS binary system, FW Tau AB

    NASA Astrophysics Data System (ADS)

    Docobo, J. A.; Tamazian, V. S.; Campo, P. P.

    2018-05-01

    In the vast majority of cases in which the available astrometric measurements of a visual binary cover a very short orbital arc, it is practically impossible to calculate a good quality orbit. This is especially important for systems with pre-main-sequence components, where standard mass-spectrum calibrations cannot be applied nor can a dynamical parallax be calculated. We have shown that the analytical method of Docobo allows us to put certain constraints on the most likely orbital solutions, using an available realistic estimate of the global mass of the system. As an example, we studied the interesting PMS binary, FW Tau AB, located in the Taurus-Auriga region, and investigated a range of its possible orbital solutions combined with an assumed distance between 120 and 160 pc. To maintain the total mass of FW Tau AB in a realistic range between 0.2 and 0.6 M_{⊙}, minimal orbital periods should begin at 105, 150, 335, and 2300 yr for distances of 120, 130, 140, and 150 pc, respectively (no plausible orbits were found assuming a distance of 160 pc). An original criterion to establish the upper limit of the orbital period is applied. When the position angle in some astrometric measurements was flipped by 180°, orbits with periods close to 45 yr are also plausible. Three example orbits with periods of 44.6, 180, and 310 yr are presented.

  5. Seamless image stitching by homography refinement and structure deformation using optimal seam pair detection

    NASA Astrophysics Data System (ADS)

    Lee, Daeho; Lee, Seohyung

    2017-11-01

    We propose an image stitching method that can remove ghost effects and realign the structure misalignments that occur in common image stitching methods. To reduce the artifacts caused by different parallaxes, an optimal seam pair is selected by comparing the cross correlations from multiple seams detected by variable cost weights. Along the optimal seam pair, a histogram of oriented gradients is calculated, and feature points for matching are detected. The homography is refined using the matching points, and the remaining misalignment is eliminated using the propagation of deformation vectors calculated from matching points. In multiband blending, the overlapping regions are determined from a distance between the matching points to remove overlapping artifacts. The experimental results show that the proposed method more robustly eliminates misalignments and overlapping artifacts than the existing method that uses single seam detection and gradient features.

  6. Aggregation Number in Water/n-Hexanol Molecular Clusters Formed in Cyclohexane at Different Water/n-Hexanol/Cyclohexane Compositions Calculated by Titration 1H NMR.

    PubMed

    Flores, Mario E; Shibue, Toshimichi; Sugimura, Natsuhiko; Nishide, Hiroyuki; Moreno-Villoslada, Ignacio

    2017-11-09

    Upon titration of n-hexanol/cyclohexane mixtures of different molar compositions with water, water/n-hexanol clusters are formed in cyclohexane. Here, we develop a new method to estimate the water and n-hexanol aggregation numbers in the clusters that combines integration analysis in one-dimensional 1H NMR spectra, diffusion coefficients calculated by diffusion-ordered NMR spectroscopy, and further application of the Stokes-Einstein equation to calculate the hydrodynamic volume of the clusters. Aggregation numbers of 5-15 molecules of n-hexanol per cluster in the absence of water were observed in the whole range of n-hexanol/cyclohexane molar fractions studied. After saturation with water, aggregation numbers of 6-13 n-hexanol and 0.5-5 water molecules per cluster were found. O-H and O-O atom distances related to hydrogen bonds between donor/acceptor molecules were theoretically calculated using density functional theory. The results show that at low n-hexanol molar fractions, where a robust hydrogen-bond network is held between n-hexanol molecules, addition of water makes the intermolecular O-O atom distance shorter, reinforcing molecular association in the clusters, whereas at high n-hexanol molar fractions, where dipole-dipole interactions dominate, addition of water makes the intermolecular O-O atom distance longer, weakening the cluster structure. This correlates with experimental NMR results, which show an increase in the size and aggregation number of the clusters upon addition of water at low n-hexanol molar fractions, and a decrease of these magnitudes at high n-hexanol molar fractions. In addition, water produces an increase in the proton exchange rate between donor/acceptor molecules at all n-hexanol molar fractions.
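
    The Stokes-Einstein step described above is a one-line formula; a sketch with illustrative values (not the paper's data):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_m(diff_coeff, temp_k, viscosity_pa_s):
    """Stokes-Einstein: r_H = k_B * T / (6 * pi * eta * D)."""
    return K_B * temp_k / (6.0 * math.pi * viscosity_pa_s * diff_coeff)

def hydrodynamic_volume_m3(radius_m):
    """Volume of a sphere with the hydrodynamic radius."""
    return 4.0 / 3.0 * math.pi * radius_m ** 3

# Illustrative inputs: a DOSY-type diffusion coefficient of 5e-10 m^2/s
# and an assumed solvent viscosity of about 0.9 mPa*s at 298 K.
r = hydrodynamic_radius_m(5.0e-10, 298.15, 0.9e-3)
v = hydrodynamic_volume_m3(r)
```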

  7. Inferring species trees from incongruent multi-copy gene trees using the Robinson-Foulds distance

    PubMed Central

    2013-01-01

    Background Constructing species trees from multi-copy gene trees remains a challenging problem in phylogenetics. One difficulty is that the underlying genes can be incongruent due to evolutionary processes such as gene duplication and loss, deep coalescence, or lateral gene transfer. Gene tree estimation errors may further exacerbate the difficulties of species tree estimation. Results We present a new approach for inferring species trees from incongruent multi-copy gene trees that is based on a generalization of the Robinson-Foulds (RF) distance measure to multi-labeled trees (mul-trees). We prove that it is NP-hard to compute the RF distance between two mul-trees; however, it is easy to calculate this distance between a mul-tree and a singly-labeled species tree. Motivated by this, we formulate the RF problem for mul-trees (MulRF) as follows: Given a collection of multi-copy gene trees, find a singly-labeled species tree that minimizes the total RF distance from the input mul-trees. We develop and implement a fast SPR-based heuristic algorithm for the NP-hard MulRF problem. We compare the performance of the MulRF method (available at http://genome.cs.iastate.edu/CBL/MulRF/) with several gene tree parsimony approaches using gene tree simulations that incorporate gene tree error, gene duplications and losses, and/or lateral transfer. The MulRF method produces more accurate species trees than gene tree parsimony approaches. We also demonstrate that the MulRF method infers in minutes a credible plant species tree from a collection of nearly 2,000 gene trees. Conclusions Our new phylogenetic inference method, based on a generalized RF distance, makes it possible to quickly estimate species trees from large genomic data sets. 
Since the MulRF method, unlike gene tree parsimony, is based on a generic tree distance measure, it is appealing for analyses of genomic data sets, in which many processes such as deep coalescence, recombination, gene duplication and losses as well as phylogenetic error may contribute to gene tree discord. In experiments, the MulRF method estimated species trees accurately and quickly, demonstrating MulRF as an efficient alternative approach for phylogenetic inference from large-scale genomic data sets. PMID:24180377
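
    For orientation, the rooted, singly-labeled special case that the paper generalizes is easy to state: the RF distance is the size of the symmetric difference of the two trees' clade sets. A minimal sketch (trees as nested tuples; names illustrative):

```python
def clades(tree):
    """Set of non-trivial clades of a rooted tree given as nested tuples."""
    out = set()
    def walk(node):
        if isinstance(node, str):          # leaf
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        out.add(leaves)
        return leaves
    walk(tree)
    return out

def rf_distance(t1, t2):
    """Robinson-Foulds distance: symmetric difference of clade sets."""
    return len(clades(t1) ^ clades(t2))

t1 = ((("a", "b"), "c"), ("d", "e"))
t2 = ((("a", "c"), "b"), ("d", "e"))
```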

  8. Non-Born-Oppenheimer calculations of the pure vibrational spectrum of HeH+.

    PubMed

    Pavanello, Michele; Bubin, Sergiy; Molski, Marcin; Adamowicz, Ludwik

    2005-09-08

    Very accurate calculations of the pure vibrational spectrum of the HeH(+) ion are reported. The method used does not assume the Born-Oppenheimer approximation, and the motions of both the electrons and the nuclei are treated on an equal footing. In such an approach the vibrational motion cannot be decoupled from the motion of the electrons, and thus the pure vibrational states are calculated as states of the system with zero total angular momentum. The wave functions of the states are expanded in terms of explicitly correlated Gaussian basis functions multiplied by even powers of the internuclear distance. The calculations yielded twelve bound states and the corresponding eleven transition energies, which are compared with the pure vibrational transition energies extracted from the experimental rovibrational spectrum.

  9. A method to calculate the gamma ray detection efficiency of a cylindrical NaI (Tl) crystal

    NASA Astrophysics Data System (ADS)

    Ahmadi, S.; Ashrafi, S.; Yazdansetad, F.

    2018-05-01

    Given the wide range of applications of the NaI(Tl) detector in the industrial and medical sectors, computation of its detection efficiency at different distances from a radioactive source, especially for calibration purposes, is a recurring subject of radiation detection studies. In this work, a cylindrical NaI(Tl) scintillator, 2 in. in both radius and height, was used, and by changing the radial, axial, and diagonal positions of an isotropic 137Cs point source relative to the detector, the solid angles and the interaction probabilities of gamma photons with the detector's sensitive area were calculated. The calculations give the geometric and intrinsic efficiencies as functions of the detector's dimensions and the position of the source. The calculation model is in good agreement with experiment and with MCNPX simulation.
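
    For the simplest configuration in such calculations, an on-axis point source, the geometric efficiency follows directly from the solid angle subtended by the detector face; a sketch (the radial and diagonal source positions treated in the paper are omitted):

```python
import math

def solid_angle_on_axis(radius_cm, distance_cm):
    """Solid angle (sr) subtended by a disk of given radius at axial distance."""
    return 2.0 * math.pi * (1.0 - distance_cm / math.hypot(distance_cm, radius_cm))

def geometric_efficiency(radius_cm, distance_cm):
    """Fraction of isotropically emitted photons that hit the detector face."""
    return solid_angle_on_axis(radius_cm, distance_cm) / (4.0 * math.pi)

# 2-in-radius (5.08 cm) crystal face, point source 10 cm away on the axis.
eff = geometric_efficiency(5.08, 10.0)
```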

  10. Investigation on relationship between epicentral distance and growth curve of initial P-wave propagating in local heterogeneous media for earthquake early warning system

    NASA Astrophysics Data System (ADS)

    Okamoto, Kyosuke; Tsuno, Seiji

    2015-10-01

    In the earthquake early warning (EEW) system, the epicenter location and magnitude of earthquakes are estimated using the amplitude growth rate of initial P-waves. It has been empirically observed that the growth rate becomes smaller as the epicentral distance increases, regardless of the magnitude of the earthquake, so the epicentral distance can be estimated from the growth rate using this empirical relationship. However, the growth rates calculated from different earthquakes at the same epicentral distance can differ considerably; sometimes the growth rates of earthquakes with the same epicentral distance vary by a factor of 10^4. Qualitatively, this gap in the growth rates has been attributed to differences in the local heterogeneities through which the P-waves propagate. In this study, we demonstrate theoretically how local heterogeneities in the subsurface disturb the relationship between the growth rate and the epicentral distance. First, we calculate seismic scattered waves in a heterogeneous medium, considering first-order PP, PS, SP, and SS scattering. The correlation distance of the heterogeneities and the fractional fluctuation of the elastic parameters control the heterogeneous conditions for the calculation. From the synthesized waves, the growth rate of the initial P-wave is obtained. As a result, we find that a parameter controlling the heterogeneities (in this study, the correlation distance) plays a key role in the magnitude of the fluctuation of the growth rate. We then calculate the regional correlation distances in Japan that can account for the fluctuation of the growth rates of real earthquakes from 1997 to 2011 observed by K-NET and KiK-net. The resulting spatial distribution of the correlation distance shows locality, revealing that the growth rates fluctuate accordingly.
When this local fluctuation is taken into account, the accuracy of the estimation of epicentral distances from initial P-waves can improve, which will in turn improve the accuracy of the EEW system.

  11. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Ångstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  12. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings.

    PubMed

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas; Neugebauer, Johannes

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Ångstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  13. Lateral migration of a microdroplet under optical forces in a uniform flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Hyunjun; Chang, Cheong Bong; Jung, Jin Ho

    2014-12-15

    The behavior of a microdroplet in a uniform flow and subjected to a vertical optical force applied by a loosely focused Gaussian laser beam was studied numerically. The lattice Boltzmann method was applied to obtain the two-phase flow field, and the dynamic ray tracing method was adopted to calculate the optical force. The optical forces acting on the spherical droplets agreed well with the analytical values, and the numerically predicted droplet migration distances agreed well with the experimentally obtained values. Simulations over the various flow and optical parameters showed that the droplet migration distance nondimensionalized by the droplet radius is proportional to the S number (z_d/r_p = 0.377S), which is the ratio of the optical force to the viscous drag. The effect of the surface tension was also examined; the results indicated that the surface tension influences the droplet migration distance to a lesser degree than the flow and optical parameters. The results of the present work hold for refractive indices of the mean fluid and the droplet of 1.33 and 1.59, respectively.

  14. Elastic-wave-mode separation in TTI media with inverse-distance weighted interpolation involving position shading

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu

    2017-10-01

    The elastic-wave reverse-time migration of inhomogeneous anisotropic media is a current focus of research. To ensure the accuracy of the migration, it is necessary to separate the wave mode into P-wave and S-wave before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational complexity, wave-mode separation in the mixed domain can be realized on the basis of a reference model in the wave-number domain, but conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a reference-model selection method based on a random-points scheme. The method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference points, so the interpolation accounts for the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method separates the wave modes more accurately using fewer reference models and has better practical value.
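
    Plain inverse-distance weighting, the baseline that the paper's position-shading coefficient K modifies, can be sketched as follows (the K term itself is omitted; names are illustrative):

```python
def idw(query, points, values, power=2.0):
    """Inverse-distance weighted interpolation at `query` from 2-D samples."""
    weighted_sum, weight_total = 0.0, 0.0
    for (px, py), v in zip(points, values):
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance^power
        weighted_sum += w * v
        weight_total += w
    return weighted_sum / weight_total

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 2.0, 3.0]
result = idw((0.5, 0.5), pts, vals)  # equidistant from all three -> 2.0
```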

  15. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches, and they can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values, and the values of the decision matrix and the target-based attributes may be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists in MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method to practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computation to reduce the degeneration of uncertain data: we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference-degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings are compared with the outcomes of other target-based models in the literature.
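
    The abstract does not give the authors' own interval-distance formula, so the sketch below uses one common Euclidean-type distance between intervals together with a standard degree-of-preference formula for ranking interval numbers; both are illustrative stand-ins, not the paper's exact definitions:

```python
import math

def interval_distance(a, b):
    """One common Euclidean-type distance between intervals
    a = (a_lo, a_hi) and b = (b_lo, b_hi); illustrative only."""
    return math.sqrt(((a[0] - b[0])**2 + (a[1] - b[1])**2) / 2.0)

def preference_degree(a, b):
    """Standard degree of preference P(a > b) for interval numbers,
    used to find the maximum, minimum, and ranking of intervals."""
    wa, wb = a[1] - a[0], b[1] - b[0]
    if wa + wb == 0:                       # both degenerate (crisp numbers)
        return 1.0 if a[0] > b[0] else (0.5 if a[0] == b[0] else 0.0)
    return (max(0.0, a[1] - b[0]) - max(0.0, a[0] - b[1])) / (wa + wb)
```

    For example, the interval (2, 4) is preferred over (1, 3) with degree 0.75, while a fully dominated interval gets degree 0.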

  16. Ambulatory estimation of mean step length during unconstrained walking by means of COG accelerometry.

    PubMed

    González, R C; Alvarez, D; López, A M; Alvarez, J C

    2009-12-01

    It has been reported that spatio-temporal gait parameters can be estimated using an accelerometer to calculate the vertical displacement of the body's centre of gravity. This method has the potential to produce realistic ambulatory estimates of those parameters during unconstrained walking. In this work, we evaluate the crude estimates of mean step length so obtained, for their possible application in an ambulatory walking-distance measurement device. Two methods were tested with a set of volunteers in 20 m excursions. Experimental results show that estimates of walking distance can be obtained with sufficient accuracy and precision for most practical applications (errors of 3.66 +/- 6.24% and 0.96 +/- 5.55%), the main difficulty being inter-individual variability (largest deviations of 19.70% and 15.09% for the two estimators). The results also indicate that an inverted pendulum model for the displacement during the single-stance phase, together with a constant displacement per step during double stance, constitutes a valid model of the travelled distance with no need for further adjustment. It allows us to explain most of the erroneous distance estimates in different subjects as being caused by fundamental limitations of the simple inverted-pendulum approach.
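
    The inverted-pendulum model mentioned above has a well-known closed form: with leg length l and vertical COG excursion h per step, the single-stance displacement is 2*sqrt(2*l*h - h^2). A minimal sketch, assuming this standard form (the paper's double-stance constant is not given in the abstract, so it is left as a parameter):

```python
import math

def step_length(h, leg_length, double_stance=0.0):
    """Inverted-pendulum estimate of step length: during single stance
    the COG travels an arc of radius leg_length with vertical excursion
    h, giving 2*sqrt(2*l*h - h^2); a constant term models the
    displacement accumulated during double stance."""
    return 2.0 * math.sqrt(2.0 * leg_length * h - h * h) + double_stance
```

    For a 0.9 m leg and a 2 cm vertical excursion this yields roughly 0.38 m per step; walking distance is then the sum of the per-step estimates.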

  17. Assessing the acoustical climate of underground stations.

    PubMed

    Nowicka, Elzbieta

    2007-01-01

    Designing a proper acoustical environment, indispensable for speech recognition, is difficult in long enclosures. Although there is some literature on the acoustical conditions in underground stations, there is still little information about methods for estimating the correct reverberation conditions. This paper discusses the assessment of the reverberation conditions of underground stations. A comparison of reverberation-time measurements in Warsaw's underground stations with calculated data reveals divergences between measured and calculated early decay time values, especially for long source-receiver distances. Rapid speech transmission index values for the measured stations are also presented.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benites, J. (Graduate student, CBAP, Universidad Autonoma de Nayarit, Xalisco, Nayarit, Mexico); Vega-Carrillo, H. R.

    Neutron spectra and the ambient dose equivalent were calculated inside the bunker of a 15 MV Varian CLINAC iX linac. Calculations were carried out using Monte Carlo methods. Neutron spectra in the vicinity of the isocentre show the presence of evaporation and knock-on neutrons produced by the source term, while epithermal and thermal neutrons remain constant regardless of the distance to the isocentre, owing to room return. The neutron spectrum becomes softer as the detector moves along the maze. The ambient dose equivalent decreases along the maze but does not follow the 1/r^2 rule, due to the changes in the neutron spectra.

  19. Formation of 2D nanoparticles with block structure in simultaneous electric explosion of conductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kryzhevich, Dmitrij S., E-mail: kryzhev@ispms.ru, E-mail: kost@ispms.ru; Zolnikov, Konstantin P., E-mail: kryzhev@ispms.ru, E-mail: kost@ispms.ru; Abdrashitov, Andrei V.

    2014-11-14

    A molecular dynamics simulation of nanoparticle formation in simultaneous electric explosion of conductors is performed. Interatomic interaction is described using potentials calculated in the framework of the embedded atom method. High-rate heating results in failure of the conductors with the formation of nanoparticles. The influence of the heating rate, temperature distribution over the specimen cross-section and the distance between simultaneously exploded conductors on the structure of formed nanoparticles is studied. The calculation results show that the electric explosion of conductors allows the formation of nanoparticles with block structure.

  20. The effect of thermal neutron field slagging caused by cylindrical BF3 counters in diffusion media

    NASA Technical Reports Server (NTRS)

    Gorshkov, G. V.; Tsvetkov, O. S.; Yakovlev, R. M.

    1975-01-01

    Computations are carried out in the transport approximation (first-collision method) for the attenuation of the thermal-neutron field by counters of the CHM-8 and CHMO-5 type. The distortion of the thermal-neutron field near the counters is also obtained in air (shadow effect) and in various moderating media (water, paraffin, plexiglas), for which the calculations are carried out on the basis of diffusion theory. To verify the calculations, the distribution of the thermal-neutron density at various distances from the counter in water is measured.

  1. Linear ground-water flow, flood-wave response program for programmable calculators

    USGS Publications Warehouse

    Kernodle, John Michael

    1978-01-01

    Two programs are documented which solve a discretized analytical equation derived to determine head changes at a point in a one-dimensional ground-water flow system. The programs, written for programmable calculators, are in widely divergent but commonly encountered languages and serve to illustrate the adaptability of the linear model to situations where access to true computers is not possible or economical. The analytical method assumes a semi-infinite aquifer which is uniform in thickness and hydrologic characteristics, bounded on one side by an impermeable barrier and on the other, parallel side by a fully penetrating stream in complete hydraulic connection with the aquifer. Ground-water heads may be calculated for points along a line which is perpendicular to the impermeable barrier and the fully penetrating stream. Head changes at the observation point are dependent on (1) the distance between that point and the impermeable barrier, (2) the distance between the line of stress (the stream) and the impermeable barrier, (3) aquifer diffusivity, (4) time, and (5) head changes along the line of stress. The primary application of the programs is to determine aquifer diffusivity by the flood-wave response technique. (Woodard-USGS)

  2. Gypsum under pressure: A first-principles study

    NASA Astrophysics Data System (ADS)

    Giacomazzi, Luigi; Scandolo, Sandro

    2010-02-01

    We investigate by means of first-principles methods the structural response of gypsum (CaSO4·2H2O) to pressures within and above the stability range of gypsum-I (P ≤ 4 GPa). Structural and vibrational properties calculated for gypsum-I are in excellent agreement with experimental data. Compression within gypsum-I takes place predominantly through a reduction in the volume of the CaO8 polyhedra and through a distortion of the hydrogen bonds. The distance between CaSO4 layers becomes increasingly incompressible, indicating a mechanical limit to the packing of water molecules between the layers. We find that a structure with collapsed interlayer distances becomes more stable than gypsum-I above about 5 GPa. The collapse is concomitant with a rearrangement of the hydrogen-bond network of the water molecules. Comparison of the vibrational spectra calculated for this structure with experimental data taken above 5 GPa supports the validity of our model for the high-pressure phase of gypsum.

  3. Calculation of exchange interaction for modified Gaussian coupled quantum dots

    NASA Astrophysics Data System (ADS)

    Khordad, R.

    2017-08-01

    A system of two laterally coupled quantum dots with a modified Gaussian potential has been considered. Each quantum dot contains one electron and is subject to electric and magnetic fields. The quantum dots have been treated as hydrogen-like atoms, and the physical picture has been translated into the Heisenberg spin Hamiltonian. The Schrödinger equation has been solved numerically using the finite element method. The exchange energy factor has been calculated as a function of electric field, magnetic field, and the separation distance d between the centers of the dots. According to the results, there is a transition from antiferromagnetic to ferromagnetic coupling at constant electric field, and a transition from ferromagnetic to antiferromagnetic coupling at constant magnetic field (B > 1 T). With decreasing distance between the centers of the dots and increasing magnetic field, the transition occurs from antiferromagnetic to ferromagnetic. The exchange energy factor can thus be switched without canceling the interactions of the electric and magnetic fields with the system.

  4. Finding the average speed of a light-emitting toy car with a smartphone light sensor

    NASA Astrophysics Data System (ADS)

    Kapucu, Serkan

    2017-07-01

    This study aims to demonstrate how the average speed of a light-emitting toy car may be determined using a smartphone’s light sensor. The freely available Android smartphone application, ‘AndroSensor’, was used for the experiment. The classroom experiment combines complementary physics knowledge of optics and kinematics to find the average speed of a moving object. The speed of the toy car is found by determining the distance between the light-emitting toy car and the smartphone, and the time taken to travel these distances. To ensure that the average speed of the toy car calculated with the help of the AndroSensor was correct, the average speed was also calculated by analyzing video-recordings of the toy car. The resulting speeds found with these different methods were in good agreement with each other. Hence, it can be concluded that reliable measurements of the average speed of light-emitting objects can be determined with the help of the light sensor of an Android smartphone.

  5. Dynamo magnetic field modes in thin astrophysical disks - An adiabatic computational approximation

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1991-01-01

    An adiabatic approximation is applied to the calculation of turbulent MHD dynamo magnetic fields in thin disks. The adiabatic method is employed to investigate conditions under which magnetic fields generated by disk dynamos permeate the entire disk or are localized to restricted regions of a disk. Two specific cases of Keplerian disks are considered. In the first, magnetic field diffusion is assumed to be dominated by turbulent mixing leading to a dynamo number independent of distance from the center of the disk. In the second, the dynamo number is allowed to vary with distance from the disk's center. Localization of dynamo magnetic field structures is found to be a general feature of disk dynamos, except in the special case of stationary modes in dynamos with constant dynamo number. The implications for the dynamical behavior of dynamo magnetized accretion disks are discussed and the results of these exploratory calculations are examined in the context of the protosolar nebula and accretion disks around compact objects.

  6. Evaluating the distance between the femoral tunnel centers in anatomic double-bundle anterior cruciate ligament reconstruction using a computer simulation

    PubMed Central

    Tashiro, Yasutaka; Okazaki, Ken; Iwamoto, Yukihide

    2015-01-01

    Purpose We aimed to clarify the distance between the anteromedial (AM) bundle and posterolateral (PL) bundle tunnel-aperture centers by simulating the anatomical femoral tunnel placement during double-bundle anterior cruciate ligament reconstruction using 3-D computer-aided design models of the knee, in order to discuss the risk of tunnel overlap. Relationships between the AM to PL center distance, body height, and sex difference were also analyzed. Patients and methods The positions of the AM and PL tunnel centers were defined based on previous studies using the quadrant method, and were superimposed anatomically onto the 3-D computer-aided design knee models from 68 intact femurs. The distance between the tunnel centers was measured using the 3-D DICOM software package. The correlation between the AM–PL distance and the subject’s body height was assessed, and a cutoff height value for a higher risk of overlap of the AM and PL tunnel apertures was identified. Results The distance between the AM and PL centers was 10.2±0.6 mm in males and 9.4±0.5 mm in females (P<0.01). The AM–PL center distance demonstrated good correlation with body height in both males (r=0.66, P<0.01) and females (r=0.63, P<0.01). When 9 mm was defined as the critical distance between the tunnel centers to preserve a 2 mm bony bridge between the two tunnels, the cutoff value was calculated to be a height of 160 cm in males and 155 cm in females. Conclusion When AM and PL tunnels were placed anatomically in simulated double-bundle anterior cruciate ligament reconstruction, the distance between the two tunnel centers showed a strong positive correlation with body height. In cases with relatively short stature, the AM and PL tunnel apertures are considered to be at a higher risk of overlap when surgeons choose the double-bundle technique. PMID:26170727

  7. The Elimination of Transfer Distances Is an Important Part of Hospital Design.

    PubMed

    Karvonen, Sauli; Nordback, Isto; Elo, Jussi; Havulinna, Jouni; Laine, Heikki-Jussi

    2017-04-01

    The objective of the present study was to describe how a specific patient flow analysis with from-to charts can be used in hospital design and layout planning. As part of a large renewal project at a university hospital, a detailed patient flow analysis was applied to planning the musculoskeletal surgery unit (orthopedics and traumatology, hand surgery, and plastic surgery). First, the main activities of the unit were determined. Next, the routes of all patients treated over the course of 1 year were studied, and their physical movements in the current hospital were calculated. An ideal layout of the new hospital was then generated to minimize transfer distances by placing the main activities close to each other, according to the patient flow analysis. The actual architectural design was based on the ideal layout plan. Finally, we compared the current transfer distances to the distances patients will move in the new hospital. The method enabled us to estimate an approximate 50% reduction in transfer distances for inpatients (from 3,100 km/year to 1,600 km/year) and a 30% reduction for outpatients (from 2,100 km/year to 1,400 km/year). Patient transfers are non-value-added activities. This study demonstrates that a detailed patient flow analysis with from-to charts can substantially shorten transfer distances, thereby minimizing extraneous patient and personnel movements. This reduction supports productivity improvement, cross-professional teamwork, and patient safety by placing all patient-flow activities close to each other. Thus, this method is a valuable additional tool in hospital design.
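
    The from-to-chart calculation described above reduces to a weighted sum: each cell multiplies the annual transfers between two units by the distance between them, and candidate layouts are compared on the total. A minimal sketch; the unit names and figures below are hypothetical, not the study's data:

```python
def total_transfer_distance(flows, distances):
    """Total annual transfer distance from a from-to chart.
    flows[a][b]     -- patient transfers per year from unit a to unit b
    distances[a][b] -- one-way distance between the units (km)"""
    return sum(flows[a][b] * distances[a][b]
               for a in flows for b in flows[a])

# hypothetical example: the same patient flows under two layouts
flows = {"ward": {"or": 5000, "imaging": 8000}}
layout_old = {"ward": {"or": 0.30, "imaging": 0.25}}
layout_new = {"ward": {"or": 0.12, "imaging": 0.10}}
```

    Here the old layout costs 3,500 km/year of transfers and the new one 1,400 km/year, so the layouts can be ranked by a single figure.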

  8. Carotid-Femoral Pulse Wave Velocity: Impact of Different Arterial Path Length Measurements

    PubMed Central

    Sugawara, Jun; Hayashi, Koichiro; Yokoi, Takashi; Tanaka, Hirofumi

    2009-01-01

    Background Carotid-femoral pulse wave velocity (PWV) is the most established index of arterial stiffness. Yet there is no consensus on the methodology in regard to the arterial path length measurements conducted on the body surface. Currently, it is not known to what extent the differences in the arterial path length measurements affect absolute PWV values. Methods Two hundred fifty apparently healthy adults (127 men and 123 women, 19-79 years) were studied. Carotid-femoral PWV was calculated using (1) the straight distance between carotid and femoral sites (PWVcar–fem), (2) the straight distance between suprasternal notch and femoral site minus carotid arterial length (PWV(ssn–fem)-(ssn–car)), (3) the straight distance between carotid and femoral sites minus carotid arterial length (PWV(car–fem)-(ssn–car)), and (4) the combined distance from carotid site to the umbilicus and from the umbilicus to femoral site minus carotid arterial length (PWV(ssn–umb–fem)-(ssn–car)). Results All the calculated PWV were significantly correlated with each other (r=0.966-0.995). PWV accounting for carotid arterial length were 16-31% lower than PWVcar–fem. PWVcar–fem value of 12 m/sec corresponded to 8.3 m/sec for PWV(ssn–fem)-(ssn–car), 10.0 m/sec for PWV(car–fem)-(ssn–car), and 8.9 m/sec for PWV(ssn–umb–fem)-(ssn–car). Conclusion Different body surface measurements used to estimate arterial path length would produce substantial variations in absolute PWV values. PMID:20396400
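
    All four PWV variants share the same definition, path length divided by pulse transit time; they differ only in how the path length is assembled from body-surface segments. A sketch of the four path-length constructions; the measurement values and transit time below are hypothetical, chosen only for illustration:

```python
def pwv(path_length_m, transit_time_s):
    """Pulse wave velocity: arterial path length over pulse transit time."""
    return path_length_m / transit_time_s

# hypothetical body-surface measurements (m) and transit time (s)
car_fem, ssn_fem, ssn_car = 0.60, 0.55, 0.10
ssn_umb, umb_fem, t = 0.35, 0.30, 0.05

variants = {
    "car-fem":                 pwv(car_fem, t),
    "(ssn-fem)-(ssn-car)":     pwv(ssn_fem - ssn_car, t),
    "(car-fem)-(ssn-car)":     pwv(car_fem - ssn_car, t),
    "(ssn-umb-fem)-(ssn-car)": pwv(ssn_umb + umb_fem - ssn_car, t),
}
```

    With these illustrative numbers, the plain carotid-femoral distance gives 12 m/s while the variants that subtract the carotid arterial length give lower values, mirroring the systematic offsets the study reports.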

  9. The application of vector concepts on two skew lines

    NASA Astrophysics Data System (ADS)

    Alghadari, F.; Turmudi; Herman, T.

    2018-01-01

    The purpose of this study is to show how vector concepts can be applied to two skew lines in three-dimensional (3D) coordinates, and how this application can be used. Many mathematical concepts are functionally related to one another, but the relation between vectors and 3D geometry has not been applied in classroom learning. Studies show that female students have more difficulty learning 3D geometry than male students because of differences in personal spatial intelligence; relating vector concepts to 3D geometry can help balance the learning achievement and mathematical ability of male and female students. Distances on a cube, cuboid, or pyramid can be drawn in rectangular coordinates, and two points on the lines define a vector. Two skew lines have a shortest distance and an angle between them. To calculate the shortest distance, two vectors representing the lines are first created using the position-vector concept; their cross product gives a normal vector; a third vector is then formed from a pair of points, one on each line, and the shortest distance is the scalar (orthogonal) projection of this vector onto the normal vector. The angle is obtained from the dot product of the two direction vectors followed by the inverse cosine. The results can be applied in mathematics learning and in the orthographic projection method.
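
    The procedure described above, cross product for the common normal, scalar projection for the shortest distance, dot product and inverse cosine for the angle, can be sketched directly (a minimal sketch; the function and variable names are illustrative):

```python
import numpy as np

def skew_lines_distance_angle(p1, d1, p2, d2):
    """Shortest distance and angle between two skew lines, each given
    by a point p and a direction vector d:
    1. n = d1 x d2 is normal to both lines;
    2. distance = |(p2 - p1) . n| / |n|  (scalar orthogonal projection);
    3. angle (degrees) from the dot product of the direction vectors."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    n = np.cross(d1, d2)                      # common normal vector
    dist = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)
    cos_t = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    angle = np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0)))
    return float(dist), float(angle)
```

    For two edges of a unit cube, say the x-axis and the line through (0, 0, 1) parallel to the y-axis, this gives distance 1 and angle 90 degrees, as expected.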

  10. Dosimetry of 192Ir sources used for endovascular brachytherapy

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.

    2001-02-01

    An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source to detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays extended with a volume-averaging correction factor was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements, performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS). They determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.

  11. Ground-state energy of HeH+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Binglu; Zhu Jiongming; Yan Zongchao

    2006-06-15

    The nonrelativistic ground-state energy of 4HeH+ is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10^10 is achieved, which improves on the best previous result of Pavanello et al. [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for 3HeH+ are also presented.

  12. Classification of Company Performance using Weighted Probabilistic Neural Network

    NASA Astrophysics Data System (ADS)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    Company performance can be classified by examining financial status, whether in a good or bad state. The classification can be achieved by parametric or non-parametric approaches; the neural network is a non-parametric method. One artificial neural network (ANN) model is the Probabilistic Neural Network (PNN), which consists of four layers: an input layer, a pattern layer, an addition layer, and an output layer. The distance function used is the Euclidean distance, and all classes share the same weights. This study uses a PNN modified in the weighting process between the pattern layer and the addition layer by incorporating the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling company performance with the WPNN achieves very high accuracy, reaching 100%.
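
    The core of the modification is replacing the Euclidean distance with the Mahalanobis distance in the pattern-layer weighting. A minimal sketch of the distance itself, assuming a known class mean and covariance (the full WPNN training procedure is not given in the abstract):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance sqrt((x - mean)^T C^-1 (x - mean)):
    unlike the Euclidean distance, it rescales each feature by the
    class covariance, so correlated or high-variance features do not
    dominate the pattern-layer response."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    inv_cov = np.linalg.inv(np.asarray(cov, dtype=float))
    return float(np.sqrt(diff @ inv_cov @ diff))
```

    With the identity covariance this reduces to the Euclidean distance; inflating the variance of a feature shrinks its contribution accordingly.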

  13. Numerical analysis of the shifting slabs applied in a wireless power transfer system to enhance magnetic coupling

    NASA Astrophysics Data System (ADS)

    Dong, Yayun; Yang, Xijun; Jin, Nan; Li, Wenwen; Yao, Chen; Tang, Houjun

    2017-05-01

    Shifting medium is a kind of metamaterial, which can optically shift a space or an object a certain distance away from its original position. Based on the shifting medium, we propose a concise pair of shifting slabs covering the transmitting or receiving coil in a two-coil wireless power transfer system to decrease the equivalent distance between the coils. The electromagnetic parameters of the shifting slabs are calculated by transformation optics. Numerical simulations validate that the shifting slabs can approximately shift the electromagnetic fields generated by the covered coil; thus, the magnetic coupling and the efficiency of the system are enhanced while the physical transmission distance remains unchanged. We also verify the advantages of the shifting slabs over the magnetic superlens. Finally, we provide two methods to fabricate shifting slabs based on split-ring resonators.

  14. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized from the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations in the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method, and the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction and is therefore better suited to assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration-plate position during calibration, and the accuracy is improved significantly.

  15. Comparison of Various Similarity Measures for Average Image Hash in Mobile Phone Application

    NASA Astrophysics Data System (ADS)

    Farisa Chaerul Haviana, Sam; Taufik, Muhammad

    2017-04-01

    One of the main issues in Content-Based Image Retrieval (CBIR) is the similarity measure used to compare image hashes. The key challenge is finding the most beneficial distance or similarity measure in terms of speed and computing cost, especially on devices with limited computing capability such as mobile phones. In this study, the twelve most common and popular distance or similarity measures are implemented in a mobile phone application, compared, and studied. The results show that all similarity measures performed comparably in the mobile phone application, which opens up more possibilities for combining methods in image retrieval.
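
    The abstract does not list the twelve measures, but the average-hash pipeline it builds on is standard: threshold a small grayscale thumbnail at its mean intensity, then compare the resulting bit strings, typically with the Hamming distance. A minimal sketch (this pairing is illustrative, not the paper's benchmark set):

```python
def average_hash(pixels):
    """Average hash: each pixel of a small grayscale thumbnail becomes
    1 if it is brighter than the mean intensity, else 0."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Hamming distance between two equal-length bit hashes: the number
    of differing bits, cheap enough for on-phone comparison."""
    return sum(a != b for a, b in zip(h1, h2))
```

    Similar images yield hashes with small Hamming distance, so retrieval reduces to ranking candidates by bit differences.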

  16. General practice cooperatives: long waiting times for home visits due to long distances?

    PubMed Central

    Giesen, Paul; van Lin, Nieke; Mokkink, Henk; van den Bosch, Wil; Grol, Richard

    2007-01-01

    Background The introduction of large-scale out-of-hours GP cooperatives has led to questions about the increased distances between the GP cooperatives and the homes of patients and the increasing waiting times for home visits in urgent cases. We studied the relationship between the patient's waiting time for a home visit and the distance to the GP cooperative. Further, we investigated whether other factors (traffic intensity, home-visit intensity, time of day, and degree of urgency) influenced waiting times. Methods Cross-sectional study at four GP cooperatives. We used variance analysis to calculate waiting times for various categories of traffic intensity, home-visit intensity, time of day, and degree of urgency. We used multiple logistic regression analysis to calculate to what degree these factors affected the ability to meet targets in urgent cases. Results The average waiting time for 5827 consultations was 30.5 min. Traffic intensity, home-visit intensity, time of day and urgency of the complaint all seemed to affect waiting times significantly. A total of 88.7% of all patients were seen within 1 hour. In the case of life-threatening complaints (U1), 68.8% of the patients were seen within 15 min, and 95.6% of those with acute complaints (U2) were seen within 1 hour. For patients with life-threatening complaints (U1), the percentage of visits that met the time target of 15 minutes decreased from 86.5% (less than 2.5 km) to 16.7% (20 km or more). Discussion and conclusion Although home-visit waiting times increase with increasing distance from the GP cooperative, it appears that traffic intensity, home-visit intensity, and urgency also influence waiting times. For patients with life-threatening complaints, waiting times increase sharply with distance. PMID:17295925

  17. Optimization and Analysis of Laser Beam Machining Parameters for Al7075-TiB2 In-situ Composite

    NASA Astrophysics Data System (ADS)

    Manjoth, S.; Keshavamurthy, R.; Pradeep Kumar, G. S.

    2016-09-01

    The paper focuses on laser beam machining (LBM) of in-situ synthesized Al7075-TiB2 metal matrix composite. Optimization and the influence of laser machining process parameters on surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy of the composites were studied. The Al7075-TiB2 metal matrix composite was synthesized by the in-situ reaction technique using the stir casting process. Taguchi's L9 orthogonal array was used to design the experimental trials. Standoff distance (SOD) (0.3-0.5 mm), cutting speed (1000-1200 m/hr) and gas pressure (0.5-0.7 bar) were considered as variable input parameters at three levels each, while power and nozzle diameter were kept constant, with air as the assisting gas. Optimized process parameters for surface roughness, VMRR and dimensional accuracy were calculated by generating main-effects plots of the signal-to-noise ratio (S/N ratio) for surface roughness, VMRR and dimensional error using Minitab software (version 16). The significance of standoff distance (SOD), cutting speed and gas pressure for surface roughness, VMRR and dimensional error was calculated using the analysis of variance (ANOVA) method. Results indicate that, for surface roughness, cutting speed (56.38%) is the most significant parameter, followed by standoff distance (41.03%) and gas pressure (2.6%). For VMRR, gas pressure (42.32%) is the most significant parameter, followed by cutting speed (33.60%) and standoff distance (24.06%). For dimensional error, standoff distance (53.34%) is the most significant parameter, followed by cutting speed (34.12%) and gas pressure (12.53%). Further, verification experiments were carried out to confirm the performance of the optimized process parameters.

  18. Accuracy of a separating foil impression using a novel polyolefin foil compared to a custom tray and a stock tray technique

    PubMed Central

    Pastoret, Marie-Hélène; Bühler, Julia; Weiger, Roland

    2017-01-01

    PURPOSE To compare the dimensional accuracy of three impression techniques: a separating foil impression, a custom tray impression, and a stock tray impression. MATERIALS AND METHODS A machined mandibular complete-arch metal model with special modifications served as a master cast. Three different impression techniques (n = 6 in each group) were performed with addition-cured silicone materials: i) putty-wash technique with a prefabricated metal tray (MET) using putty and regular-body material, ii) single-phase impression with a custom tray (CUS) using regular-body material, and iii) two-stage technique with a stock metal tray (SEP) using putty with a separating foil and regular-body material. All impressions were poured with epoxy resin. Six different distances (four intra-abutment and two inter-abutment distances) were gauged on the metal master model and on the casts with a microscope in combination with calibrated measuring software. The differences in the evaluated distances between the reference and the three test groups were calculated and expressed as mean (± SD). Additionally, the 95% confidence intervals were calculated, and significant differences between the experimental groups were assumed when the confidence intervals did not overlap. RESULTS Dimensional changes compared to the reference values varied between -74.01 and 32.57 µm (MET), -78.86 and 30.84 µm (CUS), and -92.20 and 30.98 µm (SEP). For the intra-abutment distances, no significant differences among the experimental groups were detected. CUS showed significantly higher dimensional accuracy for the inter-abutment distances, with -0.02 and -0.08 percentage deviation compared to MET and SEP. CONCLUSION The separating foil technique is a simple alternative to the custom tray technique for single-tooth restorations, while limitations may exist for extended restorations with multiple abutment teeth. PMID:28874996

  19. Polymer Uncrossing and Knotting in Protein Folding, and Their Role in Minimal Folding Pathways

    PubMed Central

    Mohazab, Ali R.; Plotkin, Steven S.

    2013-01-01

    We introduce a method for calculating the extent to which chain non-crossing is important in the most efficient, optimal trajectories or pathways for a protein to fold. This involves recording all unphysical crossing events of a ghost chain, and calculating the minimal uncrossing cost that would have been required to avoid such events. A depth-first tree search algorithm is applied to find minimal transformations to fold α, β, α/β, and knotted proteins. In all cases, the extra uncrossing/non-crossing distance is a small fraction of the total distance travelled by a ghost chain. Different structural classes may be distinguished by the amount of extra uncrossing distance, and the effectiveness of such discrimination is compared with other order parameters. It was seen that non-crossing distance over chain length provided the best discrimination between structural and kinetic classes. The scaling of non-crossing distance with chain length implies an inevitable crossover to entanglement-dominated folding mechanisms for sufficiently long chains. We further quantify the minimal folding pathways by collecting the sequence of uncrossing moves, which generally involve leg, loop, and elbow-like uncrossing moves, and rendering the collection of these moves over the unfolded ensemble as a multiple-transformation “alignment”. The consensus minimal pathway is constructed and shown schematically for representative cases of an α, a β, and a knotted protein. An overlap parameter is defined between pathways; we find that α proteins have minimal overlap, indicating diverse folding pathways, knotted proteins are highly constrained to follow a dominant pathway, and β proteins are somewhere in between. Thus we have shown how topological chain constraints can induce dominant pathway mechanisms in protein folding. PMID:23365638

  20. Analysis of helium-rich subdwarf O stars. 1: NLTE models, methods, and fits for 21 Palomar Green survey sdOs

    NASA Technical Reports Server (NTRS)

    Thejll, P.; Bauer, F.; Saffer, R.; Liebert, J.; Kunze, D.; Shipman, H. L.

    1994-01-01

    Atmospheric parameters for 21 helium-rich hot subdwarf O stars from the Palomar Green survey are found from fits of non-local thermodynamic equilibrium (NLTE) models to optical spectra. About 250 new NLTE models in the parameter range T(sub eff) from 35,000 to 65,000 K, log (g) from 4.0 to 6.5, and epsilon(He) from 50% He to 99% He have been calculated. A fit for each object is presented. Estimated distances and luminosities are calculated, assuming a mass of 0.5 solar masses. Large distances above the Galactic plane are found, and this implies that the majority of the sdO stars belong to a different stellar population than the sdB, planetary nebulae and white dwarf stars. Possibilities may include the halo and a 'thick disk.' Kinematical data from the literature are also discussed in light of the derived distances. Five of the stars have been observed, modeled, and analyzed by Dreizler et al., and significant differences exist between their results and ours for these stars. The reason for these differences is not yet known, but the problem does not alter the conclusions about population membership that we present.

  1. On the concept of critical surface excess of micellization.

    PubMed

    Talens-Alesson, Federico I

    2010-11-16

    The critical surface excess of micellization (CSEM) should be regarded as the critical condition for micellization of ionic surfactants instead of the critical micelle concentration (CMC). There is a correspondence between the surface excesses Γ of anionic, cationic, and zwitterionic surfactants at their CMCs, which would be the CSEM values, and the critical association distance for ionic pair association calculated using Bjerrum's correlation. Further support for this concept is given by an accurate method for the prediction of the relative binding of alkali cations onto dodecylsulfate (NaDS) micelles. This method uses a relative binding strength parameter calculated from the values of surface excess Γ at the CMC of the alkali dodecylsulfates. This links the binding of a given cation onto micelles with the onset of micellization of its surfactant salt. The CSEM concept implies that micelles form at the air-water interface unless another surface with greater affinity for micelles exists. The process would start when surfactant monomers are close enough to each other for ionic pairing with counterions and the subsequent assembly of these pairs becomes unavoidable. This would explain why the surface excess Γ values of different surfactants are more similar than their CMCs: the latter are just the bulk phase concentrations in equilibrium with chemicals with different hydrophobicity. An intriguing implication is that CSEM values may be used to calculate the actual critical distances of ionic pair formation for different cations, replacing Bjerrum's estimates, which only discriminate by the magnitude of the charge.
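    The Bjerrum criterion invoked above can be made concrete with a short calculation. The sketch below is not from the paper; it evaluates the Bjerrum length and the resulting critical ion-pair association distance under illustrative conditions (water at 298 K, relative permittivity 78.5):

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
KB = 1.380649e-23               # Boltzmann constant, J/K

def bjerrum_length(eps_r, temp_k):
    """Distance at which the Coulomb energy of two unit charges equals kT."""
    return E_CHARGE**2 / (4 * math.pi * EPS0 * eps_r * KB * temp_k)

def critical_association_distance(z1, z2, eps_r, temp_k):
    """Bjerrum's critical distance for ion-pair association, |z1*z2| * l_B / 2."""
    return abs(z1 * z2) * bjerrum_length(eps_r, temp_k) / 2

# Water at 298 K: l_B is about 0.71 nm, critical distance about 0.36 nm
lb = bjerrum_length(78.5, 298.0)
rc = critical_association_distance(+1, -1, 78.5, 298.0)
print(f"Bjerrum length: {lb*1e9:.3f} nm, critical distance: {rc*1e9:.3f} nm")
```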

  2. Spatial correlation of probabilistic earthquake ground motion and loss

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.

    2001-01-01

    Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for calculating the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses. Interevent variability increases the spatial correlation of losses. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of the strong impact on estimating earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
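    The matrix-based route from spatial correlation to portfolio-loss variance can be sketched as follows. The exponential distance-decay correlation model and all numbers here are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def portfolio_loss_variance(sigma, corr):
    """Variance of the total portfolio loss from per-site loss standard
    deviations and a spatial correlation matrix:
    Var(sum_i L_i) = sum_ij rho_ij * sigma_i * sigma_j."""
    sigma = np.asarray(sigma, dtype=float)
    return float((corr * np.outer(sigma, sigma)).sum())

def distance_decay_corr(dists_km, scale_km=30.0):
    """Toy correlation model: exponential decay with inter-site distance."""
    return np.exp(-np.asarray(dists_km) / scale_km)

sites = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 40.0]])   # coordinates (km)
d = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
rho = distance_decay_corr(d)
sigma = np.array([1.0, 1.0, 1.0])          # annual loss std dev per site
total_var = portfolio_loss_variance(sigma, rho)
print(total_var)   # between 3 (independent sites) and 9 (perfect correlation)
```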

  3. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud I.; Badawi, M. S.; Ruskov, I. N.; El-Khatib, A. M.; Grozdanov, D. N.; Thabet, A. A.; Kopatch, Yu. N.; Gouda, M. M.; Skoy, V. R.

    2015-01-01

    Gamma-ray detector systems are important instruments in a broad range of science, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The core of the ET method is the algorithm for calculating the effective solid angle ratios for an isotropically emitting point gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector material, the end-cap, and the other materials between the gamma-source and the detector. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
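    The solid-angle ratio at the core of the ET method can be illustrated with the simplest case: an on-axis point source and a bare circular detector face. The detector radius and reference efficiency below are hypothetical, and the attenuation weighting is omitted:

```python
import math

def solid_angle_disk(h_cm, r_cm):
    """Geometrical solid angle (sr) subtended by a circular detector face of
    radius r_cm at an on-axis point source h_cm away."""
    return 2.0 * math.pi * (1.0 - h_cm / math.hypot(h_cm, r_cm))

def efficiency_transfer(eff_ref, h_ref_cm, h_new_cm, r_cm):
    """Core idea of the ET method: scale a reference full-energy peak
    efficiency by the ratio of solid angles.  The real method uses
    attenuation-weighted 'effective' solid angles; plain geometry is used
    here to keep the sketch short."""
    return eff_ref * solid_angle_disk(h_new_cm, r_cm) / solid_angle_disk(h_ref_cm, r_cm)

# Hypothetical detector (radius 3.81 cm), reference efficiency 0.012 at 20 cm
eff_40 = efficiency_transfer(0.012, 20.0, 40.0, 3.81)
print(f"transferred efficiency at 40 cm: {eff_40:.5f}")
```

Doubling the distance roughly quarters the efficiency, as the inverse-square limit suggests.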

  4. Estimating the Distance to the Moon--Its Relevance to Mathematics. Core-Plus Mathematics Project.

    ERIC Educational Resources Information Center

    Stern, David P.

    This document features an activity for estimating the distance from the earth to the moon during a solar eclipse based on calculations performed by the ancient Greek astronomer Hipparchus. Historical, mathematical, and scientific details about the calculation are provided. Internet resources for teachers to obtain more information on the subject…

  5. An empirical formula to calculate the full energy peak efficiency of scintillation detectors.

    PubMed

    Badawi, Mohamed S; Abd-Elzaher, Mohamed; Thabet, Abouzeid A; El-khatib, Ahmed M

    2013-04-01

    This work provides an empirical formula to calculate the full energy peak efficiency (FEPE) of different detectors using the effective solid angle ratio derived from experimental measurements. The FEPE curves of a 2″ × 2″ NaI(Tl) detector at seven different axial distances were depicted over a wide energy range from 59.53 to 1408 keV using standard point sources. The distinction was based on the effects of the source energy and the source-to-detector distance. Good agreement was found between the measured and calculated efficiency values for source-to-detector distances of 20, 25, 30, 35, 40, 45 and 50 cm. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pease, J.H.

    The three dimensional structures of several small peptides were determined using a combination of ¹H nuclear magnetic resonance (NMR) and distance geometry calculations. These techniques were found to be particularly helpful for analyzing structural differences between related peptides since all of the peptides' ¹H NMR spectra are very similar. The structures of peptides from two separate classes are presented. Peptides in the first class are related to apamin, an 18 amino acid peptide toxin from honey bee venom. The ¹H NMR assignments and secondary structure determination of apamin were done previously. Quantitative NMR measurements and distance geometry calculations were done to calculate apamin's three dimensional structure. Peptides in the second class are 48 amino acid toxins from the sea anemone Radianthus paumotensis. The ¹H NMR assignments of toxin II were done previously. The ¹H NMR assignments of toxin III and the distance geometry calculations for both peptides are presented.
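    As a toy stand-in for such distance geometry calculations, classical multidimensional scaling recovers 3-D coordinates (up to rotation and translation) from an exact pairwise distance matrix; real NMR work instead uses incomplete, bounded distance restraints:

```python
import numpy as np

def classical_mds(dist, ndim=3):
    """Embed points in `ndim` dimensions from a full pairwise distance matrix
    (classical multidimensional scaling, the simplest relative of the
    distance geometry calculations used with NMR restraints)."""
    d2 = np.asarray(dist, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ d2 @ j                        # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:ndim]        # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Recover coordinates from exact distances between four toy "atoms"
pts = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = classical_mds(dist, ndim=3)
rec_dist = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
ok = np.allclose(rec_dist, dist, atol=1e-8)      # distances are reproduced
print(ok)
```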

  7. Investigating ground vibration to calculate the permissible charge weight for blasting operations of Gotvand-Olya dam underground structures / Badania drgań gruntu w celu określenia dopuszczalnego ciężaru ładunku wybuchowego przy pracach strzałowych w podziemnych elementach tamy w Gotvand-Olya

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Bakhshandeh Amnieh, Hassan; Bahadori, Moein

    2012-12-01

    Ground vibration, air vibration, fly rock, undesirable displacement and fragmentation are some inevitable side effects of blasting operations that can cause serious damage to the surrounding environment. Peak Particle Velocity (PPV) is the main criterion in the assessment of the amount of damage caused by ground vibration. There are different standards for the determination of the safe level of the PPV. To calculate the permissible amount of explosive needed to control the damage to the underground structures of Gotvand Olya dam, sixteen 3-component records (48 traces in total) generated from 4 blasts were used. These operations were recorded in 3 directions (radial, transverse and vertical) by four PG-2002 seismographs with GS-11D 3-component seismometers, and the records were analyzed with the help of the DADISP software. The PPV was predicted using the scaled distance method combined with a Simulated Annealing (SA) hybrid method. The scaled distance yielded a relation for the prediction of the PPV, whose correlation was then increased to 0.94 with the help of the SA hybrid method. Relying on the high correlation of this relation, and considering a minimum distance of 56.2 m to the center of the blast site and a permissible PPV of 178 mm/s (for 2-day-old concrete), the maximum charge weight per delay came out to be 212 kg.
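    The scaled-distance step can be sketched by inverting the usual attenuation law PPV = k * (D / W^(1/2))^(-b) for the charge weight W. The site constants k and b below are purely illustrative, not the values fitted in the study:

```python
def permissible_charge(d_m, ppv_limit, k, b):
    """Invert the scaled-distance attenuation law
        PPV = k * (d / sqrt(W))**(-b)
    for the maximum charge weight W (kg) per delay at stand-off d_m (m),
    given a permissible PPV (mm/s).  k and b are site constants fitted from
    blast records; the values used below are hypothetical."""
    return d_m**2 * (ppv_limit / k) ** (2.0 / b)

# Hypothetical site constants k=1500, b=1.6; 56.2 m stand-off, 178 mm/s limit
w = permissible_charge(56.2, 178.0, 1500.0, 1.6)
print(f"max charge per delay: {w:.0f} kg")
```

A higher permissible PPV allows a larger charge, as the monotonic inversion implies.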

  8. EEG character identification using stimulus sequences designed to maximize minimal Hamming distance.

    PubMed

    Fukami, Tadanori; Shimada, Takamasa; Forney, Elliott; Anderson, Charles W

    2012-01-01

    In this study, we have improved upon the P300 speller brain-computer interface paradigm by introducing a new character encoding method. Our approach detects the intended character not by classifying target and non-target responses, but by identifying the character that maximizes the difference between P300 amplitudes for target and non-target stimuli. Each bit in a code corresponds to a flashing character ('1') or a non-flashing one ('0'). The codes were constructed to maximize the minimum Hamming distance between the characters. Electroencephalography was used to identify the characters from a waveform calculated by adding and subtracting the responses to target and non-target stimuli according to the codes. This stimulus presentation method was applied to a 3×3 character matrix, and the results were compared with those of a conventional P300 speller of the same size. Our method reduced the time until the correct character was obtained by 24%.
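    The code-design criterion can be checked in a few lines. The 6-bit codes below are toy examples, not the stimulus sequences used in the study:

```python
from itertools import combinations

def hamming(a, b):
    """Number of bit positions at which two equal-length codes differ."""
    return sum(x != y for x, y in zip(a, b))

def min_hamming_distance(codes):
    """Minimum pairwise Hamming distance over a code set; the stimulus
    codes are designed to make this value as large as possible."""
    return min(hamming(a, b) for a, b in combinations(codes, 2))

# Toy 6-bit codes ('1' = flash, '0' = no flash) for four characters;
# every pair differs in 4 positions, so single-bit noise is survivable.
codes = ["110100", "011010", "101001", "000111"]
print(min_hamming_distance(codes))
```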

  9. Interplay between strong correlation and adsorption distances: Co on Cu(001)

    NASA Astrophysics Data System (ADS)

    Bahlke, Marc Philipp; Karolak, Michael; Herrmann, Carmen

    2018-01-01

    Adsorbed transition metal atoms can have partially filled d or f shells due to strong on-site Coulomb interaction. Capturing all effects originating from electron correlation in such strongly correlated systems is a challenge for electronic structure methods. It requires a sufficiently accurate description of the atomistic structure (in particular bond distances and angles), which is usually obtained from first-principles Kohn-Sham density functional theory (DFT), which due to the approximate nature of the exchange-correlation functional may provide an unreliable description of strongly correlated systems. To elucidate the consequences of this popular procedure, we apply a combination of DFT with the Anderson impurity model (AIM), as well as DFT+U, for a calculation of the potential energy surface along the Co/Cu(001) adsorption coordinate, and compare the results with those obtained from DFT. The adsorption minimum is shifted towards larger distances by applying DFT+AIM, or the much cheaper DFT+U method, compared to the corresponding spin-polarized DFT results, by a magnitude comparable to variations between different approximate exchange-correlation functionals (0.08 to 0.12 Å). This shift originates from an increasing correlation energy at larger adsorption distances, which can be traced back to the Co 3d_xy and 3d_z2 orbitals being more correlated as the adsorption distance is increased. We can show that such considerations are important, as they may strongly affect electronic properties such as the Kondo temperature.

  10. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for controlling the progression of diabetes and preventing its complications. At present, the clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectrum measurement, the measuring distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use a Monte Carlo simulation to find the point where most photons come out. But there is a problem: the epidermal layer contains no artery, vein or capillary vessel. Thus, when photons propagate and interact with tissue in the epidermal layer, no glucose information is imparted to them. A new criterion is proposed to determine the optimal distance, named the effective path length in this paper. The path length of each photon travelling in the dermis is recorded when running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the greatest total effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
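    A deliberately crude random-walk sketch of the effective path length bookkeeping. All optical parameters and layer depths are invented for illustration; a real simulation samples anisotropic scattering and per-layer absorption/scattering coefficients:

```python
import math
import random

def effective_path_lengths(n_photons, mu_t=10.0, dermis=(0.01, 0.2), seed=1):
    """Toy 2-D random walk illustrating the 'effective path length' idea:
    only the path travelled inside the dermis (depth between dermis[0] and
    dermis[1] cm) carries glucose information.  Returns, for each photon that
    re-emerges at the surface, its exit x position and dermal path length.
    mu_t is a made-up total interaction coefficient (1/cm)."""
    rng = random.Random(seed)
    exits = []
    for _ in range(n_photons):
        x, z, eff = 0.0, 0.0, 0.0
        theta = rng.uniform(0.0, math.pi)            # launched into the tissue
        for _ in range(200):                         # cap the walk length
            step = -math.log(rng.random()) / mu_t    # exponential free path
            x += step * math.cos(theta)
            z += step * math.sin(theta)              # z > 0 is below the surface
            if dermis[0] <= z <= dermis[1]:
                eff += step                          # crude: credit whole step
            if z <= 0.0:                             # photon re-emerges
                exits.append((x, eff))
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)  # isotropic scattering
    return exits

exits = effective_path_lengths(5000)
print(f"{len(exits)} photons re-emerged")
```

Binning the exits by x and summing `eff` per bin gives the profile from which the best fiber-detector separation would be read off.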

  11. Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test

    PubMed Central

    Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.

    2015-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468

  12. Insight on agglomerates of gold nanoparticles in glass based on surface plasmon resonance spectrum: study by multi-spheres T-matrix method

    NASA Astrophysics Data System (ADS)

    Avakyan, L. A.; Heinz, M.; Skidanenko, A. V.; Yablunovski, K. A.; Ihlemann, J.; Meinertz, J.; Patzig, C.; Dubiel, M.; Bugaev, L. A.

    2018-01-01

    The formation of a localized surface plasmon resonance (SPR) spectrum of randomly distributed gold nanoparticles in the surface layer of silicate float glass, generated and implanted by UV ArF-excimer laser irradiation of a thin gold layer sputter-coated on the glass surface, was studied by the T-matrix method, which enables particle agglomeration to be taken into account. The experimental technique used is promising for the production of submicron patterns of plasmonic nanoparticles (given by laser masks or gratings) without damage to the glass surface. Analysis of the applicability of the multi-spheres T-matrix (MSTM) method to the studied material was performed through calculations of SPR characteristics for differently arranged and structured gold nanoparticles (gold nanoparticles in solution, particle pairs, and core-shell silver-gold nanoparticles) for which either experimental data or results of modeling by other methods are available. For the studied gold nanoparticles in glass, it was revealed that the theoretical description of their SPR spectrum requires consideration of the plasmon coupling between particles, which can be done effectively by MSTM calculations. The obtained statistical distributions over particle sizes and over interparticle distances demonstrated saturation behavior with respect to the number of particles under consideration, which enabled us to determine the effective aggregate of particles sufficient to form the SPR spectrum. The suggested technique for fitting an experimental SPR spectrum of gold nanoparticles in glass, by varying the geometrical parameters of the particle aggregate in recurring calculations of the spectrum by the MSTM method, enabled us to determine statistical characteristics of the aggregate: the average distance between particles, the average size, and the size distribution of the particles.
The fitting strategy of the SPR spectrum presented here can be applied to nanoparticles of any nature and in various substances, and, in principle, can be extended for particles with non-spherical shapes, like ellipsoids, rod-like and other T-matrix-solvable shapes.

  13. Kinematic Distances: A Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.

    2018-03-01

    Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 {kpc}) and 17% (0.42 {kpc}), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 {kpc}. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
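    The traditional kinematic distance calculation that Methods A and B refine can be sketched for a flat rotation curve; the Monte Carlo method then resamples the velocity and rotation-curve parameters, which is omitted here. The parameter values are illustrative:

```python
import math

R0 = 8.34       # solar Galactocentric radius (kpc); illustrative value
THETA0 = 240.0  # solar orbital speed (km/s); flat rotation curve assumed

def kinematic_distances(glong_deg, v_lsr):
    """Near/far kinematic distances (kpc) for a source at Galactic longitude
    glong_deg (first quadrant) with LSR velocity v_lsr (km/s), under a flat
    rotation curve.  A minimal sketch of the traditional method; returns None
    when the velocity exceeds the tangent-point velocity."""
    l = math.radians(glong_deg)
    # v_lsr = THETA0 * sin(l) * (R0/R - 1)  ->  Galactocentric radius R
    r = R0 / (v_lsr / (THETA0 * math.sin(l)) + 1.0)
    disc = r * r - (R0 * math.sin(l)) ** 2
    if disc < 0:
        return None                       # velocity beyond the tangent point
    root = math.sqrt(disc)
    # law of cosines: R^2 = R0^2 + d^2 - 2*R0*d*cos(l), two roots = near/far
    return R0 * math.cos(l) - root, R0 * math.cos(l) + root

print(kinematic_distances(30.0, 60.0))    # (near, far) ambiguity in kpc
```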

  14. Quantitative evaluation of deep and shallow tissue layers' contribution to fNIRS signal using multi-distance optodes and independent component analysis.

    PubMed

    Funane, Tsukasa; Atsumori, Hirokazu; Katura, Takusige; Obata, Akiko N; Sato, Hiroki; Tanikawa, Yukari; Okada, Eiji; Kiguchi, Masashi

    2014-01-15

    To quantify the effect of absorption changes in the deep tissue (cerebral) and shallow tissue (scalp, skin) layers on functional near-infrared spectroscopy (fNIRS) signals, a method using multi-distance (MD) optodes and independent component analysis (ICA), referred to as the MD-ICA method, is proposed. In previous studies, when the signal from the shallow tissue layer (shallow signal) needs to be eliminated, it was often assumed that the shallow signal had no correlation with the signal from the deep tissue layer (deep signal). In this study, no relationship between the waveforms of deep and shallow signals is assumed, and instead, it is assumed that both signals are linear combinations of multiple signal sources, which allows the inclusion of a "shared component" (such as systemic signals) that is contained in both layers. The method also assumes that the partial optical path length of the shallow layer does not change, whereas that of the deep layer linearly increases along with the increase of the source-detector (S-D) distance. Deep- and shallow-layer contribution ratios of each independent component (IC) are calculated using the dependence of the weight of each IC on the S-D distance. Reconstruction of deep- and shallow-layer signals are performed by the sum of ICs weighted by the deep and shallow contribution ratio. Experimental validation of the principle of this technique was conducted using a dynamic phantom with two absorbing layers. Results showed that our method is effective for evaluating deep-layer contributions even if there are high correlations between deep and shallow signals. Next, we applied the method to fNIRS signals obtained on a human head with 5-, 15-, and 30-mm S-D distances during a verbal fluency task, a verbal working memory task (prefrontal area), a finger tapping task (motor area), and a tetrametric visual checker-board task (occipital area) and then estimated the deep-layer contribution ratio. 
    To evaluate the signal separation performance of our method, we computed the temporal correlations of the shallow signal with a laser-Doppler flowmetry (LDF) signal and with the signal from the nearest channel (5-mm S-D distance). We demonstrated that the shallow signals have a higher temporal correlation with the LDF signals and with the 5-mm S-D distance channel than the deep signals do. These results show that the MD-ICA method can discriminate between deep and shallow signals. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    NASA Astrophysics Data System (ADS)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high-dimensional black holes.

  16. A novel method for quantitative geosteering using azimuthal gamma-ray logging.

    PubMed

    Yuan, Chao; Zhou, Cancan; Zhang, Feng; Hu, Song; Li, Chaoliu

    2015-02-01

    A novel method for quantitative geosteering by using azimuthal gamma-ray logging is proposed. Real-time up and bottom gamma-ray logs when a logging tool travels through a boundary surface with different relative dip angles are simulated with the Monte Carlo method. Study results show that response points of up and bottom gamma-ray logs when the logging tool moves towards a highly radioactive formation can be used to predict the relative dip angle, and then the distance from the drilling bit to the boundary surface is calculated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Color Image Classification Using Block Matching and Learning

    NASA Astrophysics Data System (ADS)

    Kondo, Kazuki; Hotta, Seiji

    In this paper, we propose block matching and learning for color image classification. In our method, training images are partitioned into small blocks. Given a test image, it is also partitioned into small blocks, and mean blocks corresponding to each test block are calculated from neighboring training blocks. Our method classifies a test image into the class that has the shortest total sum of distances between the mean blocks and the test ones. We also propose a learning method for reducing the memory requirement. Experimental results show that our classification outperforms other classifiers such as a support vector machine with a bag of keypoints.
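    A minimal sketch of this block matching scheme, matching only same-position blocks (the neighborhood search and the memory-reducing learning step are omitted); all images and class names are toy data:

```python
import numpy as np

def block_view(img, bs):
    """Split a square image into non-overlapping bs x bs blocks, flattened."""
    h, w = img.shape
    blocks = img.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return blocks.reshape(-1, bs * bs)

def classify(test_img, train_imgs, labels, bs=4, k=3):
    """For every test block, build a mean block from the k nearest training
    blocks of each class and sum the block-to-mean distances; the class with
    the smallest total wins."""
    t_blocks = block_view(test_img, bs)
    totals = {}
    for c in set(labels):
        c_blocks = [block_view(im, bs) for im, y in zip(train_imgs, labels) if y == c]
        total = 0.0
        for i, tb in enumerate(t_blocks):
            cand = np.stack([cb[i] for cb in c_blocks])        # same position
            d = np.linalg.norm(cand - tb, axis=1)
            mean_block = cand[np.argsort(d)[:k]].mean(axis=0)  # mean of k nearest
            total += np.linalg.norm(mean_block - tb)
        totals[c] = total
    return min(totals, key=totals.get)

rng = np.random.default_rng(0)
bright = [np.full((8, 8), 200.0) + rng.normal(0, 5, (8, 8)) for _ in range(4)]
dark = [np.full((8, 8), 50.0) + rng.normal(0, 5, (8, 8)) for _ in range(4)]
train, labels = bright + dark, ["bright"] * 4 + ["dark"] * 4
pred = classify(np.full((8, 8), 195.0), train, labels)   # near-bright test image
print(pred)
```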

  18. The Hetu'u Global Network: Measuring the Distance to the Sun with the Transit of Venus

    NASA Astrophysics Data System (ADS)

    Rodriguez, David; Faherty, J.

    2013-01-01

    In the spirit of historic astronomical endeavors, we invited school groups across the globe to collaborate in a solar distance measurement using the 2012 transit of Venus. In total, our group (stationed at Easter Island, Chile) recruited 19 school groups spread over 6 continents and 10 countries to participate in our Hetu’u Global Network. Applying the methods of French astronomer Joseph-Nicolas Delisle, we used individual second and third Venus-Sun contact times to calculate the distance to the Sun. Ten of the sites in our network had favorable weather; 8 of these measured second contact and 5 measured third contact, leading to consistent solar distance measurements of 152 +/- 30 million km and 163 +/- 30 million km, respectively. The distance to the Sun at the time of the transit was 152.25 million km; therefore, our measurements are consistent within 1 sigma with the known value. The goal of our international school group network was to inspire the next generation of scientists using the excitement and accessibility of such a rare astronomical event. In the process, we connected hundreds of participating students representing a diverse, multi-cultural group with differing political, economic, and racial backgrounds.

  19. Multivariate model of female black bear habitat use for a Geographic Information System

    USGS Publications Warehouse

    Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.

    1993-01-01

    Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
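    The core of the model is the Mahalanobis distance from each map cell's habitat variables to the mean of the variables at animal locations; cells with small values resemble used habitat. A sketch with two hypothetical variables (elevation and distance to road) in place of the seven used in the study:

```python
import numpy as np

def mahalanobis_map(obs, grid):
    """Squared Mahalanobis distance from the mean of habitat variables at
    animal observations (obs, n x p) to every cell in grid (m x p)."""
    mu = obs.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(obs, rowvar=False))
    diff = grid - mu
    # d2_i = diff_i . cov_inv . diff_i for each row i
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(2)
# Hypothetical variables per cell: elevation (m), distance to road (km)
obs = rng.normal([600.0, 2.0], [50.0, 0.5], size=(100, 2))  # bear locations
grid = np.array([[610.0, 2.1],    # cell resembling used habitat
                 [300.0, 0.1]])   # low-elevation cell next to a road
d2 = mahalanobis_map(obs, grid)
print(d2)
```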

  20. Interpretation of Gamma Index for Quality Assurance of Simultaneously Integrated Boost (SIB) IMRT Plans for Head and Neck Carcinoma

    NASA Astrophysics Data System (ADS)

    Atiq, Maria; Atiq, Atia; Iqbal, Khalid; Shamsi, Quratul ain; Andleeb, Farah; Buzdar, Saeed Ahmad

    2017-12-01

    Objective: The Gamma Index is a prerequisite for estimating the point-by-point difference between measured and calculated dose distributions in terms of both Distance to Agreement (DTA) and Dose Difference (DD). This study aims to determine what percentage of pixels passing a given criterion assures a good-quality plan, and suggests the gamma index as an efficient mechanism for dose verification of Simultaneous Integrated Boost (SIB) Intensity Modulated Radiotherapy plans. Method: In this study, dose was calculated for 14 head and neck patients, and IMRT Quality Assurance was performed with portal dosimetry using the Eclipse treatment planning system. Eclipse software has a Gamma analysis function to compare measured and calculated dose distributions. Plans in this study were deemed acceptable when the passing rate was 95%, using a Distance to Agreement (DTA) tolerance of 3 mm and a Dose Difference (DD) tolerance of 5%. Result and Conclusion: Thirteen cases passed the 95% tolerance criterion set by our institution. The Confidence Limit for DD is 9.3%, and for the gamma criterion our local CL came out to be 2.0% (i.e., 98.0% passing). A lack of correlation was found between DD and γ passing rate, with an R2 of 0.0509. Our findings underline the importance of the gamma analysis method in predicting the quality of dose calculation. A passing rate of 95% was achieved in 93% of cases, which is an adequate level of accuracy for the analyzed plans, thus confirming the robustness of the SIB IMRT treatment technique. This study can be extended to investigate a gamma criterion of 5%/3 mm for different tumor localities and to explore confidence limits for target volumes of small extent and simple geometry.
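    As a rough illustration of how such a passing rate is obtained, here is a minimal 1D gamma computation in the spirit of the tolerances quoted above (3 mm DTA, 5% DD). The profiles and the simple global normalization are illustrative assumptions, not the Eclipse implementation:

```python
import math

# Minimal 1D sketch of the gamma index: a measured point passes when some
# calculated point lies inside the combined DTA / dose-difference ellipsoid
# (gamma <= 1). Profiles and tolerances are illustrative, not clinical data.

def gamma_1d(meas, calc, positions, dta_mm=3.0, dd_frac=0.05):
    """Return the gamma value for every measured point. `meas`/`calc` are
    dose profiles sampled at `positions` (mm); the dose difference is
    normalized to the measured maximum (a global-normalization assumption)."""
    d_ref = max(meas) * dd_frac
    gammas = []
    for x_m, d_m in zip(positions, meas):
        g = min(math.sqrt(((x_c - x_m) / dta_mm) ** 2
                          + ((d_c - d_m) / d_ref) ** 2)
                for x_c, d_c in zip(positions, calc))
        gammas.append(g)
    return gammas

positions  = [0.0, 1.0, 2.0, 3.0, 4.0]
measured   = [1.00, 0.98, 0.90, 0.70, 0.40]
calculated = [1.00, 0.97, 0.91, 0.69, 0.41]

gammas = gamma_1d(measured, calculated, positions)
passing_rate = 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
print(passing_rate)   # → 100.0: every point here agrees within 5%/3 mm
```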

  1. Shape and structure of N=Z ^64Ge; Electromagnetic transition rates from the application of the Recoil Distance Method to knock-out reactions.

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Dewald, A.

    2007-04-01

    Transition rate measurements are reported for the 2^+_1 and 2^+_2 states in the N=Z nucleus ^64Ge. The measurement was done utilizing the Recoil Distance Method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knock-out reaction. RDM studies of knock-out and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for intermediate-spin excited states in a wide range of exotic nuclei. Large-scale shell-model calculations applying the recently developed GXPF1A interaction are in excellent agreement with these results. Theoretical analysis suggests that ^64Ge is a collective γ-soft anharmonic vibrator.

  2. A switching formation strategy for obstacle avoidance of a multi-robot system based on robot priority model.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2015-05-01

    This paper describes a switching formation strategy for multi-robots with velocity constraints to avoid and cross obstacles. In the strategy, a leader robot plans a safe path using the geometric obstacle avoidance control method (GOACM). By calculating new desired distances and bearing angles with respect to the leader robot, the follower robots switch into a safe formation. To handle collision avoidance, a novel robot priority model, based on the desired distance and bearing angle between the leader and follower robots, is applied during the obstacle avoidance process. The adaptive tracking control algorithm guarantees that the trajectory and velocity tracking errors converge to zero. Simulation and experimental results demonstrate the validity of the proposed methods: the multi-robot system effectively forms and switches formation while avoiding obstacles without collisions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Analytical Wave Functions for Ultracold Collisions.

    NASA Astrophysics Data System (ADS)

    Cavagnero, M. J.

    1998-05-01

    Secular perturbation theory of long-range interactions (M. J. Cavagnero, PRA 50, 2841 (1994)) has been generalized to yield accurate wave functions for near-threshold processes, including low-energy scattering processes of interest at ultracold temperatures. In particular, solutions of Schrödinger's equation have been obtained for motion in the combined r^-6, r^-8, and r^-10 potentials appropriate for describing an ultracold collision of two neutral ground-state atoms. Scattering lengths and effective ranges appropriate to such potentials are readily calculated at distances comparable to the LeRoy radius, where exchange forces can be neglected, thereby eliminating the need to integrate Schrödinger's equation to large internuclear distances. Our method yields accurate base pair solutions well beyond the energy range of effective range theories, making possible the application of multichannel quantum defect theory (MQDT) and R-matrix methods to the study of ultracold collisions.

  4. An empirical model to determine the hadronic resonance contributions to \\overline{B}{}^0 → \\overline{K}{}^{*0} μ^+ μ^- transitions

    NASA Astrophysics Data System (ADS)

    Blake, T.; Egede, U.; Owen, P.; Petridis, K. A.; Pomery, G.

    2018-06-01

    A method for analysing the hadronic resonance contributions in \\overline{B}{} ^0 → \\overline{K}{} ^{*0} μ ^+ μ ^- decays is presented. This method uses an empirical model that relies on measurements of the branching fractions and polarisation amplitudes of final states involving J^{PC}=1^{-} resonances, relative to the short-distance component, across the full dimuon mass spectrum of \\overline{B}{} ^0 → \\overline{K}{} ^{*0} μ ^+ μ ^- transitions. The model is in good agreement with existing calculations of hadronic non-local effects. The effect of this contribution to the angular observables is presented and it is demonstrated how the narrow resonances in the q^2 spectrum provide a dramatic enhancement to CP-violating effects in the short-distance amplitude. Finally, a study of the hadronic resonance effects on lepton universality ratios, R_{K^{(*)}}, in the presence of new physics is presented.

  5. Appropriate location of the nipple-areola complex in males.

    PubMed

    Shulman, O; Badani, E; Wolf, Y; Hauben, D J

    2001-08-01

    Gynecomastia is a common deformity encountered by plastic surgeons. The appropriate location of the nipple-areola complex is a major determinant of the aesthetic success of the procedure. To study the natural location of the nipple-areola complex in the normally built male, 50 nonobese men with no evidence of gynecomastia and an average age of 27.9 years were examined. Three ratios were calculated and found to be relatively constant; they were the ratio between the height of the nipple and the height of the patient, the ratio between the distance between the nipples and chest circumference, and the ratio between the suprasternal notch-to-nipple distance and the height of the patient. Using these three parameters, a method of locating the nipple-areola complex on the male chest wall was devised. The method is advocated as a reliable, simple, and useful technique.

  6. High precision UTDR measurements by sonic velocity compensation with reference transducer.

    PubMed

    Stade, Sam; Kallioinen, Mari; Mänttäri, Mika; Tuuva, Tuure

    2014-07-02

    An ultrasonic sensor design with sonic velocity compensation is developed to improve the accuracy of distance measurement in membrane modules. High-accuracy real-time distance measurements are needed in membrane fouling and compaction studies. The benefits of sonic velocity compensation with a reference transducer are compared to the sonic velocity calculated from the measured temperature and pressure using the model by Belogol'skii, Sekoyan et al. In the experiments the temperature was changed from 25 to 60 °C at pressures of 0.1, 0.3 and 0.5 MPa. The set measurement distance was 17.8 mm. Distance measurements with sonic velocity compensation were over ten times more accurate than the ones calculated based on the model. Using the sonic velocity measured with the reference transducer, the standard deviations for the distance measurements varied from 0.6 to 2.0 µm, while using the calculated sonic velocity the standard deviations were 21-39 µm. In industrial liquors, not only the temperature and the pressure, which were studied in this paper, but also the properties of the filtered solution, such as solute concentration, density, viscosity, etc., may vary greatly, leading to inaccuracy in the use of the Belogol'skii, Sekoyan et al. model. Therefore, calibration of the sonic velocity with reference transducers is needed for accurate distance measurements.
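    The compensation idea reduces to two divisions: the reference transducer's known path length calibrates the in-situ sonic velocity, which then converts the measurement-channel echo time into distance. A sketch with illustrative numbers, not the paper's data:

```python
# Sketch of reference-transducer sonic velocity compensation: a reference
# path of known length gives the current sound speed, which is then applied
# to the measurement channel. All numbers are illustrative.

def sound_speed_from_reference(d_ref_m, t_ref_s):
    """Pulse-echo: the wave travels to the reflector and back."""
    return 2.0 * d_ref_m / t_ref_s

def distance_from_echo(t_meas_s, v_m_per_s):
    return v_m_per_s * t_meas_s / 2.0

# Reference path: 10.000 mm, round-trip echo time 13.4228 us
v = sound_speed_from_reference(0.010, 13.4228e-6)   # ~1490 m/s in warm water
# Measurement channel: round-trip echo time 23.8926 us
d = distance_from_echo(23.8926e-6, v)
print(round(d * 1000, 3))   # distance in mm
```

Because the same medium carries both pulses, changes in temperature, pressure, or solution properties cancel out of the distance estimate.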

  7. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs, including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
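    The color distance being minimized can be illustrated with the simplest L*a*b* metric, the Euclidean (CIE76) delta-E; the two colors below are hypothetical, standing in for the same pixel seen through the left and right filters:

```python
import math

# The method above minimizes color distances in CIE L*a*b*; a minimal
# building block is the Euclidean (CIE76) distance between two L*a*b*
# colors, sketched here with illustrative values.

def delta_e76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

left_eye  = (53.2, 80.1, 67.2)   # roughly an sRGB red expressed in L*a*b*
right_eye = (53.2, 75.0, 60.0)   # hypothetical color through the other filter
print(delta_e76(left_eye, right_eye))
```

The paper's optimization would adjust the anaglyph pixel color so that such distances, summed over both eyes, are as small as possible.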

  8. Low dose out-of-field radiotherapy, part 2: Calculating the mean photon energy values for the out-of-field photon energy spectrum from scattered radiation using Monte Carlo methods.

    PubMed

    Skrobala, A; Adamczyk, S; Kruszyna-Mochalska, M; Skórska, M; Konefał, A; Suchorska, W; Zaleska, K; Kowalik, A; Jackowiak, W; Malicki, J

    2017-08-01

    During radiotherapy, leakage from the machine head and collimator exposes patients to out-of-field irradiation doses, which may cause secondary cancers. To quantify the risks of secondary cancers due to out-of-field doses, it is first necessary to measure these doses. Since most dosimeters are energy-dependent, it is essential to first determine the type of photon energy spectrum in the out-of-field area. The aim of this study was to determine the mean photon energy values for the out-of-field photon energy spectrum for a 6 MV photon beam using the GEANT4 Monte Carlo method. A specially-designed large water phantom was simulated with a static field at gantry 0°. The source-to-surface distance was 92 cm for an open field size of 10 × 10 cm². The photon energy spectra were calculated at depths of 0.5, 1.6, 4, 6, 8 and 10 cm along the central beam axis and at six different off-axis distances. Monte Carlo simulations showed that mean radiation energy levels drop rapidly beyond the edge of the 6 MV photon beam field: at a distance of 10 cm, the mean energy level is close to 0.3 MeV versus 1.5 MeV at the central beam axis. In some cases, the energy level actually increased even as the distance from the field edge increased: at a depth of 1.6 cm and 15 cm off-axis, the mean energy level was 0.205 MeV versus 0.252 MeV at 20 cm off-axis. The out-of-field energy spectra and dose distribution data obtained in this study with Monte Carlo methods can be used to calibrate dosimeters to measure out-of-field radiation from 6 MV photons. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  9. Genetic variability in Brazilian wheat cultivars assessed by microsatellite markers

    PubMed Central

    2009-01-01

    Wheat (Triticum aestivum) is one of the most important food staples in the south of Brazil. Understanding genetic variability among the assortment of Brazilian wheat is important for breeding. The aim of this work was to molecularly characterize the thirty-six wheat cultivars recommended for various regions of Brazil, and to assess mutual genetic distances, through the use of microsatellite markers. Twenty three polymorphic microsatellite markers (PMM) delineated all 36 of the samples, revealing a total of 74 simple sequence repeat (SSR) alleles, i.e. an average of 3.2 alleles per locus. Polymorphic information content (PIC value) calculated to assess the informativeness of each marker ranged from 0.20 to 0.79, with a mean of 0.49. Genetic distances among the 36 cultivars ranged from 0.10 (between cultivars Ocepar 18 and BRS 207) to 0.88 (between cultivars CD 101 and Fudancep 46), the mean distance being 0.48. Twelve groups were obtained by using the unweighted pair-group method with arithmetic means analysis (UPGMA), and thirteen through the Tocher method. Both methods produced similar clusters, with one to thirteen cultivars per group. The results indicate that these tools may be used to protect intellectual property and for breeding and selection programs. PMID:21637519

  10. Biophysical influence of coumarin 35 on bovine serum albumin: Spectroscopic study

    NASA Astrophysics Data System (ADS)

    Bayraktutan, Tuğba; Onganer, Yavuz

    2017-01-01

    The binding mechanism and protein-fluorescence probe interactions between bovine serum albumin (BSA) and coumarin 35 (C35) were investigated using UV-Vis absorption and fluorescence spectroscopies, since they remain major research topics in biophysics. The spectroscopic data indicated that fluorescence quenching occurred in the BSA-C35 system. The fluorescence quenching processes were analyzed using the Stern-Volmer method; Stern-Volmer quenching constants (KSV) and binding constants were calculated at different temperatures. The distance r between BSA (donor) and C35 (acceptor) was determined by exploiting the fluorescence resonance energy transfer (FRET) method. Synchronous fluorescence spectra were also studied to obtain information about conformational changes. Moreover, thermodynamic parameters were calculated for a better understanding of the interactions and conformational changes of the system.
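    The FRET distance r follows from the transfer efficiency E and the Förster radius R0 via E = R0^6 / (R0^6 + r^6). A minimal sketch with illustrative values, not the BSA-C35 measurements:

```python
# Sketch of the FRET donor-acceptor distance estimate mentioned above:
# inverting E = R0**6 / (R0**6 + r**6) gives r. Values are illustrative,
# not from the study.

def fret_distance(efficiency, r0_nm):
    """Donor-acceptor distance from transfer efficiency and Förster radius."""
    return r0_nm * ((1.0 - efficiency) / efficiency) ** (1.0 / 6.0)

# E = 0.5 places the pair exactly at the Förster radius.
print(fret_distance(0.5, 2.8))   # → 2.8
```

Higher efficiencies correspond to shorter distances, with most sensitivity near r ≈ R0.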

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedorovich, S V; Protsenko, I E

    We report the results of numerical modelling of emission of a two-level atom near a metal nanoparticle under resonant interaction of light with plasmon modes of the particle. Calculations have been performed for different polarisations of light by a dipole approximation method and a complex multipole method. Depending on the distance between a particle and an atom, the contribution of the nonradiative process of electron tunnelling from a two-level atom into a particle, which is calculated using the quasi-classical approximation, has been taken into account and assessed. We have studied spherical gold and silver particles of different diameters (10–100 nm). The rates of electron tunnelling and of spontaneous decay of the excited atomic state are found. The results can be used to develop nanoscale plasmonic emitters, lasers and photodetectors. (nanooptics)

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesi, Andrew; Lee, Byeongdu

    Herein, a general method to calculate the scattering functions of polyhedra, including both regular and semi-regular polyhedra, is presented. These calculations may be achieved by breaking a polyhedron into sets of congruent pieces, thereby reducing computation time by taking advantage of Fourier transforms and inversion symmetry. Each piece belonging to a set or subunit can be generated by either rotation or translation. Further, general strategies to compute truncated, concave and stellated polyhedra are provided. Using this method, the asymptotic behaviors of the polyhedral scattering functions are compared with that of a sphere. It is shown that, for a regular polyhedron, the form factor oscillation at high q is correlated with the face-to-face distance. In addition, polydispersity affects the Porod constant. The ideas presented herein will be important for the characterization of nanomaterials using small-angle scattering.

  13. Non-contact measurement of pulse wave velocity using RGB cameras

    NASA Astrophysics Data System (ADS)

    Nakano, Kazuya; Aoki, Yuta; Satoh, Ryota; Hoshi, Akira; Suzuki, Hiroyuki; Nishidate, Izumi

    2016-03-01

    Non-contact measurement of pulse wave velocity (PWV) using red, green, and blue (RGB) digital color images is proposed. Generally, PWV is used as an index of arteriosclerosis. In our method, changes in blood volume are calculated from changes in the color information and are estimated by combining multiple regression analysis (MRA) with a Monte Carlo simulation (MCS) model of light transport in human skin. Pulse waves were measured at two skin sites using RGB cameras, and the PWV was calculated from the difference in pulse transit time and the distance between the two measurement points. The measured forehead-finger PWV (ffPWV) was on the order of m/s and became faster as the values of vital signs rose. These results demonstrate the feasibility of the method.
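    The final step is a single ratio: PWV is the path length between the two measurement sites divided by the pulse transit time between them. A sketch with illustrative numbers:

```python
# PWV as described above: distance between the two measurement sites divided
# by the pulse transit time (difference of the arrival times at the two
# sites). Numbers are illustrative, not study data.

def pulse_wave_velocity(distance_m, t_arrival_site1_s, t_arrival_site2_s):
    return distance_m / (t_arrival_site2_s - t_arrival_site1_s)

# Forehead-to-finger path ~0.8 m; the pulse reaches the forehead at t=0.02 s
# and the finger at t=0.18 s in the synchronized camera streams.
print(round(pulse_wave_velocity(0.8, 0.02, 0.18), 2))   # → 5.0 (m/s)
```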

  14. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices.

    PubMed

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-05-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a large dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a large amount of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
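    The sampling idea can be sketched in a few lines: entries in the low-dose tail are stored only with probability p and divided by p when kept, so the expected stored energy equals the full sum. The threshold, probability, and pencil-beam profile below are illustrative assumptions, not the paper's tested distributions:

```python
import random

# Sketch of the matrix sparsification idea above: low-dose pencil-beam
# entries are kept only with probability p, and kept entries are divided by
# p so the expected total energy is preserved. Values are illustrative.

def sample_dose_row(row, threshold=0.05, p=0.2, rng=random):
    """Return a sparse dict {voxel_index: stored_value} for one beam element."""
    sparse = {}
    for j, dose in enumerate(row):
        if dose >= threshold:
            sparse[j] = dose                 # high-dose core: always stored
        elif rng.random() < p:
            sparse[j] = dose / p             # low-dose tail: stored rarely,
                                             # reweighted to stay unbiased
    return sparse

rng = random.Random(0)
row = [0.5, 0.2, 0.08, 0.04, 0.02, 0.01, 0.005, 0.002]   # radial falloff
sparse = sample_dose_row(row, rng=rng)
print(len(sparse), "of", len(row), "entries kept")
print(round(sum(sparse.values()), 3), "vs full sum", sum(row))
```

Any single sampled row over- or under-shoots the true sum, but averaged over many rows (or many optimization iterations) the stored energy is unbiased, which is why plan quality survives the 3x reduction reported above.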

  15. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining position of a vehicle within 100 km autonomously from magnetic field measurements and attitude data without a priori knowledge of position. An inverted dipole solution of two possible position solutions for each measurement of magnetic field data are deterministically calculated by a program controlled processor solving the inverted first order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as a successive substitutions and a Newton-Raphson method are applied to each dipole. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.

  16. Research on response spectrum of dam based on scenario earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoliang; Zhang, Yushan

    2017-10-01

    Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. Firstly, the potential seismic source zone of greatest contribution to the site is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Secondly, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum obtained with the scenario-earthquake method is lower than the probability-consistent response spectrum obtained with the PSHA method. The analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easy for practitioners to accept and provides a basis for the seismic engineering of hydraulic projects.

  17. Human Migration Patterns in Yemen and Implications for Reconstructing Prehistoric Population Movements

    PubMed Central

    Miró-Herrans, Aida T.; Al-Meeri, Ali; Mulligan, Connie J.

    2014-01-01

    Population migration has played an important role in human evolutionary history and in the patterning of human genetic variation. A deeper and empirically-based understanding of human migration dynamics is needed in order to interpret genetic and archaeological evidence and to accurately reconstruct the prehistoric processes that comprise human evolutionary history. Current empirical estimates of migration include either short time frames (i.e. within one generation) or partial knowledge about migration, such as proportion of migrants or distance of migration. An analysis of migration that includes proportion of migrants, distance, and direction over multiple generations would better inform prehistoric reconstructions. To evaluate human migration, we use GPS coordinates from the place of residence of the Yemeni individuals sampled in our study, their birthplaces and their parents' and grandparents' birthplaces to calculate the proportion of migrants, as well as the distance and direction of migration events between each generation. We test for differences in these values between the generations and identify factors that influence the probability of migration. Our results show that the proportion and distance of migration between females and males are similar within generations. In contrast, the proportion and distance of migration are significantly lower in the grandparents' generation, most likely reflecting the decreasing effect of technology. Based on our results, we calculate the proportion of migration events (0.102) and mean and median distances of migration (96 km and 26 km) for the grandparents' generation to represent early times in human evolution. These estimates can serve to set parameter values of demographic models in model-based methods of prehistoric reconstruction, such as approximate Bayesian computation. 
Our study provides the first empirically-based estimates of human migration over multiple generations in a developing country and these estimates are intended to enable more precise reconstruction of the demographic processes that characterized human evolution. PMID:24759992
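    Migration distances in such a study come from great-circle geometry on birthplace coordinates; a standard haversine sketch (the coordinates below are illustrative points in Yemen, not study data):

```python
import math

# Great-circle distance between two GPS coordinates (haversine formula),
# the standard way to turn birthplace coordinates into migration distances.
# The example coordinates are illustrative, not study data.

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Roughly Sana'a to Aden
print(round(haversine_km(15.35, 44.21, 12.79, 45.03), 1))   # km
```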

  18. Enhancing multi-view autostereoscopic displays by viewing distance control (VDC)

    NASA Astrophysics Data System (ADS)

    Jurk, Silvio; Duckstein, Bernd; Renault, Sylvain; Kuhlmey, Mathias; de la Barré, René; Ebner, Thomas

    2014-03-01

    Conventional multi-view displays spatially interlace various views of a 3D scene and form appropriate viewing channels. However, they only support sufficient stereo quality within a limited range around the nominal viewing distance (NVD). If this distance is maintained, two slightly divergent views are projected to the person's eyes, both covering the entire screen. With increasing deviations from the NVD the stereo image quality decreases. As a major drawback in usability, the manufacturer so far assigns this distance. We propose a software-based solution that corrects false view assignments depending on the distance of the viewer. Our novel approach enables continuous view adaptation based on the calculation of intermediate views and a column-by-column rendering method. The algorithm controls each individual subpixel and generates a new interleaving pattern from selected views. In addition, we use color-coded test content to verify its efficacy. This novel technology helps shift the physically determined NVD to a user-defined distance, thereby supporting stereopsis. The new viewing positions can fall in front of or behind the NVD of the original setup. Our algorithm can be applied to all multi-view autostereoscopic displays — independent of the ascent or the periodicity of the optical element. In general, the viewing distance can be corrected by a factor of more than 2.5. By creating a continuous viewing area the visualized 3D content is suitable even for persons with largely divergent interocular distance — adults and children alike — without any deficiency in spatial perception.

  19. Hyperspectral Image Denoising Using a Nonlocal Spectral Spatial Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Li, D.; Xu, L.; Peng, J.; Ma, J.

    2018-04-01

    Hyperspectral image (HSI) denoising is a critical research area in image processing due to its importance in improving the quality of HSIs; noise has a negative impact on object detection, classification and other tasks. In this paper, we develop a noise reduction method based on principal component analysis (PCA) for hyperspectral imagery, which relies on the assumption that the noise can be removed by selecting the leading principal components. The main contribution of this paper is to introduce the spectral-spatial structure and nonlocal similarity of HSIs into the PCA denoising model. PCA with spectral-spatial structure can exploit the spectral and spatial correlation of the HSI by using 3D blocks instead of 2D patches. Nonlocal similarity refers to the similarity between the reference pixel and other pixels in a nonlocal area, where the Mahalanobis distance is used to estimate the spatial-spectral similarity by calculating distances between 3D blocks. The proposed method is tested on both simulated and real hyperspectral images; the results demonstrate that the proposed method is superior to several other popular methods in HSI denoising.

  20. On the safety assessment of human exposure in the proximity of cellular communications base-station antennas at 900, 1800 and 2170 MHz.

    PubMed

    Martínez-Búrdalo, M; Martín, A; Anguiano, M; Villar, R

    2005-09-07

    In this work, the procedures for safety assessment in the close proximity of cellular communications base-station antennas at three different frequencies (900, 1800 and 2170 MHz) are analysed. For each operating frequency, we have obtained and compared the distances to the antenna from the exposure places where electromagnetic fields are below reference levels and the distances where the specific absorption rate (SAR) values in an exposed person are below the basic restrictions, according to the European safety guidelines. A high-resolution human body model was placed in front of each base-station antenna at different distances, as a worst case, to compute the whole-body averaged SAR and the maximum 10 g averaged SAR inside the exposed body. The finite-difference time-domain method has been used for both electromagnetic field and SAR calculations. This paper shows that, for antenna-body distances in the near zone of the antenna, averaged field values below the reference levels may, at certain frequencies, fail to guarantee compliance with the basic restrictions.

  1. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings rooted in an optimal feature space is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new information extraction method greatly improves the extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be used for the information extraction of different-resolution images of damaged buildings after an earthquake, and then to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  2. Introducing Hurst exponent in pair trading

    NASA Astrophysics Data System (ADS)

    Ramos-Requena, J. P.; Trinidad-Segovia, J. E.; Sánchez-Granero, M. A.

    2017-12-01

    In this paper we introduce a new methodology for pair trading. This new method is based on the calculation of the Hurst exponent of a pair. Our approach is inspired by the classical concepts of co-integration and mean reversion but joins them under a unique strategy. We show that the Hurst approach yields better results than the classical Distance Method and Correlation strategies in different scenarios. The results show that the new methodology is consistent and suitable, reducing trading drawdown relative to the classical methods and thereby achieving better performance.
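    A common way to estimate the Hurst exponent of a series is rescaled-range (R/S) analysis; the sketch below is one simple variant applied to synthetic noise, not necessarily the estimator used by the authors:

```python
import math
import random

# Sketch of a rescaled-range (R/S) Hurst estimate: split the series into
# windows of several sizes, compute the average R/S per size, and fit
# log(R/S) ~ H * log(n). Purely illustrative implementation.

def rescaled_range(xs):
    m = sum(xs) / len(xs)
    dev, r_min, r_max = 0.0, 0.0, 0.0
    for x in xs:
        dev += x - m                      # cumulative deviation from the mean
        r_min, r_max = min(r_min, dev), max(r_max, dev)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return (r_max - r_min) / s

def hurst(series, sizes=(8, 16, 32, 64)):
    pts = []
    for n in sizes:
        rs = [rescaled_range(series[i:i + n])
              for i in range(0, len(series) - n + 1, n)]
        pts.append((math.log(n), math.log(sum(rs) / len(rs))))
    # least-squares slope of log(R/S) against log(n)
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

rng = random.Random(42)
white_noise = [rng.gauss(0, 1) for _ in range(1024)]
print(round(hurst(white_noise), 2))   # near 0.5 for an uncorrelated series
```

In the pair-trading setting, the series fed to the estimator would be the spread of the pair: an exponent well below 0.5 signals mean reversion and hence a tradable pair.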

  3. Analytical RISM-MP2 free energy gradient method: Application to the Schlenk equilibrium of Grignard reagent

    NASA Astrophysics Data System (ADS)

    Mori, Toshifumi; Kato, Shigeki

    2007-03-01

    We present a method to evaluate the analytical gradient of the reference interaction site model (RISM) second-order Møller-Plesset (MP2) free energy with respect to solute nuclear coordinates. It is applied to calculate the geometries and energies in the equilibria of the Grignard reagent (CH3MgCl) in dimethyl ether solvent. The Mg-Mg and Mg-Cl distances as well as the solvent binding energies are largely affected by the dynamical electron correlation. The solvent effect on the Schlenk equilibrium is examined.

  4. Gaussian-Beam/Physical-Optics Design Of Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.

    1993-01-01

    In iterative method of designing wideband beam-waveguide feed for paraboloidal-reflector antenna, Gaussian-beam approximation alternated with more nearly exact physical-optics analysis of diffraction. Includes curved and straight reflectors guiding radiation from feed horn to subreflector. For iterative design calculations, curved mirrors mathematically modeled as thin lenses. Each distance Li is combined length of two straight-line segments intersecting at one of flat mirrors. Method useful for designing beam-waveguide reflectors or mirrors required to have diameters approximately less than 30 wavelengths at one or more intended operating frequencies.

  5. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of spatial structures and their low- to higher-order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of different CTIs often vary, so their compatibility with the conditioning data also differs. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties is established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation is attached to the paper.
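
    The paper's MDevD workflow is more elaborate, but its core quantity, the minimum distance between one conditioning data event and all same-shape events in a TI, can be sketched for a 2-D categorical image. The mismatch-fraction distance and the restriction to non-negative offsets are simplifying assumptions of this sketch, not the paper's exact definition:

```python
import numpy as np

def min_data_event_distance(ti, offsets, values):
    """Scan a 2-D categorical training image for all events with the same
    geometry (relative offsets) as a conditioning data event, and return
    the minimum distance, here the fraction of mismatched nodes."""
    rows, cols = ti.shape
    max_dr = max(dr for dr, _ in offsets)
    max_dc = max(dc for _, dc in offsets)
    best = 1.0
    for r in range(rows - max_dr):
        for c in range(cols - max_dc):
            event = [ti[r + dr, c + dc] for dr, dc in offsets]
            dist = np.mean([e != v for e, v in zip(event, values)])
            best = min(best, dist)
            if best == 0.0:          # perfect replicate found, stop early
                return 0.0
    return float(best)
```

    Ranking CTIs by the mean and variance of this quantity over all conditioning data events reproduces the evaluation idea described above.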

  6. Spin-dependent evolution of collectivity in 112Te

    NASA Astrophysics Data System (ADS)

    Doncel, M.; Bäck, T.; Qi, C.; Cullen, D. M.; Hodge, D.; Cederwall, B.; Taylor, M. J.; Procter, M.; Giles, M.; Auranen, K.; Grahn, T.; Greenlees, P. T.; Jakobsson, U.; Julin, R.; Juutinen, S.; Herzáň, A.; Konki, J.; Pakarinen, J.; Partanen, J.; Peura, P.; Rahkila, P.; Ruotsalainen, P.; Sandzelius, M.; Sarén, J.; Scholey, C.; Sorri, J.; Stolze, S.; Uusitalo, J.

    2017-11-01

    The evolution of collectivity with spin along the yrast line in the neutron-deficient nucleus 112Te has been studied by measuring the reduced transition probability of excited states in the yrast band. In particular, the lifetimes of the 4+ and 6+ excited states have been determined by using the recoil distance Doppler-shift method. The results are discussed using both large-scale shell-model and total Routhian surface calculations.

  7. Visible Signatures of Hypersonic Reentry

    DTIC Science & Technology

    2013-02-01

    cases, these viewing zones extend a significant distance from the impact location and/or include the impact location for a potentially significant...period of time before impact . Nomenclature V = reentry body velocity [m/s] ρ = ambient air density [kg/m3] ρ0 = sea-level air density [kg/m3] φ...time from first noticeability to impact . IV. Conclusion For a given reentry body, methods in this paper allow calculation of noticeability and

  8. Grain size effect on Lcr elastic wave for surface stress measurement of carbon steel

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Miao, Wenbing; Dong, Shiyun; He, Peng

    2018-04-01

    Based on critical refraction longitudinal wave (Lcr wave) acoustoelastic theory, a correction method for the effect of grain size on surface stress measurement is discussed in this paper. Two Lcr-wave transducers at a fixed distance were used to collect Lcr waves, the difference in time of flight between the Lcr waves was calculated with a cross-correlation function, and the relationship between the Lcr-wave acoustoelastic coefficient and grain size was obtained. Results show that as grain size increases, the propagation velocity of the Lcr wave decreases, and that one cycle is the optimal step length for calculating the difference in time of flight. When the stress is below the stress turning point, the relationship between the time-of-flight difference and stress is basically consistent with Lcr-wave acoustoelastic theory; above it, a deviation appears and grows gradually as the stress increases. Inhomogeneous elastic-plastic deformation caused by the inhomogeneous microstructure, and the fact that the Lcr wave measures the average surface stress over a fixed distance, are considered the two main reasons for these results. As grain size increases, the Lcr-wave acoustoelastic coefficient decreases as a power function, and a correction method for the grain-size effect on surface stress measurement was accordingly proposed. Finally, the theoretical discussion was verified by fracture morphology observation.
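
    The cross-correlation step for the time-of-flight difference can be illustrated as follows (a sample-resolution sketch; the study's exact windowing and any sub-sample interpolation are not specified in the abstract):

```python
import numpy as np

def tof_difference(sig_a, sig_b, fs):
    """Estimate the time-of-flight difference between two waveforms by
    locating the peak of their full cross-correlation.  Resolution is
    one sample period (1/fs)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # lags run from -(N-1) to N-1
    return lag / fs
```

    A positive result means `sig_a` arrives later than `sig_b`; stress is then inferred from this delay via the acoustoelastic coefficient.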

  9. Direct Prediction of Cricondentherm and Cricondenbar Coordinates of Natural Gas Mixtures using Cubic Equation of State

    NASA Astrophysics Data System (ADS)

    Taraf, R.; Behbahani, R.; Moshfeghian, Mahmood

    2008-12-01

    A numerical algorithm is presented for direct calculation of the cricondenbar and cricondentherm coordinates of natural gas mixtures of known composition, based on the Michelsen method. In the course of determining these coordinates, the equilibrium mole fractions at these points are also calculated. In this algorithm, the distance from the free energy surfaces to a tangent plane at equilibrium is added to the saturation calculation as an additional criterion. An equation of state (EoS) is needed to calculate all required properties; therefore, the algorithm was tested with the Soave-Redlich-Kwong (SRK), Peng-Robinson (PR), and modified Nasrifar-Moshfeghian (MNM) equations of state. For the different EoSs, the impact of the binary interaction coefficient (k_ij) was studied, as was the impact of the initial guesses for temperature and pressure. The convergence speed and the accuracy of the results of this new algorithm were compared with experimental data and with the results obtained from other methods and simulation software such as Hysys, Aspen Plus, and EzThermo.

  10. Estimation of Infiltration Parameters and the Irrigation Coefficients with the Surface Irrigation Advance Distance

    PubMed Central

    Beibei, Zhou; Quanjiu, Wang; Shuai, Tan

    2014-01-01

    A theory based on the Manning roughness equation, the Philip equation and the water balance equation was developed that employs only the advance distance to calculate the infiltration parameters and irrigation coefficients in both border irrigation and surge irrigation. The improved procedure was validated with border irrigation and surge irrigation experiments. The main results are as follows. Compared with the experimental data, the infiltration parameters of the Philip equation could be calculated accurately using only the water advance distance during irrigation. With the calculated parameters and the water balance equation, the irrigation coefficients were also estimated. The water advance velocity should be measured about 0.5 m to 1.0 m from the advancing water front in the experimental corn fields. PMID:25061664
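
    For reference, the Philip equation referred to above is I(t) = S·sqrt(t) + A·t, with sorptivity S and steady coefficient A. The study infers these parameters from advance distance alone; a simpler illustrative fit, assuming direct cumulative-infiltration observations rather than the paper's advance-distance route, is:

```python
import numpy as np

def fit_philip(t, infiltration):
    """Least-squares fit of the Philip equation I(t) = S*sqrt(t) + A*t,
    returning the sorptivity S and the steady coefficient A."""
    t = np.asarray(t, dtype=float)
    infiltration = np.asarray(infiltration, dtype=float)
    design = np.column_stack([np.sqrt(t), t])   # basis functions sqrt(t) and t
    (S, A), *_ = np.linalg.lstsq(design, infiltration, rcond=None)
    return S, A
```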

  11. Proof-of-concept of a laser mounted endoscope for touch-less navigated procedures

    PubMed Central

    Kral, Florian; Gueler, Oezguer; Perwoeg, Martina; Bardosi, Zoltan; Puschban, Elisabeth J; Riechelmann, Herbert; Freysinger, Wolfgang

    2013-01-01

    Background and Objectives: During navigated procedures a tracked pointing device is used to define target structures in the patient and to visualize their position in a registered radiologic data set. When working with endoscopes in minimally invasive procedures, the target region is often difficult to reach, and changing instruments is disruptive at a challenging, crucial moment of the procedure. We developed a device for touch-less navigation during navigated endoscopic procedures. Materials and Methods: A laser beam is delivered to the tip of a tracked endoscope, angled to its axis. The position of the laser spot in the video-endoscopic images therefore changes according to the distance between the tip of the endoscope and the target structure. A mathematical function, defined by a calibration process, is used to calculate the distance between the tip of the endoscope and the target. The tracked tip of the endoscope and the calculated distance are used to visualize the laser spot in the registered radiologic data set. Results: In comparison to the tracked instrument, touch-less target definition with the laser spot added an error of 0.12 mm. The overall application error in this experimental setup with a plastic head was 0.61 ± 0.97 mm (95% CI −1.3 to +2.5 mm). Conclusion: Integrating a laser in an endoscope and calculating the distance to a target structure by image processing of the video-endoscopic images is accurate. This technology eliminates the need for tracked probes intraoperatively and therefore allows navigation to be integrated seamlessly into clinical routine. However, it is an additional link in the chain of computer-assisted surgery and thus influences the application error. Lasers Surg. Med. 45:377–382, 2013. © 2013 Wiley Periodicals, Inc. PMID:23737122

  12. Improving the treatment of coarse-grain electrostatics: CVCEL.

    PubMed

    Ceres, N; Lavery, R

    2015-12-28

    We propose an analytic approach for calculating the electrostatic energy of proteins or protein complexes in aqueous solution. This method, termed CVCEL (Circular Variance Continuum ELectrostatics), is fitted to Poisson calculations and is able to reproduce the corresponding energies for different choices of solute dielectric constant. CVCEL thus treats both solute charge interactions and charge self-energies, and it can also deal with salt solutions. Electrostatic damping notably depends on the degree of solvent exposure of the charges, quantified here in terms of circular variance, a measure that reflects the vectorial distribution of the neighbors around a given center. CVCEL energies can be calculated rapidly and have simple analytical derivatives. This approach avoids the need for calculating effective atomic volumes or Born radii. After describing how the method was developed, we present test results for coarse-grain proteins of different shapes and sizes, using different internal dielectric constants and different salt concentrations and also compare the results with those from simple distance-dependent models. We also show that the CVCEL approach can be used successfully to calculate the changes in electrostatic energy associated with changes in protein conformation or with protein-protein binding.

  13. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
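
    The GOF idea can be sketched as follows: if the covariance is properly sized, the squared Mahalanobis distances of 3-D state errors follow a chi-squared distribution with 3 degrees of freedom. A Kolmogorov-Smirnov test stands in here for the paper's ECDF-based test, which the abstract does not specify in detail:

```python
import numpy as np
from scipy import stats

def covariance_realism_pvalue(errors, cov):
    """Goodness-of-fit p-value for covariance realism: squared
    Mahalanobis distances of (n, 3) position errors are compared
    against the chi-squared distribution with 3 DoF."""
    inv = np.linalg.inv(cov)
    d2 = np.einsum("ni,ij,nj->n", errors, inv, errors)  # squared Mahalanobis distances
    return stats.kstest(d2, stats.chi2(df=3).cdf).pvalue
```

    A low p-value signals a mis-sized covariance; the paper's compensation loop adds process noise until the test passes.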

  14. The use of chemometrics to study multifunctional indole alkaloids from Psychotria nemorosa (Palicourea comb. nov.). Part I: Extraction and fractionation optimization based on metabolic profiling.

    PubMed

    Klein-Júnior, Luiz C; Viaene, Johan; Salton, Juliana; Koetz, Mariana; Gasper, André L; Henriques, Amélia T; Vander Heyden, Yvan

    2016-09-09

    Evaluation of extraction methods for accessing a plant metabolome is usually performed visually, lacking a reliable method of data handling. The main aim of the present study was to develop reliable, time- and solvent-saving extraction and fractionation methods to access the alkaloid profile of Psychotria nemorosa leaves. Ultrasound-assisted extraction was selected as the extraction method. In a Fractional Factorial Design (FFD) screening, yield, sum of peak areas, and peak numbers proved rather meaningless responses. However, Euclidean distance calculations between the UPLC-DAD metabolic profiles and the blank injection showed that the extracts are highly diverse. Coupled with the calculation and plotting of effects per time point, thermolabile peaks could be indicated. After screening, time and temperature were selected for optimization, while the plant:solvent ratio was set at 1:50 (m/v), the number of extractions at one, and the particle size at ≤180 μm. From Central Composite Design (CCD) results, modeling the heights of important peaks previously indicated by the FFD metabolic profile analysis, the time was set at 65 min and the temperature at 45 °C, thus avoiding degradation. For the fractionation step, a solid-phase extraction method was optimized by a Box-Behnken Design (BBD) approach using the sum of peak areas as the response. The sample concentration was consequently set at 150 mg/mL, acetonitrile in dichloromethane at 40% as the eluting solvent, and the eluting volume at 30 mL. In summary, the Euclidean distance and the metabolite profiles provided significant responses for assessing P. nemorosa alkaloids, allowing the development of reliable extraction and fractionation methods while avoiding degradation and decreasing the required time and solvent volume. Copyright © 2016 Elsevier B.V. All rights reserved.
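
    The Euclidean-distance response used in the screening can be sketched as follows. The profile vectors below are hypothetical stand-ins; real UPLC-DAD profiles are long, retention-time-aligned intensity traces:

```python
import numpy as np

def rank_extracts_by_diversity(profiles, blank):
    """Rank extraction conditions by the Euclidean distance of each
    chromatographic profile from the blank injection; a larger distance
    indicates a richer, more diverse extract."""
    distances = {name: float(np.linalg.norm(np.asarray(p) - np.asarray(blank)))
                 for name, p in profiles.items()}
    order = sorted(distances, key=distances.get, reverse=True)
    return order, distances
```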

  15. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly (by a factor of >10) depending on the chosen interpolation method. Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
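
    Of the interpolators compared, inverse distance weighting is the simplest to sketch. This is a minimal version for a single query point; production GIS implementations add search radii, maximum neighbor counts, and anisotropy:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: a weighted mean of known groundwater
    levels with weights 1/d**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    d = np.linalg.norm(xy_known - np.asarray(xy_query, dtype=float), axis=1)
    if np.any(d == 0.0):                 # query coincides with a measured well
        return float(z_known[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * z_known) / np.sum(w))
```

    Because IDW never extrapolates beyond the range of the data and ignores spatial correlation structure, head surfaces (and hence inter-aquifer gradients) from IDW can differ markedly from Kriging results, which is exactly the sensitivity the study quantifies.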

  16. High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring

    NASA Astrophysics Data System (ADS)

    Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.

    2018-04-01

    We present an astrometric method for the calculation of the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈25 m at the mean distance of the GEO ring, and are thus good quality results.

  17. Advertisement call and genetic structure conservatism: good news for an endangered Neotropical frog

    PubMed Central

    Costa, William P.; Martins, Lucas B.; Nunes-de-Almeida, Carlos H. L.; Toledo, Luís Felipe

    2016-01-01

    Background: Many amphibian species are negatively affected by habitat change due to anthropogenic activities. Populations distributed over modified landscapes may be subject to local extinction or may be relegated to the remaining—likely isolated and possibly degraded—patches of available habitat. Isolation without gene flow could lead to variability in phenotypic traits owing to differences in local selective pressures such as environmental structure, microclimate, or site-specific species assemblages. Methods: Here, we tested the microevolution hypothesis by evaluating the acoustic parameters of 349 advertisement calls from 15 males from six populations of the endangered amphibian species Proceratophrys moratoi. In addition, we analyzed the genetic distances among populations and the genetic diversity with a haplotype network analysis. We performed cluster analysis on acoustic data based on the Bray-Curtis index of similarity, using the UPGMA method. We correlated acoustic dissimilarities (calculated by Euclidean distance) with geographical and genetic distances among populations. Results: Spectral traits of the advertisement call of P. moratoi presented lower coefficients of variation than did temporal traits, both within and among males. Cluster analyses placed individuals without congruence in population or geographical distance, but recovered the species topology in relation to sister species. The genetic distance among populations was low; it did not exceed 0.4% for the most distant populations, and was not correlated with acoustic distance. Discussion: Both acoustic features and genetic sequences are highly conserved, suggesting that populations could be connected by recent migrations, and that they are subject to stabilizing selective forces. 
Although further studies are required, these findings add to a growing body of literature suggesting that this species would be a good candidate for a reintroduction program without negative effects on communication or genetic impact. PMID:27190717

  18. Optical Coherence Tomography Scan Circle Location and Mean Retinal Nerve Fiber Layer Measurement Variability

    PubMed Central

    Gabriele, Michelle L.; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Townsend, Kelly A.; Kagemann, Larry; Wojtkowski, Maciej; Srinivasan, Vivek J.; Fujimoto, James G.; Duker, Jay S.; Schuman, Joel S.

    2009-01-01

    PURPOSE To investigate the effect on optical coherence tomography (OCT) retinal nerve fiber layer (RNFL) thickness measurements of varying the standard 3.4-mm-diameter circle location. METHODS The optic nerve head (ONH) region of 17 eyes of 17 healthy subjects was imaged with high-speed, ultrahigh-resolution OCT (hsUHR-OCT; 501 × 180 axial scans covering a 6 × 6-mm area; scan time, 3.84 seconds) for a comprehensive sampling. This method allows for systematic simulation of the variable circle placement effect. RNFL thickness was measured on this three-dimensional dataset by using a custom-designed software program. RNFL thickness was resampled along a 3.4-mm-diameter circle centered on the ONH, then along 3.4-mm circles shifted horizontally (x-shift), vertically (y-shift) and diagonally up to ±500 µm (at 100-µm intervals). Linear mixed-effects models were used to determine RNFL thickness as a function of the scan circle shift. A model for the distance between the two thickest measurements along the RNFL thickness circular profile (peak distance) was also calculated. RESULTS RNFL thickness tended to decrease with both positive and negative x- and y-shifts. The range of shifts that caused a decrease greater than the variability inherent to the commercial device was greater in both nasal and temporal quadrants than in the superior and inferior ones. The model for peak distance demonstrated that as the scan moves nasally, the RNFL peak distance increases, and as the circle moves temporally, the distance decreases. Vertical shifts had a minimal effect on peak distance. CONCLUSIONS The location of the OCT scan circle affects RNFL thickness measurements. Accurate registration of OCT scans is essential for measurement reproducibility and longitudinal examination (ClinicalTrials.gov number, NCT00286637). PMID:18515577

  19. Effect of soldering techniques and gap distance on tensile strength of soldered Ni-Cr alloy joint.

    PubMed

    Lee, Sang-Yeob; Lee, Jong-Hyuk

    2010-12-01

    The present study was intended to evaluate the effect of soldering techniques with infrared ray and gas torch under different gap distances (0.3 mm and 0.5 mm) on the tensile strength and surface porosity formation in Ni-Cr base metal alloy. Thirty five dumbbell shaped Ni-Cr alloy specimens were prepared and assigned to 5 groups according to the soldering method and the gap distance. For the soldering methods, gas torch (G group) and infrared ray (IR group) were compared and each group was subdivided by corresponding gap distance (0.3 mm: G3 and IR3, 0.5 mm: G5, IR5). Specimens of the experimental groups were sectioned in the middle with a diamond disk and embedded in solder blocks according to the predetermined distance. As a control group, 7 specimens were prepared without sectioning or soldering. After the soldering procedure, a tensile strength test was performed using universal testing machine at a crosshead speed 1 mm/min. The proportions of porosity on the fractured surface were calculated on the images acquired through the scanning electronic microscope. Every specimen of G3, G5, IR3 and IR5 was fractured on the solder joint area. However, there was no significant difference between the test groups (P > .05). There was a negative correlation between porosity formation and tensile strength in all the specimens in the test groups (P < .05). There was no significant difference in ultimate tensile strength of joints and porosity formations between the gas-oxygen torch soldering and infrared ray soldering technique or between the gap distance of 0.3 mm and 0.5 mm.

  20. QCD phenomenology of static sources and gluonic excitations at short distances

    NASA Astrophysics Data System (ADS)

    Bali, Gunnar S.; Pineda, Antonio

    2004-05-01

    New lattice data for the Π_u and Σ_u^- potentials at short distances are presented. We compare perturbation theory to the lower static hybrid potentials and find good agreement at short distances, once the renormalon ambiguities are accounted for. We use the nonperturbatively determined continuum-limit static hybrid and ground state potentials at short distances to determine the gluelump energies. The result is consistent with an estimate obtained from the gluelump data at finite lattice spacings. For the lightest gluelump, we obtain Λ_B^RS(ν_f = 2.5 r_0^-1) = [2.25 ± 0.10(latt.) ± 0.21(th.) ± 0.08(Λ_MSbar)] r_0^-1 in the quenched approximation, with r_0^-1 ≈ 400 MeV. We show that, to quote sensible numbers for the absolute values of the gluelump energies, it is necessary to handle the singularities of the singlet and octet potentials in the Borel plane. We propose to subtract the renormalons of the short-distance matching coefficients, the potentials in this case. For the singlet potential the leading renormalon is already known and related to that of the pole mass; for the octet potential a new renormalon appears, which we approximately evaluate. We also apply our methods to heavy-light mesons in the static limit, and from the lattice simulations available in the literature we obtain the quenched result Λbar_RS(ν_f = 2.5 r_0^-1) = [1.17 ± 0.08(latt.) ± 0.13(th.) ± 0.09(Λ_MSbar)] r_0^-1. We calculate m_b,MSbar(m_b,MSbar) and apply our methods to gluinonia, whose dynamics are governed by the singlet potential between adjoint sources. We can exclude nonstandard linear short-distance contributions to the static potentials, with good accuracy.

  1. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    PubMed

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including the vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead.
This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
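
    Of the three models, the Kutta-Joukowski estimate is the simplest to sketch; the lift depends directly on the assumed vortex span, which is one of the parameter sensitivities the study reports. The numbers in the test are illustrative, not the parrotlet data:

```python
def kutta_joukowski_lift(rho, v, gamma, span):
    """Quasi-steady lift from the Kutta-Joukowski theorem,
    L = rho * V * Gamma * b, where Gamma is the measured circulation
    and b the (model-dependent) vortex span."""
    return rho * v * gamma * span

def weight_support(lift, mass, g=9.81):
    """Fraction of body weight supported by the computed lift."""
    return lift / (mass * g)
```

    Because the circulation Gamma is extracted from the wake, any error in the assumed span or in the bird-to-laser-sheet distance propagates linearly into the weight-support estimate.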

  2. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  3. The use of trigonometry in bloodstain analysis.

    PubMed

    Makovický, Peter; Horáková, Petra; Slavík, Petr; Mošna, František; Pokorná, Olga

    2013-04-01

    Bloodstain pattern analysis (BPA) is a valid forensic method belonging to the category of biological methods that use trigonometric models. Over the years the method has been standardized and is used globally, recognized in standard procedure sheets. It permits exact analysis of the dynamic and characteristic properties of bloodstains after impact on surfaces such as floors, walls and ceilings, various exterior and interior items, and clothing. It is also possible to determine the characteristics of blood originating from the outer part of the body. Depending on the presence and quantity of blood, the method can also be used to verify reconstructions of criminal acts, provided its validity is tested under the primary condition of preserved and readable traces of blood. Even though this method is not considered the major or only one, the information obtained in this way can be used for judicial purposes. In our research, we tested the validity of this method in an experimental model using firearms. We compared measurements of the lengths of the impact trajectories and the heights of blood sprayed upwards from distances of 1, 3, 5 and 10 meters. The experiment was based on two main presumptions: first, knowledge of the distance and the impact angle of the bloodstain; second, the ability of the blood to reach a certain height and the angle of its impact. In accordance with trigonometric formulas, both the impact distance of the selected drops of blood and the height of the selected bloodstain could be determined without any verification of the flight trajectory and the distance of the bloodstains. The results indicate that the values given by the method differ from the real values, increasingly so with distance from the indicated spot of the shot. Apart from the unique values that were calculated, the other results for the impact distance of the bloodstain drops were lower than the real values, and the values concerning the height of the bloodstains after the shot were higher than the real values. Despite the lack of total accuracy, we recommend using this method widely and more often for the investigation and verification of individual acts in criminal and forensic practice.
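
    The underlying trigonometry is standard in BPA: the impact angle follows from the stain's width-to-length ratio, and a straight-line estimate of the height of origin follows from that angle and the horizontal distance. As the study's deviations suggest, this sketch ignores the curved flight path of real drops:

```python
import math

def impact_angle(width, length):
    """Impact angle (degrees) of an elliptical bloodstain:
    alpha = asin(W / L)."""
    return math.degrees(math.asin(width / length))

def origin_height(distance, width, length):
    """Straight-line (trigonometric) estimate of the height of origin
    above the surface: h = d * tan(alpha)."""
    return distance * math.tan(math.asin(width / length))
```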

  4. Cone photoreceptor definition on adaptive optics retinal imaging

    PubMed Central

    Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon

    2014-01-01

    Aims To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. Methods High resolution retinal images were acquired from 10 healthy subjects, aged 20–35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. Results At 5° eccentricity, the cone density (cones/mm2 mean±SD) was 15.3±1.4×103 (automated) and 13.9±1.0×103 (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Conclusions Imaging data acquired from the AO camera match cone density, intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects. PMID:24729030
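Cone density and inter-photoreceptor distance can be computed from the cone coordinates identified in the AO image; a sketch assuming a mean nearest-neighbour definition of inter-photoreceptor distance (the abstract does not specify the exact definition used):

```python
import math

def mean_nn_distance(points):
    """Mean nearest-neighbour distance between cone centres (same units as input)."""
    dists = []
    for i, (xi, yi) in enumerate(points):
        nn = min(math.hypot(xi - xj, yi - yj)
                 for j, (xj, yj) in enumerate(points) if j != i)
        dists.append(nn)
    return sum(dists) / len(dists)

def cone_density(n_cones, area_um2):
    """Cones per mm^2 from a count over a sampling window given in um^2."""
    return n_cones / area_um2 * 1e6

# Regular hexagonal lattice with 8.6 um spacing, matching the reported
# inter-photoreceptor distance; every point's nearest neighbour is 8.6 um away.
s = 8.6
pts = [(s * (q + 0.5 * r), s * r * math.sqrt(3) / 2)
       for q in range(5) for r in range(5)]
nn = mean_nn_distance(pts)
```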

  5. A Semiautomated Method for Measuring the 3-Dimensional Fabric to Renal Artery Distances to Determine Endograft Position After Endovascular Aneurysm Repair.

    PubMed

    Schuurmann, Richte C L; Overeem, Simon P; Ouriel, Kenneth; Slump, Cornelis H; Jordan, William D; Muhs, Bart E; de Vries, Jean-Paul P M

    2017-10-01

    To report a methodology for 3-dimensional (3D) assessment of the stent-graft deployment accuracy after endovascular aneurysm repair (EVAR). A methodology was developed and validated to calculate the 3D distances between the endograft fabric and the renal arteries over the curve of the aorta. The shortest distance between one of the renal arteries and the fabric (SFD) and the distance from the contralateral renal artery to the fabric (CFD) were determined on the first postoperative computed tomography (CT) scan of 81 elective EVAR patients. The SFDs were subdivided into a target position (0-3 mm distal to the renal artery), high position (partially covering the renal artery), and low position (>3 mm distal to the renal artery). Data are reported as the median (interquartile range, IQR). Intra- and interobserver agreements for automatic and manual calculation of the SFD and CFD were excellent (ICC >0.892, p<0.001). The median SFD was 1.4 mm (IQR -0.9, 3.0) and the median CFD was 8.0 mm (IQR 3.9, 14.2). The target position was achieved in 44%, high position in 30%, and low position in 26% of the patients. The median slope of the endograft toward the higher renal artery was 2.5° (IQR -5.5°, 13.9°). The novel methodology using 3D CT reconstructions enables accurate evaluation of endograft position and slope within the proximal aortic neck. In this series, only 44% of endografts were placed within the target position with regard to the lowermost renal artery.
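A distance "over the curve of the aorta" is an arc length along a 3D path rather than a straight-line measurement; a sketch of the difference, using a hypothetical quarter-circle centerline (not the authors' reconstruction pipeline):

```python
import math

def polyline_length(points):
    """Arc length along a 3-D polyline, e.g. a path following a curved vessel."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def chord_length(points):
    """Straight-line distance between the endpoints, for comparison."""
    return math.dist(points[0], points[-1])

# Quarter circle of radius 10 mm sampled at 1-degree steps.
path = [(10 * math.cos(math.radians(t)), 10 * math.sin(math.radians(t)), 0.0)
        for t in range(91)]
arc = polyline_length(path)    # close to 10 * pi / 2
chord = chord_length(path)     # close to 10 * sqrt(2), noticeably shorter
```

The gap between the two numbers illustrates why measuring over the curve matters in angulated necks.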

  6. Testing the Distance Scale of the Gaia TGAS Catalogue by the Kinematic Method

    NASA Astrophysics Data System (ADS)

    Bobylev, V. V.; Bajkova, A. T.

    2018-03-01

    We have studied the simultaneous and separate solutions of the basic kinematic equations obtained using the stellar velocities calculated on the basis of data from the Gaia TGAS and RAVE5 catalogues. By comparing the values of Ω'0 found by separately analyzing only the line-of-sight velocities of stars and only their proper motions, we have determined the distance scale correction factor p to be close to unity, 0.97 ± 0.04. Based on the proper motions of stars from the Gaia TGAS catalogue with relative trigonometric parallax errors less than 10% (they are at a mean distance of 226 pc), we have found the components of the group velocity vector for the sample stars relative to the Sun (U, V, W)⊙ = (9.28, 20.35, 7.36) ± (0.05, 0.07, 0.05) km s-1, the angular velocity of Galactic rotation Ω0 = 27.24 ± 0.30 km s-1 kpc-1, and its first derivative Ω'0 = -3.77 ± 0.06 km s-1 kpc-2; here, the circular rotation velocity of the Sun around the Galactic center is V0 = 218 ± 6 km s-1 (for the adopted distance R0 = 8.0 ± 0.2 kpc), while the Oort constants are A = 15.07 ± 0.25 km s-1 kpc-1 and B = -12.17 ± 0.39 km s-1 kpc-1, p = 0.98 ± 0.08. The kinematics of Gaia TGAS stars with parallax errors of more than 10% has been studied by invoking the distances from a paper by Astraatmadja and Bailer-Jones that were corrected for the Lutz-Kelker bias. We show that the second derivative of the angular velocity of Galactic rotation, Ω″0 = 0.864 ± 0.021 km s-1 kpc-3, is well determined from stars at a mean distance of 537 pc. On the whole, we have found that the distances of stars from the Gaia TGAS catalogue calculated using their trigonometric parallaxes do not require any additional correction factor.
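The quoted rotation values are internally consistent: the standard relations V0 = R0Ω0, A = -½R0Ω'0 and B = A - Ω0 reproduce the abstract's numbers to within rounding, as a quick check shows:

```python
R0 = 8.0            # kpc, adopted solar Galactocentric distance
Omega0 = 27.24      # km/s/kpc, angular velocity of Galactic rotation
dOmega0 = -3.77     # km/s/kpc^2, first derivative of Omega

V0 = R0 * Omega0            # circular rotation velocity of the Sun, km/s
A = -0.5 * R0 * dOmega0     # Oort constant A, km/s/kpc
B = A - Omega0              # Oort constant B, km/s/kpc
```

This gives V0 ≈ 217.9 km/s, A ≈ 15.08 and B ≈ -12.16, matching the quoted 218 ± 6, 15.07 ± 0.25 and -12.17 ± 0.39.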

  7. Multi-Objective UAV Mission Planning Using Evolutionary Computation

    DTIC Science & Technology

    2008-03-01

    [Search-result snippet; only fragments of the thesis text are preserved.] The fragments concern the crowding distance calculation, illustrated in the thesis's Figure 4.3 (dark points are non-dominated solutions), and the SPEA2 algorithm, developed by Zitzler as an improvement to the original SPEA algorithm.
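The crowding distance referenced in the fragments above is the density estimator used by NSGA-II and related multi-objective evolutionary algorithms; a minimal sketch (not taken from the thesis):

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of objective vectors.
    Boundary solutions get infinite distance; interior solutions accumulate
    the normalized side lengths of the surrounding cuboid per objective."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for a in range(1, n - 1):
            i = order[a]
            dist[i] += (front[order[a + 1]][k] - front[order[a - 1]][k]) / (hi - lo)
    return dist

# Three non-dominated points on a 2-objective front (minimization).
d = crowding_distance([(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)])
```

Solutions with larger crowding distance lie in sparser regions of the front and are preferred during selection to preserve diversity.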

  8. The Molecular Structure of cis-FONO

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Dateo, Christopher E.; Rice, Julia E.; Langhoff, Stephen R. (Technical Monitor)

    1994-01-01

    The molecular structure of cis-FONO has been determined with the CCSD(T) correlation method using an spdf-quality basis set. In agreement with previous coupled-cluster calculations but in disagreement with density functional theory, cis-FONO is found to exhibit normal bond distances. The quadratic and cubic force fields of cis-FONO have also been determined in order to evaluate the effect of vibrational averaging on the molecular geometry. Vibrational averaging is found to increase bond distances, as expected, but it does not affect the qualitative nature of the bonding. The CCSD(T)/spdf harmonic frequencies of cis-FONO support our previous assertion that a band observed at 1200 cm-1 is a combination band (υ3 + υ4), and not a fundamental.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Li; Fuhrer, Tobias; Schaefer, Bastian

    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are quantities not directly suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and, in particular, it allows structures to be distinguished. The new method can be a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms, and high-throughput screenings.
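The abstract does not give the fingerprint's functional form; purely as an illustration of the idea, here is a toy fingerprint built from each atom's local interatomic distances, sorted so the resulting distance is independent of atom labelling (a sketch, not the authors' descriptor):

```python
import math

def fingerprint(positions, k=2):
    """Toy fingerprint: for each atom, its k smallest interatomic distances,
    collected and sorted so the result does not depend on atom ordering."""
    per_atom = []
    for i, pi in enumerate(positions):
        d = sorted(math.dist(pi, pj) for j, pj in enumerate(positions) if j != i)
        per_atom.append(tuple(d[:k]))
    return sorted(per_atom)

def config_distance(pos_a, pos_b, k=2):
    """Euclidean norm between fingerprints: symmetric, zero for identical
    fingerprints, and inheriting the triangle inequality from the 2-norm."""
    fa, fb = fingerprint(pos_a, k), fingerprint(pos_b, k)
    return math.sqrt(sum((x - y) ** 2
                         for ta, tb in zip(fa, fb) for x, y in zip(ta, tb)))

sq = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]   # unit square cluster
sq_perm = [sq[2], sq[0], sq[3], sq[1]]              # same structure, relabelled
```

Relabelling the atoms leaves the distance at zero, while geometrically distorting the structure makes it positive, which is the behaviour a configurational metric needs.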

  10. An ab initio study of the structure and atomic transport in bulk liquid Ag and its liquid-vapor interface

    NASA Astrophysics Data System (ADS)

    del Rio, Beatriz G.; González, David J.; González, Luis E.

    2016-10-01

    Several static and dynamic properties of bulk liquid Ag at a thermodynamic state near its triple point have been calculated by means of ab initio molecular dynamics simulations. The calculated static structure shows a very good agreement with the available experimental data. The dynamical structure reveals propagating excitations whose dispersion at long wavelengths is compatible with the experimental sound velocity. Results are also reported for other transport coefficients. Additional simulations have also been performed so as to study the structure of the free liquid surface. The calculated longitudinal ionic density profile shows an oscillatory behaviour, whose properties are analyzed through macroscopic and microscopic methods. The intrinsic X-ray reflectivity of the surface is predicted to show a layering peak associated with the interlayer distance.

  11. Comparison of SAR and induced current densities in adults and children exposed to electromagnetic fields from electronic article surveillance devices

    NASA Astrophysics Data System (ADS)

    Martínez-Búrdalo, M.; Sanchis, A.; Martín, A.; Villar, R.

    2010-02-01

    Electronic article surveillance (EAS) devices are widely used in most stores as anti-theft systems. In this work, the compliance with international guidelines in the human exposure to these devices is analysed by using the finite-difference time-domain (FDTD) method. Two sets of high resolution numerical phantoms of different size (REMCOM/Hershey and Virtual Family), simulating adult and child bodies, are exposed to a 10 MHz pass-by panel-type EAS consisting of two overlapping current-carrying coils. Two different relative positions between the EAS and the body (frontal and lateral exposures), which imply the exposure of different parts of the body at different distances, have been considered. In all cases, induced current densities in tissues of the central nervous system and specific absorption rates (SARs) are calculated to be compared with the limits from the guidelines. Results show that induced current densities are lower in the case of adult models as compared with those of children in both lateral and frontal exposures. Maximum SAR values calculated in lateral exposure are significantly lower than those calculated in frontal exposure, where the EAS-body distance is shorter. Nevertheless, in all studied cases, with an EAS driving current of 4 A rms, maximum induced current and SAR values are below basic restrictions.

  13. Interlayer interaction and mechanical properties in multi-layer graphene, Boron-Nitride, Aluminum-Nitride and Gallium-Nitride graphene-like structure: A quantum-mechanical DFT study

    NASA Astrophysics Data System (ADS)

    Ghorbanzadeh Ahangari, Morteza; Fereidoon, A.; Hamed Mashhadzadeh, Amin

    2017-12-01

    In the present study, we investigated the mechanical, electronic and interlayer properties of mono-, bi- and tri-layer Boron-Nitride (B-N), Aluminum-Nitride (Al-N) and Gallium-Nitride (Ga-N) graphene-like sheets and compared these results with those obtained for carbon graphene (C-graphene). To this end, we first optimized the geometrical parameters of these sheets using the density functional theory (DFT) method. We then calculated the Young's modulus of each sheet by compressing and then elongating it in small increments. Our results indicate that the Young's modulus of the sheets does not change appreciably with the number of layers. We also found that carbon graphene has the greatest Young's modulus among the studied sheets because of the smallest equilibrium distance between its atoms. Next, we modeled the van der Waals interfacial interaction existing between two sheets with a classical spring model, using the general form of the Lennard-Jones (L-J) potential for all of the mentioned sheets. To obtain the L-J parameters (ε and σ), the potential energy between the layers of each sheet was plotted as a function of the separation distance. Moreover, the density of states (DOS) was calculated to better understand the electronic properties of these systems.
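Assuming the pairwise 12-6 form of the L-J potential (the "general form" used in the study may differ), ε and σ follow directly from the depth and location of the plotted energy minimum, since E(r_min) = -ε and r_min = 2^(1/6)σ. The binding-curve numbers below are hypothetical:

```python
def lj(r, eps, sigma):
    """12-6 Lennard-Jones potential: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def lj_parameters(r_min, e_min):
    """Recover (eps, sigma) from the location and depth of the energy minimum,
    using r_min = 2**(1/6) * sigma and E(r_min) = -eps for the 12-6 form."""
    return -e_min, r_min / 2 ** (1 / 6)

# Hypothetical interlayer binding minimum: 3.35 A separation, -25 meV depth.
eps, sigma = lj_parameters(3.35, -0.025)
```

With ε and σ in hand, the interlayer stiffness of the classical spring model follows from the curvature of `lj` at `r_min`.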

  14. Luminescence of Mn4+ ions in CaTiO3 and MgTiO3 perovskites: Relationship of experimental spectroscopic data and crystal field calculations

    NASA Astrophysics Data System (ADS)

    Đorđević, Vesna; Brik, Mikhail G.; Srivastava, Alok M.; Medić, Mina; Vulić, Predrag; Glais, Estelle; Viana, Bruno; Dramićanin, Miroslav D.

    2017-12-01

    Herein, the synthesis, structural and crystal field analysis and optical spectroscopy of Mn4+ doped metal titanates ATiO3 (A = Ca, Mg) are presented. Materials of the desired phase were prepared in powder form by a molten salt assisted sol-gel method. Crystallographic data of the samples were obtained by refinement of X-ray diffraction measurements. From the experimental excitation and emission spectra and the structural data, crystal field parameters and energy levels of Mn4+ in CaTiO3 and MgTiO3 were calculated by the exchange charge model of crystal-field theory. It is found that the crystal field strength is lower (Dq = 1831 cm-1) in the rhombohedral ilmenite MgTiO3 structure due to its relatively longer average Mn4+-O2- bond distance (2.059 Å), and higher (Dq = 2017 cm-1) in orthorhombic CaTiO3, which possesses a shorter average Mn4+-O2- bond distance (1.956 Å). The spectral positions of the Mn4+ 2Eg → 4A2g transition maxima are 709 nm in MgTiO3 and 717 nm in CaTiO3, respectively, in good agreement with the calculated values.

  15. Landslide susceptibility mapping by combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process in Dozain basin

    NASA Astrophysics Data System (ADS)

    Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.

    2014-10-01

    Landslides are among the most important natural hazards that lead to modification of the environment; studying this phenomenon is therefore important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using the Fuzzy Logic, frequency ratio and Analytical Hierarchy Process methods in the Dozein basin, Iran. First, landslides that had occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from fault, distance from road and distance from river, were obtained from different sources and maps. Using these factors and the identified landslides, the fuzzy membership values were calculated by the frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, the weight of each factor was determined based on a questionnaire and the AHP method. Finally, the fuzzy map of each factor was multiplied by its AHP-derived weight. To compute the prediction accuracy, the produced map was verified by comparing it with existing landslide locations. The results indicate that combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process yields a relatively good estimator of landslide susceptibility in the study area. According to the landslide susceptibility map, about 51% of the observed landslides fall into its high and very high susceptibility zones, but approximately 26% of them are located in the low and very low susceptibility zones.
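The AHP weighting step can be sketched with the row geometric-mean approximation to the principal eigenvector of a pairwise comparison matrix; the three-factor comparison values below are hypothetical, not the study's questionnaire results:

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the row geometric-mean method (weights normalized to sum to 1)."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

# Hypothetical comparison of three factors, e.g. slope vs lithology vs
# distance from river, on Saaty's 1-9 scale (M[i][j] = 1 / M[j][i]).
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(M)
```

Each factor's fuzzy membership map is then multiplied by its weight `w[i]` and the weighted maps are combined into the susceptibility map.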

  16. Feature selection for the classification of traced neurons.

    PubMed

    López-Cabrera, José D; Lorenzo-Ginori, Juan V

    2018-06-01

    The great availability of computational tools to calculate the properties of traced neurons leads to the existence of many descriptors which allow the automated classification of neurons from these reconstructions. This situation creates the need to eliminate irrelevant features as well as to select the most appropriate among them, in order to improve the quality of the classification obtained. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, which is one of the most widely used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques such as filter, wrapper, embedded and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers, among which Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table and the Logistic classifier were used as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared and the subsets returned were tested in classification tasks with different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Blocked Force and Loading Calculations for LaRC THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.

    2007-01-01

    An analytic approach is developed to predict the performance of LaRC THUNDER actuators under load and under blocked conditions. The problem is treated with the Von Karman non-linear analysis combined with a simple Rayleigh-Ritz calculation. From this, shape and displacement under combined load and voltage are calculated. A method is found to calculate the blocked force vs. voltage and the spring force vs. distance. It is found that under certain conditions, the blocked force and displacement are almost linear with voltage. It is also found that the spring force is multivalued and has at least one bifurcation point. This bifurcation point is where the device collapses under load and locks to a different bending solution, which occurs at a particular critical load. It is shown that this other bending solution has a reduced amplitude that is proportional to the original amplitude times the square of the aspect ratio.

  18. Spatial accessibility to vaccination sites in a campaign against rabies in São Paulo city, Brazil.

    PubMed

    Polo, Gina; Acosta, Carlos Mera; Dias, Ricardo Augusto

    2013-08-01

    It is estimated that the city of São Paulo has over 2.5 million dogs and 560 thousand cats. These populations are irregularly distributed throughout the territory, making it difficult to appropriately allocate health services focused on these species. To reasonably allocate vaccination sites, it is necessary to identify social groups and their access to the referred service. Rabies in dogs and cats has been an important zoonotic health issue in São Paulo, and the key component of rabies control is vaccination. The present study aims to introduce an approach to quantify the potential spatial accessibility to the vaccination sites of the 2009 campaign against rabies in the city of São Paulo and to address the overestimation associated with the classic methodology that applies buffer zones around vaccination sites based on Euclidean (straight-line) distance. To achieve this, a Gaussian-based two-step floating catchment area method with a travel-friction coefficient was adapted in a geographic information system environment, using distances along a street network based on Dijkstra's algorithm (shortest-path method). The choice of the distance calculation method affected the results in terms of the population covered. In general, areas with low accessibility for both dogs and cats were observed, especially in densely populated areas. The eastern zone of the city had higher accessibility values compared with peripheral and central zones. The Gaussian-based two-step floating catchment method with a travel-friction coefficient was used to assess the overestimation of the straight-line distance method, which is the most widely used method for coverage analysis. We conclude that this approach has the potential to improve the efficiency of resource use when planning rabies control programs in large urban environments such as São Paulo. The findings emphasize the need for surveillance and intervention in isolated areas. Copyright © 2013 Elsevier B.V. All rights reserved.
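A minimal sketch of a Gaussian-based two-step floating catchment area calculation (the decay form and catchment handling below are assumptions; the study's travel-friction coefficient is not modelled here):

```python
import math

def gaussian_weight(d, d0):
    """Gaussian distance decay within the catchment radius d0, zero beyond it."""
    if d > d0:
        return 0.0
    return math.exp(-0.5 * (d / d0) ** 2)

def two_step_fca(demand, supply, dist, d0):
    """Gaussian two-step floating catchment area accessibility.
    demand[i]: animals at location i; supply[j]: capacity of site j;
    dist[i][j]: network (not straight-line) distance between them."""
    # Step 1: distance-weighted supply-to-demand ratio of each vaccination site.
    ratio = []
    for j, s in enumerate(supply):
        pop = sum(demand[i] * gaussian_weight(dist[i][j], d0)
                  for i in range(len(demand)))
        ratio.append(s / pop if pop > 0 else 0.0)
    # Step 2: accessibility of each demand location as the weighted sum of ratios.
    return [sum(gaussian_weight(dist[i][j], d0) * r
                for j, r in enumerate(ratio))
            for i in range(len(demand))]

acc = two_step_fca(demand=[100, 200], supply=[50.0],
                   dist=[[1.0], [10.0]], d0=5.0)
```

Here the second demand location lies beyond the 5 km catchment, so its accessibility is zero, which is exactly the overestimation a straight-line buffer would miss.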

  19. Euler solutions to nonlinear acoustics of non-lifting rotor blades

    NASA Technical Reports Server (NTRS)

    Baeder, J. D.

    1991-01-01

    For the first time a computational fluid dynamics (CFD) method is used to calculate directly the high-speed impulsive (HSI) noise of a non-lifting hovering rotor blade out to a distance of over three rotor radii. In order to accurately propagate the acoustic wave in a stable and efficient manner, an implicit upwind-biased Euler method is solved on a grid with points clustered along the line of propagation. A detailed validation of the code is performed for a rectangular rotor blade at tip Mach numbers ranging from 0.88 to 0.92. The agreement with experiment is excellent at both the sonic cylinder and at 2.18 rotor radii. The agreement at 3.09 rotor radii is still very good, showing improvements over the results from the best previous method. Grid sensitivity studies indicate that with special attention to the location of the boundaries a grid with approximately 60,000 points is adequate. This results in a computational time of approximately 40 minutes on a Cray-XMP. The practicality of the method to calculate HSI noise is demonstrated by expanding the scope of the investigation to examine the rectangular blade as well as a highly swept and tapered blade over a tip Mach number range of 0.80 to 0.95. Comparisons with experimental data are excellent and the advantages of planform modifications are clearly evident. New insight is gained into the mechanisms of nonlinear propagation and the minimum distance at which a valid comparison of different rotors can be made: approximately two rotor radii from the center of rotation.

  1. The Araucaria project. The distance to the small Magellanic Cloud from late-type eclipsing binaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graczyk, Dariusz; Pietrzyński, Grzegorz; Gieren, Wolfgang

    2014-01-01

    We present a distance determination to the Small Magellanic Cloud (SMC) based on an analysis of four detached, long-period, late-type eclipsing binaries discovered by the Optical Gravitational Lensing Experiment (OGLE) survey. The components of the binaries show negligible intrinsic variability. A consistent set of stellar parameters was derived with low statistical and systematic uncertainty. The absolute dimensions of the stars are calculated with a precision of better than 3%. The surface brightness-infrared color relation was used to derive the distance to each binary. The four systems clump around a distance modulus of (m - M) = 18.99 with a dispersion of only 0.05 mag. Combining these results with the distance published by Graczyk et al. for the eclipsing binary OGLE SMC113.3 4007, we obtain a mean distance modulus to the SMC of 18.965 ± 0.025 (stat.) ± 0.048 (syst.) mag. This corresponds to a distance of 62.1 ± 1.9 kpc, where the error includes both uncertainties. Taking into account other recently published determinations of the SMC distance, we calculated the distance modulus difference between the SMC and the Large Magellanic Cloud to be 0.458 ± 0.068 mag. Finally, we advocate μSMC = 18.95 ± 0.07 as a new 'canonical' value of the distance modulus to this galaxy.
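The conversion from distance modulus to distance is the standard relation μ = 5 log10(d / 10 pc); applied to the quoted modulus it reproduces the 62.1 kpc figure:

```python
def modulus_to_kpc(mu):
    """Distance in kpc from a distance modulus: d = 10**(1 + mu/5) parsecs."""
    return 10 ** (1.0 + mu / 5.0) / 1000.0

d_smc = modulus_to_kpc(18.965)   # the abstract's mean SMC distance modulus
```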

  2. Nontraditional method for determining unperturbed orbits of unknown space objects using incomplete optical observational data

    NASA Astrophysics Data System (ADS)

    Perov, N. I.

    1985-02-01

    A physical-geometrical method for computing the orbits of earth satellites on the basis of an inadequate number of angular observations (N ≤ 3) was developed. Specifically, a new method has been developed for calculating the elements of Keplerian orbits of unidentified artificial satellites using two angular observations (αk, δk, k = 1, 2). The first section gives procedures for determining the topocentric distance to an AES on the basis of one optical observation. This is followed by a description of a very simple method for determining unperturbed orbits using two satellite position vectors and a time interval, which is applicable even in the case of antiparallel AES position vectors, a method designated the R2 iterations method.

  3. Supramolecular structures for determination and identification of the bond lengths in novel uranyl complexes from their infrared spectra

    NASA Astrophysics Data System (ADS)

    El-Sonbati, A. Z.; Diab, M. A.; Morgan, Sh. M.; Seyam, H. A.

    2018-02-01

    Novel dioxouranium (VI) heterochelates with neutral bidentate compounds (Ln) have been synthesized. The ligands and the heterochelates [UO2(Ln)2(O2NO)2] were confirmed and characterized by elemental analysis, 1H NMR, UV-Vis, IR and mass spectroscopy, X-ray diffraction and thermogravimetric analysis (TGA). IR spectral data suggest that the molecules of the Schiff base are coordinated to the central uranium atom (ON donor). The nitrato groups are coordinated as bidentate ligands. The thermodynamic parameters were calculated using the Coats-Redfern and Horowitz-Metzger methods. For the ligands (Ln) and their complexes (1-3), the υ3 frequency of UO22+ has been shown to be an excellent molecular probe for studying the coordinating power of the ligands. The values of υ3 of the prepared complexes containing UO22+ were successfully used to calculate the force constant, FU-O (in 10-8 N/Å), and the bond length, RU-O (Å), of the U-O bond. A strategy based upon both theoretical and experimental investigations has been adopted. The theoretical aspects are described in terms of the well-known theory of 5d-4f transitions. Wilson's matrix method, Badger's formula, and the Jones and El-Sonbati equations were used to calculate the U-O bond distances from the values of the stretching and interaction force constants. The most probable correlation between the U-O force constant and the U-O bond distance was satisfactorily discussed in terms of Badger's rule and the equations suggested by Jones and El-Sonbati. The effect of Hammett's constant is also discussed.

  4. [Study of the effect of heat source separation distance on plasma physical properties in laser-pulsed GMAW hybrid welding based on spectral diagnosis technique].

    PubMed

    Liao, Wei; Hua, Xue-Ming; Zhang, Wang; Li, Fang

    2014-05-01

    In the present paper, the authors calculated the plasma's peak electron temperatures under different heat source separation distances in laser-pulsed GMAW hybrid welding based on Boltzmann spectrometry. The plasma's peak electron densities under the corresponding conditions were also calculated by using the Stark width of the plasma spectrum. Combined with high-speed photography, the effect of the heat source separation distance on electron temperature and electron density was studied. The results show that with increasing heat source separation distance, the electron temperatures and electron densities of the laser plasma did not change significantly. However, the electron temperatures of the arc plasma decreased, and the electron densities of the arc plasma first increased and then decreased.
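Boltzmann spectrometry derives the electron temperature from the slope of ln(Iλ/gA) against upper-level energy across several spectral lines; a sketch with synthetic line data (the wavelengths and transition constants below are made up, chosen only so the fit recovers the input temperature):

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_temperature(lines):
    """Electron temperature (K) from a Boltzmann plot.
    lines: (intensity, wavelength_nm, g*A, E_upper_eV) for each spectral line.
    Fits ln(I*lambda/gA) = -E/(kT) + C by least squares and returns T."""
    xs = [e for _, _, _, e in lines]
    ys = [math.log(i * lam / ga) for i, lam, ga, _ in lines]
    n = len(lines)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return -1.0 / (K_B_EV * slope)

# Synthetic lines generated for T = 12000 K, so the fit should recover it.
T_true = 12000.0
lines = [(math.exp(-e / (K_B_EV * T_true)) * ga / lam, lam, ga, e)
         for lam, ga, e in [(510.5, 2.0e8, 3.8), (515.3, 6.0e8, 6.2),
                            (521.8, 1.5e8, 4.5)]]
T = boltzmann_temperature(lines)
```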

  5. Ab initio Potential Energy Surface for H-H2

    NASA Technical Reports Server (NTRS)

    Partridge, Harry; Bauschlicher, Charles W., Jr.; Stallcop, James R.; Levin, Eugene

    1993-01-01

    Ab initio calculations employing large basis sets are performed to determine an accurate potential energy surface for H-H2 interactions for a broad range of separation distances. At large distances, the spherically averaged potential determined from the calculated energies agrees well with the corresponding results determined from dispersion coefficients; the van der Waals well depth is predicted to be 75 ± μEh. Large basis sets have also been applied to reexamine the accuracy of theoretical repulsive potential energy surfaces. Multipolar expansions of the computed H-H2 potential energy surface are reported for four internuclear separation distances (1.2, 1.401, 1.449, and 1.7 a0) of the hydrogen molecule. The differential elastic scattering cross section calculated from the present results is compared with the measurements from a crossed beam experiment.

  6. Carbon footprint of patient journeys through primary care: a mixed methods approach.

    PubMed

    Andrews, Elizabeth; Pearson, David; Kelly, Charlotte; Stroud, Laura; Rivas Perez, Martin

    2013-09-01

    The NHS has a target of cutting its carbon dioxide (CO2) emissions by 80% below 1990 levels by 2050. Travel comprises 17% of the NHS carbon footprint. This carbon footprint represents the total CO2 emissions caused directly or indirectly by the NHS. Patient journeys have previously been planned largely without regard to the environmental impact. The potential contribution of 'avoidable' journeys in primary care is significant. To investigate the carbon footprint of patients travelling to and from a general practice surgery, the issues involved, and potential solutions for reducing patient travel. A mixed methods study in a medium-sized practice in Yorkshire. During March 2012, 306 patients completed a travel survey. GIS maps of patients' travel (modes and distances) were produced. Two focus groups (12 clinical and 13 non-clinical staff) were recorded, transcribed, and analysed using a thematic framework approach. The majority (61%) of patient journeys to and from the surgery were made by car or taxi; main reasons cited were 'convenience', 'time saving', and 'no alternative' for accessing the surgery. Using distances calculated via ArcGIS, the annual estimated CO2 equivalent carbon emissions for the practice totalled approximately 63 tonnes. Predominant themes from interviews related to issues with systems for booking appointments and repeat prescriptions; alternative travel modes; delivering health care; and solutions to reducing travel. The modes and distances of patient travel can be accurately determined and allow appropriate carbon emission calculations for GP practices. Although challenging, there is scope for identifying potential solutions (for example, modifying administration systems and promoting walking) to reduce 'avoidable' journeys and cut carbon emissions while maintaining access to health care.
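The footprint arithmetic behind a study like this reduces to summing distance times a per-mode emission factor over all journeys. A minimal sketch; the emission factors below are hypothetical placeholders, not the factors or data used in the study:

```python
# Hypothetical emission factors in kg CO2e per passenger-km (illustrative
# only; real studies use published conversion factors for each mode).
EMISSION_FACTORS = {"car": 0.17, "taxi": 0.20, "bus": 0.10,
                    "walk": 0.0, "cycle": 0.0}

def annual_travel_footprint(journeys):
    """Total CO2e (kg) over (mode, round_trip_km, trips_per_year) records."""
    return sum(EMISSION_FACTORS[mode] * dist_km * trips
               for mode, dist_km, trips in journeys)

# Toy survey records: mode, round-trip distance (e.g. from a GIS), trips/year.
sample = [("car", 3.2, 12), ("bus", 5.0, 6), ("walk", 0.8, 20)]
total_kg = annual_travel_footprint(sample)
```

Scaling such per-patient totals to a practice's full patient list gives the kind of annual tonnage reported above.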

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Linden, P

    Purpose: To study the frequency of Multi-Leaf Collimator (MLC) leaf failures, investigate methods to predict them, and reduce linac downtime. Methods: A Varian HD120 MLC was used in our study. The hyperterminal MLC errors logged from 06/2012 to 12/2014 were collected, along with the MLC motor changes and all other MLC interventions by the linear accelerator engineer. The MLC dynalog files were also recorded on a daily basis for each treatment and during linac QA. The dynalog files were analyzed to calculate root mean square (RMS) errors and the cumulative MLC travel distance per motor. An in-house MATLAB code was used to analyze all dynalog files, record RMS errors and calculate the distance each MLC leaf traveled per day. Results: A total of 269 interventions were recorded over a period of 18 months. Of these, 146 included an MLC leaf motor change, 39 T-nut replacements, and 84 MLC cleaning sessions. Leaves close to the middle of each side required the most maintenance: in the A bank, leaves A27 to A40 accounted for 73% of all interventions, while the same leaves in the B bank accounted for 52%. On average, leaves in the middle of the bank had their motors changed approximately every 1500 m of travel. Finally, it was found that the number of RMS errors increased prior to an MLC motor change. Conclusion: An MLC dynalog file analysis software tool was developed that can be used to log daily MLC usage. Our eighteen-month data analysis showed a correlation between the distance an MLC leaf travels, the RMS errors and the life of the MLC motor. We plan to use this tool to predict MLC motor failures and, with proper and timely intervention, reduce the downtime of the linac during clinical hours.
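The two per-leaf quantities the analysis tracks, RMS position error and cumulative travel, are simple reductions over the logged leaf positions. A schematic sketch that assumes the planned and actual positions have already been parsed out of the (Varian-specific) dynalog format, which is not reproduced here:

```python
import math

def leaf_stats(planned, actual):
    """RMS of planned-vs-actual leaf position error and cumulative travel.

    planned, actual: equal-length sequences of one leaf's positions (mm)
    sampled over a treatment, as extracted from a dynalog file.
    """
    errs = [a - p for p, a in zip(planned, actual)]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    # Cumulative travel: total absolute motion between successive samples.
    travel = sum(abs(b - a) for a, b in zip(actual, actual[1:]))
    return rms, travel

# Toy position trace for one leaf (mm).
planned = [0.0, 5.0, 10.0, 15.0, 10.0]
actual  = [0.0, 4.8, 10.1, 15.2,  9.9]
rms, travel = leaf_stats(planned, actual)
```

Summing `travel` per motor across days gives the odometer-style figure behind the "motor change every ~1500 m" observation.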

  8. 3He NMR studies on helium-pyrrole, helium-indole, and helium-carbazole systems: a new tool for following chemistry of heterocyclic compounds.

    PubMed

    Radula-Janik, Klaudia; Kupka, Teobald

    2015-02-01

    The (3)He nuclear magnetic shieldings were calculated for the free helium atom and for He-pyrrole, He-indole, and He-carbazole complexes. Several levels of theory, including Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT) (VSXC, M062X, APFD, BHandHLYP, and mPW1PW91), combined with polarization-consistent pcS-2 and aug-pcS-2 basis sets, were employed. Gauge-including atomic orbital (GIAO) calculated (3)He nuclear magnetic shieldings accurately reproduced previously reported theoretical values for helium gas. The (3)He nuclear magnetic shieldings and energy changes resulting from a single helium atom approaching the five-membered ring of pyrrole, indole, and carbazole were examined. It was observed that the (3)He NMR parameters of a single helium atom, calculated at various levels of theory (HF, MP2, and DFT), are sensitive to the presence of heteroatomic rings. The helium atom was insensitive to the studied molecules at distances above 5 Å. Our results, obtained with the BHandHLYP method, predicted fairly accurately the He-pyrrole plane separation of 3.15 Å (close to 3.24 Å, calculated by MP2) and yielded a sizable (3)He NMR chemical shift (about -1.5 ppm). The changes of the calculated nucleus-independent chemical shifts (NICS) with distance above the rings showed a pattern very similar to that of the helium-3 NMR chemical shift. The ring currents above the five-membered rings were sensed by the helium magnetic probe up to about 5 Å above the ring planes, as verified by the calculated NICS index. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Evaluation of dual flow thrust vectored nozzles with exhaust stream impingement. MS Thesis Final Technical Report, Oct. 1990 - Jul. 1991

    NASA Technical Reports Server (NTRS)

    Carpenter, Thomas W.

    1991-01-01

    The main objective of this project was to predict the expansion wave/oblique shock wave structure in an under-expanded jet expanding from a convergent nozzle. The shock structure was predicted by combining the calculated curvature of the free pressure boundary with the governing equations for oblique shock wave and expansion wave interaction, repeating the procedure until the shock pattern repeated itself. A mathematical model was then formulated and written in FORTRAN to calculate the oblique shock/expansion wave structure within the jet. To study shock waves in expanding jets, Schlieren photography, a form of flow visualization, was employed; thirty-six Schlieren photographs of jets from both a straight and a 15 degree nozzle were taken. An iterative procedure was developed to calculate the shock structure within the jet and predict the non-dimensional values of the Prandtl primary wavelength (w/rn), the distance to the Mach disc (Ld) and the Mach disc radius (rd). These values were compared with measurements taken from the Schlieren photographs and with previously published data, and agreed closely; the method provides excellent results for pressure ratios below that at which a Mach disc first forms. Calculated values of the non-dimensional distance to the Mach disc (Ld) agreed closely with measured and published values, while the calculated values of the non-dimensional Mach disc radius (rd) deviated from published data by as much as 25 percent at certain pressure ratios.

  10. Comparison of Dynamic Balance in Collegiate Field Hockey and Football Players Using Star Excursion Balance Test

    PubMed Central

    Bhat, Rashi; Moiz, Jamal Ali

    2013-01-01

    Purpose The preliminary study aimed to compare dynamic balance between collegiate athletes competing or training in football and hockey using the star excursion balance test. Methods A total of thirty university-level players, football (n = 15) and field hockey (n = 15), participated in the study. Dynamic balance was assessed using the star excursion balance test. The testing grid consists of 8 lines, each 120 cm in length, extending from a common point at 45° increments. The subjects were instructed to maintain a stable single-leg stance on the test leg with shoes off and to reach for maximal distance with the other leg in each of the 8 directions. A pencil was used to mark and read the distance to which each subject's foot reached. The normalized leg reach distances in each direction were summed for both limbs, and the total sum of the mean of the summed normalized distances of both limbs was calculated. Results There was no significant difference between the two groups in any direction of the star excursion balance test. Additionally, the difference in composite reach distances between the two groups was non-significant (P = 0.5). However, the posterior (P = 0.05) and lateral (P = 0.03) normalized reach distances were significantly greater in field hockey players. Conclusion Field hockey players and football players did not differ in terms of dynamic balance. PMID:24427482

  11. The Reliability of Panoramic Radiography Versus Cone Beam Computed Tomography when Evaluating the Distance to the Alveolar Nerve in the Site of Lateral Teeth.

    PubMed

    Česaitienė, Gabrielė; Česaitis, Kęstutis; Junevičius, Jonas; Venskutonis, Tadas

    2017-07-04

    BACKGROUND The aim of this study was to compare the reliability of panoramic radiography (PR) and cone beam computed tomography (CBCT) in the evaluation of the distance of the roots of lateral teeth to the inferior alveolar nerve canal (IANC). MATERIAL AND METHODS 100 PR and 100 CBCT images that met the selection criteria were selected from the database. In PR images, the distances were measured using an electronic caliper with 0.01 mm accuracy and white light x-ray film reviewer. Actual values of the measurements were calculated taking into consideration the magnification used in PR images (130%). Measurements on CBCT images were performed using i-CAT Vision software. Statistical data analysis was performed using R software and applying Welch's t-test and the Wilcoxon test. RESULTS There was no statistically significant difference in the mean distance from the root of the second premolar and the mesial and distal roots of the first molar to the IANC between PR and CBCT images. The difference in the mean distance from the mesial and distal roots of the second and the third molars to the IANC measured in PR and CBCT images was statistically significant. CONCLUSIONS PR may be uninformative or misleading when measuring the distance from the mesial and distal roots of the second and the third molars to the IANC.

  12. The Reliability of Panoramic Radiography Versus Cone Beam Computed Tomography when Evaluating the Distance to the Alveolar Nerve in the Site of Lateral Teeth

    PubMed Central

    Česaitienė, Gabrielė; Česaitis, Kęstutis; Junevičius, Jonas; Venskutonis, Tadas

    2017-01-01

    Background The aim of this study was to compare the reliability of panoramic radiography (PR) and cone beam computed tomography (CBCT) in the evaluation of the distance of the roots of lateral teeth to the inferior alveolar nerve canal (IANC). Material/Methods 100 PR and 100 CBCT images that met the selection criteria were selected from the database. In PR images, the distances were measured using an electronic caliper with 0.01 mm accuracy and white light x-ray film reviewer. Actual values of the measurements were calculated taking into consideration the magnification used in PR images (130%). Measurements on CBCT images were performed using i-CAT Vision software. Statistical data analysis was performed using R software and applying Welch’s t-test and the Wilcoxon test. Results There was no statistically significant difference in the mean distance from the root of the second premolar and the mesial and distal roots of the first molar to the IANC between PR and CBCT images. The difference in the mean distance from the mesial and distal roots of the second and the third molars to the IANC measured in PR and CBCT images was statistically significant. Conclusions PR may be uninformative or misleading when measuring the distance from the mesial and distal roots of the second and the third molars to the IANC. PMID:28674379

  13. An Extremely Low Mid-infrared Extinction Law toward the Galactic Center and 4% Distance Precision to 55 Classical Cepheids

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodian; Wang, Shu; Deng, Licai; de Grijs, Richard

    2018-06-01

    Distances and extinction values are usually degenerate. To refine the distance to the general Galactic Center region, a carefully determined extinction law (taking into account the prevailing systematic errors) is urgently needed. We collected data for 55 classical Cepheids projected toward the Galactic Center region to derive the near- to mid-infrared extinction law using three different approaches. The relative extinction values obtained are A_J/A_Ks = 3.005, A_H/A_Ks = 1.717, A_[3.6]/A_Ks = 0.478, A_[4.5]/A_Ks = 0.341, A_[5.8]/A_Ks = 0.234, A_[8.0]/A_Ks = 0.321, A_W1/A_Ks = 0.506, and A_W2/A_Ks = 0.340. We also calculated the corresponding systematic errors. Compared with previous work, we report an extremely low and steep mid-infrared extinction law. Using a seven-passband "optimal distance" method, we improve the mean distance precision to our sample of 55 Cepheids to 4%. Based on four confirmed Galactic Center Cepheids, a solar Galactocentric distance of R_0 = 8.10 ± 0.19 ± 0.22 kpc is determined, featuring an uncertainty that is close to the limiting distance accuracy (2.8%) for Galactic Center Cepheids.
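The idea behind a multi-band "optimal distance" fit is that each band λ satisfies m_λ − M_λ = μ + R_λ·A_Ks, where R_λ = A_λ/A_Ks is the extinction ratio, so the distance modulus μ and the extinction A_Ks can be solved jointly by linear least squares. A sketch under that assumption (the extinction ratios are the ones quoted above; the magnitudes are synthetic, not the paper's data):

```python
def fit_distance_modulus(obs_minus_abs, ratios):
    """Least-squares fit of m_lambda - M_lambda = mu + R_lambda * A_Ks.

    obs_minus_abs: per-band apparent-minus-absolute magnitudes.
    ratios: per-band extinction ratios R = A_lambda / A_Ks.
    Returns (mu, A_Ks) from the closed-form 2x2 normal equations.
    """
    n = len(ratios)
    sr = sum(ratios)
    srr = sum(r * r for r in ratios)
    sy = sum(obs_minus_abs)
    sry = sum(r * y for r, y in zip(ratios, obs_minus_abs))
    det = n * srr - sr * sr
    mu = (srr * sy - sr * sry) / det
    a_ks = (n * sry - sr * sy) / det
    return mu, a_ks

# Ratios for J, H, Ks, [3.6], [4.5] (Ks itself has R = 1); synthetic
# magnitudes built with mu = 14.5 mag and A_Ks = 0.8 mag should be recovered.
R = [3.005, 1.717, 1.0, 0.478, 0.341]
y = [14.5 + 0.8 * r for r in R]
mu, a_ks = fit_distance_modulus(y, R)
```

With noisy magnitudes the same normal equations give best-fit values, and the spread of band residuals propagates into the quoted distance precision.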

  14. Auroral and photoelectron fluxes in cometary ionospheres

    NASA Astrophysics Data System (ADS)

    Bhardwaj, A.; Haider, S. A.; Spinghal, R. P.

    1990-05-01

    The analytical yield spectrum method has been used to ascertain photoelectron and auroral electron fluxes in cometary ionospheres, with a view to determining the effects of cometocentric distances, solar zenith angle, and solar minimum and maximum conditions. Auroral electron fluxes are thus calculated for monoenergetic and observed primary electron spectra; auroral electrons are found to make a larger contribution to the observed electron spectrum than EUV-generated photoelectrons. Good agreement is established with extant theoretical works.

  15. Potential energy curves of the Na2+ molecular ion from all-electron ab initio relativistic calculations

    NASA Astrophysics Data System (ADS)

    Bewicz, Anna; Musiał, Monika; Kucharski, Stanisław A.

    2017-11-01

    The equation-of-motion coupled-cluster method for electron affinity calculations has been used to study potential energy curves (PECs) for the Na2+ molecular ion. Although the studied molecule represents an open-shell system, the applied approach employs the closed-shell Na2(2+) ion as the reference. In addition, the Na2(2+) system dissociates into closed-shell fragments; hence, the restricted Hartree-Fock scheme can be used within the whole range of interatomic distances, from 2 to 45 Å. We used a large basis set comprising 268 basis functions, with all 21 electrons correlated. Relativistic effects are included via the second-order Douglas-Kroll method. The computed PECs, spectroscopic molecular constants and vibrational energy levels agree well with experimental values where available, or otherwise with other theoretical data.

  16. Mixed QM/MM molecular electrostatic potentials.

    PubMed

    Hernández, B; Luque, F J; Orozco, M

    2000-05-01

    A new method is presented for the calculation of the Molecular Electrostatic Potential (MEP) in large systems. Based on the mixed Quantum Mechanics/Molecular Mechanics (QM/MM) approach, the method assumes both a quantum and classical description for the molecule, and the calculation of the MEP in the space surrounding the molecule is made using this dual treatment. The MEP at points close to the molecule is computed using a full QM formalism, while a pure classical evaluation of the MEP is used for points located at large distances from the molecule. The algorithm allows the user to select the desired level of accuracy in the MEP, so that the definition of the regions where the MEP is computed at the classical or QM levels is adjusted automatically. The potential use of this QM/MM MEP in molecular modeling studies is discussed.
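The classical branch of such a dual scheme is just the point-charge Coulomb sum; the QM branch requires a wavefunction and is only stubbed out here. A minimal sketch, assuming atomic units and a hypothetical distance cutoff for switching between the two treatments (the paper adjusts this region automatically from the requested accuracy):

```python
import math

def classical_mep(point, charges):
    """Classical MEP (atomic units): sum of q_i / |r - r_i| over point charges."""
    return sum(q / math.dist(point, xyz) for q, xyz in charges)

def mep(point, charges, qm_cutoff=5.0, qm_eval=None):
    """Dual evaluation: QM treatment near the molecule, classical far away.

    qm_eval is a placeholder for the full QM expectation value; only the
    classical branch is implemented in this sketch.
    """
    nearest = min(math.dist(point, xyz) for _, xyz in charges)
    if nearest < qm_cutoff and qm_eval is not None:
        return qm_eval(point)
    return classical_mep(point, charges)

# A toy, water-like point-charge set (hypothetical charges, bohr coordinates).
water = [(-0.8, (0.0, 0.0, 0.0)),
         (0.4, (1.8, 0.0, 0.0)),
         (0.4, (-0.6, 1.7, 0.0))]
v_far = mep((20.0, 0.0, 0.0), water)  # far point: classical branch is used
```

Far from the molecule the point-charge sum converges to the QM value, which is what makes the hybrid evaluation accurate and cheap.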

  17. Predictive landslide susceptibility mapping using spatial information in the Pechabun area of Thailand

    NASA Astrophysics Data System (ADS)

    Oh, Hyun-Joo; Lee, Saro; Chotikasathien, Wisut; Kim, Chang Hwan; Kwon, Ju Hyoung

    2009-04-01

    For predictive landslide susceptibility mapping, this study applied and verified a probability model, the frequency ratio, and a statistical model, logistic regression, at Pechabun, Thailand, using a geographic information system (GIS) and remote sensing. Landslide locations in the study area were identified from interpretation of aerial photographs and field surveys, and maps of the topography, geology and land cover were compiled into a spatial database. The factors that influence landslide occurrence, such as slope gradient, slope aspect, curvature of topography and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were extracted and calculated from the geology database, and land cover was classified from a Landsat TM satellite image. The frequency ratios and logistic regression coefficients were overlaid as each factor's ratings to produce the landslide susceptibility map. The map was then verified by comparison with the existing landslide locations: the frequency ratio model showed 76.39% prediction accuracy and the logistic regression model 70.42%. The method can be used to reduce hazards associated with landslides and to plan land cover.
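The frequency ratio used above has a standard definition: for each class of a factor map, it is the percentage of landslide cells falling in that class divided by the percentage of study-area cells in that class, so FR > 1 marks classes over-represented among landslides. A small sketch with toy slope-gradient classes (the class boundaries and counts are illustrative, not the study's data):

```python
def frequency_ratio(class_cells, class_landslides, total_cells, total_landslides):
    """FR = (% of landslide cells in a factor class) / (% of area in that class)."""
    pct_slides = class_landslides / total_landslides
    pct_area = class_cells / total_cells
    return pct_slides / pct_area

# Toy slope-gradient classes: (cells in class, landslide cells in class).
classes = {"0-15 deg": (6000, 10), "15-30 deg": (3000, 30), ">30 deg": (1000, 60)}
total_c = sum(c for c, _ in classes.values())
total_s = sum(s for _, s in classes.values())
fr = {name: frequency_ratio(c, s, total_c, total_s)
      for name, (c, s) in classes.items()}
```

Summing each cell's FR ratings across all factor maps gives the landslide susceptibility index that is then mapped and verified.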

  18. Reproductive isolation between populations of Iris atropurpurea is associated with ecological differentiation

    PubMed Central

    Yardeni, Gil; Tessler, Naama; Imbert, Eric; Sapir, Yuval

    2016-01-01

    Background and Aims Speciation is often described as a continuous dynamic process, expressed by different magnitudes of reproductive isolation (RI) among groups at different levels of divergence. Studying intraspecific partial RI can shed light on mechanisms underlying processes of population divergence. Intraspecific divergence can be driven by spatially stochastic accumulation of genetic differences following reduced gene flow, resulting in increased RI with increased geographical distance, or by local adaptation, resulting in increased RI with environmental difference. Methods We tested for RI as a function of both geographical distance and ecological differentiation in Iris atropurpurea, an endemic Israeli coastal plant. We crossed plants in the Netanya Iris Reserve population with plants from 14 populations across the species’ full distribution, and calculated RI and reproductive success based on fruit set, seed set and fraction of seed viability. Key Results We found that total RI was not significantly associated with geographical distance, but increased significantly with ecological distance. Similarly, reproductive success of the crosses, estimated while controlling for the dependency of each component on the previous stage, decreased significantly with increased ecological distance. Conclusions Our results indicate that the rise of post-pollination reproductive barriers in I. atropurpurea is more affected by ecological differentiation between populations than by geographical distance, supporting the hypothesis that ecological differentiation is predominant over isolation by distance and by reduced gene flow in this species. These findings also affect conservation management, such as genetic rescue, in the highly fragmented and endangered I. atropurpurea. PMID:27436798

  19. A CT-based software tool for evaluating compensator quality in passively scattered proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zhang, Lifei; Dong, Lei; Sahoo, Narayan; Gillin, Michael T.; Zhu, X. Ronald

    2010-11-01

    We have developed a quantitative computed tomography (CT)-based quality assurance (QA) tool for evaluating the accuracy of manufactured compensators used in passively scattered proton therapy. The thickness of a manufactured compensator was measured from its CT images and compared with the planned thickness defined by the treatment planning system. The difference between the measured and planned thicknesses was calculated with use of the Euclidean distance transformation and the kd-tree search method. Compensator accuracy was evaluated by examining several parameters including mean distance, maximum distance, global thickness error and central axis shifts. Two rectangular phantoms were used to validate the performance of the QA tool. Nine patients and 20 compensators were included in this study. We found that mean distances, global thickness errors and central axis shifts were all within 1 mm for all compensators studied, with maximum distances ranging from 1.1 to 3.8 mm. Although all compensators passed manual verification at selected points, about 5% of the pixels still had maximum distances of >2 mm, most of which correlated with large depth gradients. The correlation between the mean depth gradient of the compensator and the percentage of pixels with mean distance <1 mm is -0.93 with p < 0.001, which suggests that the mean depth gradient is a good indicator of compensator complexity. These results demonstrate that the CT-based compensator QA tool can be used to quantitatively evaluate manufactured compensators.
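The core distance computation above, the nearest planned-surface point for every measured point, can be sketched with a brute-force search; for full CT-resolution surfaces a k-d tree (e.g. scipy.spatial.cKDTree, one plausible counterpart of the paper's kd-tree search) makes the same query scale. The 1D thickness profiles below are toy data:

```python
import math

def surface_distances(measured, planned):
    """Distance from each measured point to its nearest planned point.

    Brute force for clarity; a k-d tree gives the same answers in
    O(n log n) for realistic point counts.
    """
    return [min(math.dist(m, p) for p in planned) for m in measured]

# Toy compensator profiles sampled as (x_mm, thickness_mm) points:
# the manufactured piece is uniformly 0.5 mm too thick.
planned  = [(float(x), 20.0) for x in range(5)]
measured = [(float(x), 20.5) for x in range(5)]

d = surface_distances(measured, planned)
mean_dist = sum(d) / len(d)
max_dist = max(d)
```

Thresholding these per-point distances (e.g. the >2 mm criterion above) flags regions, typically steep-gradient ones, that manual spot checks can miss.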

  20. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: as initial shapes for erosion models, as benchmark shapes for erosion model outputs, or to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? And how can the model quality be evaluated? To answer these questions, a suitable data set was produced: soil was placed on a tray and areas with different roughness structures were formed. For three moisture states (dry, medium, saturated) and two lighting conditions (direct and indirect), sets of high-resolution images were taken at the same camera positions. From the six image sets, 3D point clouds were produced using VisualSfM. Visual inspection of the 3D models showed that all models have areas where holes of different sizes occur, but determining a model's quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points projected onto each grid cell of a plane is counted. This works well for surfaces without vertical structures; along vertical structures, many points are projected onto the same grid cell, so the point density depends more on the shape of the surface than on the quality of the model. Another approach uses the points resulting from Poisson surface reconstruction, one of whose properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses were performed. First, for every Poisson point, the distance to the closest member of the original point cloud was calculated, and histograms of the resulting distances were produced to show the distribution of point distances. Second, as the Poisson points also form a connected mesh, the size and distribution of individual holes can be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number, and the area of the mesh formed by each set of Poisson hole points is then calculated, yielding a set of distinct holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture, and hence on the reflectivity: the distance distribution of the saturated-soil model shows the smallest number of large distances, the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than those from direct lighting for all moisture states.
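The grid-density check described above amounts to binning points by their projected XY cell and counting; empty or sparse cells then mark candidate holes. A minimal sketch with a toy point cloud (cell size and coordinates are arbitrary illustrative values):

```python
from collections import Counter

def grid_density(points, cell=0.1):
    """Project 3D points onto the XY plane and count points per grid cell."""
    counts = Counter()
    for x, y, _z in points:
        counts[(int(x // cell), int(y // cell))] += 1
    return counts

# Toy cloud: a dense 10x10 patch near the origin plus one isolated point;
# any cell missing from the Counter has density zero, i.e. a hole.
cloud = [(0.01 * i, 0.01 * j, 0.0) for i in range(10) for j in range(10)]
cloud += [(0.55, 0.55, 0.2)]
dens = grid_density(cloud, cell=0.1)
```

As the text notes, this projection conflates surface relief with model quality along vertical structures, which is why the Poisson-based nearest-distance analysis is used as a complement.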

  1. Three-dimensional displacement measurement of image point by point-diffraction interferometry

    NASA Astrophysics Data System (ADS)

    He, Xiao; Chen, Lingfeng; Meng, Xiaojie; Yu, Lei

    2018-01-01

    This paper presents a method for measuring the three-dimensional (3-D) displacement of an image point based on point-diffraction interferometry. An object point light source (PLS) interferes with a fixed PLS, and the interferograms are captured at an exit pupil. When the image point of the object PLS is slightly shifted to a new position, the wavefront of the image PLS changes, and its interferograms change accordingly. By processing the interferograms captured before and after the movement, the wavefront difference of the image PLS can be obtained; it contains the information on the 3-D displacement of the image PLS. However, the displacement cannot be calculated until the distance between the image PLS and the exit pupil is calibrated. Therefore, we use a plane-parallel plate with a known refractive index and thickness to determine this distance, based on Snell's law for small angles of incidence. With the distance between the exit pupil and the image PLS known, the 3-D displacement of the image PLS can be calculated from two interference measurements. Preliminary experimental results indicate that the relative error is below 0.3%. With the ability to accurately locate an image point (whether real or virtual), a fiber point light source can act as the reticle by itself in optical measurement.

  2. A Self-Adaptive Capacitive Compensation Technique for Body Channel Communication.

    PubMed

    Mao, Jingna; Yang, Huazhong; Lian, Yong; Zhao, Bo

    2017-10-01

    In wireless body area networks, capacitive-coupling body channel communication (CC-BCC) has the potential to attain better energy efficiency than conventional wireless communication schemes. The CC-BCC scheme utilizes the human body as the forward signal transmission medium, reducing the path loss in wireless body-centric communications. However, the backward path is formed by the coupling capacitance between the ground electrodes (GEs) of the transmitter (Tx) and receiver (Rx), which increases the path loss and results in a body-posture-dependent backward impedance. Conventional methods use a fixed inductor to resonate with the backward capacitance to compensate the path loss, but this is not effective against the variable backward impedance induced by body movements. In this paper, we propose a self-adaptive capacitive compensation (SACC) technique to address this problem. A backward distance detector is introduced to estimate the distance between the GEs of the Tx and Rx, and a backward capacitance model is built to calculate the backward capacitance. The calculated backward capacitance at varying body postures is compensated by a digitally controlled tunable inductor (DCTI). The proposed SACC technique is validated by a prototype CC-BCC system, with measurements taken on human subjects. The results show that a channel enhancement of 9 dB to 16 dB can be achieved at backward path distances of 1 cm to 10 cm.
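The compensation condition itself is simple LC resonance at the carrier: the inductor must satisfy L = 1/((2πf)²·C) for the current estimate of the backward capacitance. A rough sketch using a parallel-plate expression as a stand-in for the paper's backward capacitance model; the electrode area and carrier frequency below are assumptions, not the paper's values:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def backward_capacitance(area_m2, distance_m):
    """Parallel-plate stand-in for the GE-to-GE coupling capacitance."""
    return EPS0 * area_m2 / distance_m

def compensation_inductance(c_farad, f_hz):
    """Inductance resonating out C at the carrier: L = 1/((2*pi*f)^2 * C)."""
    return 1.0 / ((2.0 * math.pi * f_hz) ** 2 * c_farad)

c = backward_capacitance(4e-4, 0.05)   # 2 cm x 2 cm electrodes, 5 cm apart
l = compensation_inductance(c, 40e6)   # assumed 40 MHz carrier
```

Because the estimated distance, and hence C, changes with posture, L must be retuned continuously, which is what the digitally controlled tunable inductor provides.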

  3. Measurement of gas diffusion coefficient in liquid-saturated porous media using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Song, Yongchen; Hao, Min; Zhao, Yuechao; Zhang, Liang

    2014-12-01

    In this study, the dual-chamber pressure decay method and magnetic resonance imaging (MRI) were used to dynamically visualize the gas diffusion process in liquid-saturated porous media, and the concentration-distance relationships for gas diffusing into liquid-saturated porous media at different times were obtained by quantitative analysis of the MR images. A non-iterative finite volume method was successfully applied to calculate the local gas diffusion coefficient in liquid-saturated porous media. The results agreed very well with the conventional pressure decay method, demonstrating that the method is feasible for determining the local diffusion coefficient of gas in liquid-saturated porous media at different times during the diffusion process.
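To illustrate how a diffusion coefficient can be extracted from a concentration-distance point, one can use the textbook semi-infinite solution C(x,t)/C0 = erfc(x/(2√(Dt))) and invert it numerically. This is a simplified stand-in for the study's finite volume inversion, assuming 1D diffusion with a constant-concentration boundary:

```python
import math

def fit_diffusion_coefficient(x_m, t_s, conc_ratio, d_lo=1e-12, d_hi=1e-6):
    """Solve erfc(x / (2*sqrt(D*t))) = C/C0 for D by bisection.

    The residual is monotonically increasing in D (larger D pushes more
    gas to depth x), so bisection on [d_lo, d_hi] converges to the root.
    """
    def residual(d):
        return math.erfc(x_m / (2.0 * math.sqrt(d * t_s))) - conc_ratio
    lo, hi = d_lo, d_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Forward-generate one synthetic data point with D = 2e-9 m^2/s
# (x = 1 cm depth, t = 1 hour), then recover D from it.
D_true = 2e-9
x, t = 0.01, 3600.0
ratio = math.erfc(x / (2.0 * math.sqrt(D_true * t)))
D_est = fit_diffusion_coefficient(x, t, ratio)
```

Fitting many (x, t, C) points from successive MR images, instead of one, gives the time-resolved local coefficients the study reports.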

  4. Determination of celestial bodies orbits and probabilities of their collisions with the Earth

    NASA Astrophysics Data System (ADS)

    Medvedev, Yuri; Vavilov, Dmitrii

    In this work we have developed a universal method to determine the orbits of small bodies in the Solar System. The method considers different planes of the body's motion and selects the most appropriate one. Given an orbital plane, we can calculate the geocentric distances at the times of observation and consequently determine all orbital elements. Another technique proposed here addresses the problem of estimating the probability of collisions of celestial bodies with the Earth. This technique uses a coordinate system associated with the nominal osculating orbit. We compared the proposed technique with Monte Carlo simulation. The results of the two methods agree satisfactorily, while the proposed method is advantageous in time performance.

  5. The Comparison Study of Quadratic Infinite Beam Program on Optimization Instensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® In The Case of Lung Cancer

    NASA Astrophysics Data System (ADS)

    Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.

    2016-08-01

    This research compares a quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. The treatment plans used 9 and 13 beams, a beam energy of 6 MV, and a Source Skin Distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used in a comparison between the Gauss primary threshold method and the Gauss primary exponential scatter method; in the lung cancer case, threshold values of 0.01 and 0.004 were used. The output dose distributions were analyzed in the form of DVHs from CERR. With the exponential dose calculation method and 9 beams, the maximum dose distributions were obtained in the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin. With the threshold method and 13 beams, the maximum dose distributions were obtained in the PTV, GTV, heart, and skin.

  6. Neutron track length estimator for GATE Monte Carlo dose calculation in radiotherapy.

    PubMed

    Elazhar, H; Deschler, T; Létang, J M; Nourreddine, A; Arbor, N

    2018-06-20

    The out-of-field dose in radiation therapy is a growing concern with regard to late side effects and secondary cancer induction. In high-energy x-ray therapy, the secondary neutrons generated through photonuclear reactions in the accelerator are part of this secondary dose. The neutron dose is currently not estimated by the treatment planning system, although it appears to be preponderant at distances greater than 50 cm from the isocenter. Monte Carlo simulation has become the gold standard for accurately calculating the neutron dose under specific treatment conditions, but the method is also known for its slow statistical convergence, which makes it difficult to use on a clinical basis. The neutron track length estimator, a neutron variance reduction technique inspired by the track length estimator method, has thus been developed for the first time in the Monte Carlo code GATE to allow fast computation of the neutron dose in radiotherapy. The details of its implementation, as well as a comparison of its performance against the analog MC method, are presented here. Speed-up factors of 15 to 400 can be obtained with our method, with a mean difference in the dose calculation of about 1% compared with the analog MC method.
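The variance-reduction idea behind a track-length estimator can be shown in a deliberately tiny model, not GATE's implementation: in a purely absorbing 1D medium with a pencil source, both the track-length estimator (score the chord each history leaves in the tally region) and the analog collision estimator (score 1/Σ per collision in the region) estimate the same flux integral, but the track-length score is continuous and converges faster:

```python
import math
import random

def flux_estimates(sigma_t, a, b, n, seed=1):
    """Track-length vs collision flux estimators over [a, b].

    Pencil source at x = 0 in a purely absorbing medium with total cross
    section sigma_t; each history flies a single exponential free path.
    """
    rng = random.Random(seed)
    tl, col = 0.0, 0.0
    for _ in range(n):
        d = rng.expovariate(sigma_t)      # free flight to absorption
        tl += max(0.0, min(d, b) - a)     # chord length inside [a, b]
        if a <= d <= b:                   # absorption occurred inside [a, b]
            col += 1.0 / sigma_t
    return tl / n, col / n

sigma = 1.0
# Analytic value of the flux integral: (exp(-sigma*a) - exp(-sigma*b)) / sigma.
exact = (math.exp(-sigma * 0.5) - math.exp(-sigma * 1.0)) / sigma
tl_est, col_est = flux_estimates(sigma, 0.5, 1.0, 200_000)
```

Both estimators agree with the analytic value; in low-density tally regions (few collisions), the track-length estimator's advantage grows, which is the regime of out-of-field neutron dose.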

  7. Simulation of therapeutic electron beam tracking through a non-uniform magnetic field using finite element method

    PubMed Central

    Tahmasebibirgani, Mohammad Javad; Maskani, Reza; Behrooz, Mohammad Ali; Zabihzadeh, Mansour; Shahbazian, Hojatollah; Fatahiasl, Jafar; Chegeni, Nahid

    2017-01-01

    Introduction In radiotherapy, megaelectron volt (MeV) electrons are employed for the treatment of superficial cancers. Magnetic fields can be used for deflection and deformation of the electron flow, and a non-uniform magnetic field can be produced with permanent magnets. The primary electrons are neither mono-energetic nor completely parallel, so calculation of electron beam deflection requires complex mathematical methods. In this study, a device was made to apply a magnetic field to an electron beam, and the path of the electrons in the magnetic field was simulated using the finite element method. Methods A mini-applicator equipped with two neodymium permanent magnets was designed that enables tuning of the distance between the magnets. This device was placed in a standard applicator of a Varian 2100 CD linear accelerator. The mini-applicator was simulated in the CST Studio finite element software. The deflection angle and displacement of the electron beam were calculated after passing through the magnetic field. By setting a 2 to 5 cm distance between the two poles, various intensities of transverse magnetic field were created. The accelerator head was turned so that the deflected electrons became perpendicular to the water surface. To measure the displacement of the electron beam, EBT2 GafChromic films were employed. After exposure, the films were scanned using an HP G3010 reflection scanner and their optical density was extracted using a program written in the MATLAB environment. The measured displacement of the electron beam was compared with the simulation results after applying the magnetic field. Results The simulation results for the magnetic field showed good agreement with measured values. The maximum deflection angle, for a 12 MeV beam, was 32.9°; the minimum deflection, for 15 MeV, was 12.1°. Measurement with the film confirmed the precision of the simulation in predicting the displacement of the electron beam. Conclusion A magnetic mini-applicator was made and simulated using the finite element method. The deflection angle and displacement of the electron beam were calculated. With the method used in this study, a good prediction of the path of high-energy electrons was made before they entered the body. PMID:28607652
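    A back-of-the-envelope check on the physics in this record: for a relativistic electron crossing a region of uniform transverse field, the deflection follows from the gyroradius r = pc/(qB). The sketch below assumes a uniform field of hypothetical strength and extent (the paper's field is non-uniform, which is why it needed finite element simulation):

    ```python
    import math

    M_E = 0.511   # electron rest energy (MeV)

    def deflection_angle_deg(e_kin_mev, b_tesla, path_len_m):
        """Deflection of a relativistic electron crossing a uniform transverse
        magnetic field region (field strength and length are hypothetical)."""
        e_total = e_kin_mev + M_E
        pc = math.sqrt(e_total**2 - M_E**2)       # momentum times c, in MeV
        r = pc / (299.792458 * b_tesla)           # gyroradius in metres
        return math.degrees(math.asin(min(1.0, path_len_m / r)))

    a12 = deflection_angle_deg(12.0, 0.5, 0.05)   # 12 MeV beam
    a15 = deflection_angle_deg(15.0, 0.5, 0.05)   # 15 MeV beam bends less
    ```

    The qualitative trend matches the abstract: the higher-energy beam is deflected less at the same field strength, because its momentum (and hence gyroradius) is larger.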

  8. Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1992-01-01

    Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of Runge-Kutta (RK) methods and compact difference schemes for calculating the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on evaluations of the dissipative error. The objectives are to reduce the numerical damping and, at the same time, preserve numerical stability. While this approach has had tremendous success for steady flows, the numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, the phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes, which relate the phase velocity and wave number, may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersive correlations of various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, criteria are derived for three- and four-step RK methods to be third- and fourth-order time accurate for non-linear equations, e.g., the flow equations. These criteria are then applied to commonly used RK methods, such as Jameson's 3-step and 4-step schemes and Wray's algorithm, to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for the Fourier analyses. The performance of the numerical methods is shown by numerical examples.
These examples are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations coupled with the characteristic equations as the boundary condition is discussed in detail. Finally, the single vortex calculation is extended to simulate vortex pairing. For distances between the two vortices less than a threshold value, numerical results show crisp resolution of the vortex merging.
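    The Fourier (dispersion) analysis described above compares a scheme's modified wavenumber against the exact relation k'h = kh. A short sketch for the standard 2nd-order central difference versus the classical 4th-order tridiagonal compact (Padé) scheme, whose modified-wavenumber formulas are textbook results:

    ```python
    import numpy as np

    kh = np.linspace(0.01, np.pi - 0.01, 400)   # scaled wavenumber k*h

    # 2nd-order explicit central difference: f'_i ~ (f_{i+1} - f_{i-1}) / (2h).
    kh_central = np.sin(kh)

    # Classical 4th-order tridiagonal compact (Pade) scheme:
    #   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4h).
    kh_compact = 1.5 * np.sin(kh) / (1.0 + 0.5 * np.cos(kh))

    # Dispersive (phase) error relative to the exact relation k'h = kh.
    err_central = np.abs(kh_central - kh)
    err_compact = np.abs(kh_compact - kh)
    ```

    The compact scheme tracks the exact line much further into the high-wavenumber range, which is exactly the "spectral-like accuracy" the abstract refers to; the same style of analysis applied to the RK stages yields the temporal dispersion characteristics.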

  9. Emitter location errors in electronic recognition system

    NASA Astrophysics Data System (ADS)

    Matuszewski, Jan; Dikta, Anna

    2017-04-01

    The paper describes some of the problems associated with emitter location calculations, the most important part of the series of tasks in electronic recognition systems. The basic tasks include: detection of electromagnetic signal emissions, tracking (determining the direction of emitter sources), signal analysis to classify different emitter types, and identification of emission sources of the same type. The paper presents a brief description of the main emitter localization methods and the basic mathematical formulae for calculating emitter location. Location errors were estimated for three different methods and for different deployment scenarios of emitters and direction finding (DF) sensors in the electromagnetic environment. The emitter location was computed using a special computer program. On the basis of extensive numerical calculations, the precision of emitter location in recognition systems was evaluated for different configurations of the bearing devices and the emitter. The calculations, made on simulated data for the different location methods, are presented in figures and tables. The obtained results demonstrate that the precision of the calculated emitter location depends on: the number of DF sensors, the distances between the emitter and the DF sensors, their mutual positions in the reconnaissance area, and bearing errors. The accuracy also varies with the number of obtained bearings: the higher the number of bearings, the better the accuracy of the calculated emitter location, in spite of relatively high bearing errors for each DF sensor.
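    A common formulation of the bearings-only localization the record discusses is a least-squares intersection of bearing lines: each DF sensor contributes one linear constraint, and more bearings over-determine (and stabilize) the solution. A sketch under simplified assumptions (bearings as math angles from the +x axis, no noise; not the paper's exact formulation):

    ```python
    import numpy as np

    def locate_emitter(sensors, bearings_rad):
        """Least-squares intersection of bearing lines from DF sensors."""
        A, b = [], []
        for (sx, sy), th in zip(sensors, bearings_rad):
            n = np.array([-np.sin(th), np.cos(th)])   # normal to bearing line
            A.append(n)
            b.append(n @ np.array([sx, sy]))          # line: n . x = n . s
        x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return x

    # Synthetic check: an emitter at (10, 5) seen from three DF sensors.
    true_pos = np.array([10.0, 5.0])
    sensors = [(0.0, 0.0), (20.0, 0.0), (0.0, 15.0)]
    bearings = [np.arctan2(true_pos[1] - sy, true_pos[0] - sx)
                for sx, sy in sensors]
    est = locate_emitter(sensors, bearings)
    ```

    With noisy bearings the same least-squares system yields the error behaviour the abstract reports: accuracy improves with more sensors and with favourable sensor-emitter geometry.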

  10. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
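    The convolution/superposition principle underlying this record can be illustrated in one dimension: dose is the convolution of the TERMA distribution with an energy deposition kernel. A minimal numpy sketch with a hypothetical forward-peaked kernel (all coefficients invented for illustration; real C/S uses 3D polyenergetic kernels and density scaling):

    ```python
    import numpy as np

    dx = 0.5                                  # voxel size (cm)
    depth = np.arange(0, 30, dx)              # 60 voxels along the beam axis
    mu = 0.05                                 # toy attenuation coefficient (1/cm)
    terma = np.exp(-mu * depth)               # total energy released per mass

    # Hypothetical point-spread kernel: energy is deposited preferentially
    # downstream (offsets >= 0) of the interaction site.
    offsets = np.arange(-10, 11) * dx
    kernel = np.exp(-np.abs(offsets)) * np.where(offsets >= 0, 1.0, 0.4)
    kernel /= kernel.sum()                    # conserve deposited energy

    dose = np.convolve(terma, kernel, mode="same")   # superpose kernels
    ```

    Even this toy reproduces the characteristic build-up region: the dose maximum sits below the surface because the forward-peaked kernel carries energy downstream of each interaction.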

  11. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) a slant optical axis (misalignment between the camera's optical axis and the object surface) and (2) out-of-plane motions (translations and rotations) of the specimen. These introduce measurement errors into 2D DIC results, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method is proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step determines the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers at known positions) is mounted on the specimen to track its motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using a coordinate transform algorithm; (3) three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations were carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method gives good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The method has also been applied in tensile experiments to obtain high-accuracy results.

  12. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †

    PubMed Central

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-01-01

    An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC’s interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006

  13. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil.

    PubMed

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-03-13

    An innovative array of magnetic coils (the discrete Rogowski coil-RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.

  14. How's the Flu Getting Through? Landscape genetics suggests both humans and birds spread H5N1 in Egypt.

    PubMed

    Young, Sean G; Carrel, Margaret; Kitchen, Andrew; Malanson, George P; Tamerius, James; Ali, Mohamad; Kayali, Ghazi

    2017-04-01

    First introduced to Egypt in 2006, H5N1 highly pathogenic avian influenza has resulted in the death of millions of birds and caused over 350 infections and at least 117 deaths in humans. After a decade of viral circulation, outbreaks continue to occur and diffusion mechanisms between poultry farms remain unclear. Using landscape genetics techniques, we identify the distance models most strongly correlated with the genetic relatedness of the viruses, suggesting the most likely methods of viral diffusion within Egyptian poultry. Using 73 viral genetic sequences obtained from infected birds throughout northern Egypt between 2009 and 2015, we calculated the genetic dissimilarity between H5N1 viruses for all eight gene segments. Spatial correlation was evaluated using Mantel tests and correlograms and multiple regression of distance matrices within causal modeling and relative support frameworks. These tests examine spatial patterns of genetic relatedness, and compare different models of distance. Four models were evaluated: Euclidean distance, road network distance, road network distance via intervening markets, and a least-cost path model designed to approximate wild waterbird travel using niche modeling and circuit theory. Samples from backyard farms were most strongly correlated with least cost path distances. Samples from commercial farms were most strongly correlated with road network distances. Results were largely consistent across gene segments. Results suggest wild birds play an important role in viral diffusion between backyard farms, while commercial farms experience human-mediated diffusion. These results can inform avian influenza surveillance and intervention strategies in Egypt. Copyright © 2017 Elsevier B.V. All rights reserved.
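    The core statistical tool named in this record, the Mantel test, correlates two distance matrices and assesses significance by permuting one of them. A self-contained sketch on synthetic data (the study's actual matrices were genetic dissimilarity versus Euclidean, road-network, market, and least-cost-path distances):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def mantel(d1, d2, n_perm=999):
        """Permutation Mantel test between two symmetric distance matrices."""
        iu = np.triu_indices_from(d1, k=1)          # upper-triangle pairs
        r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
        n, count = d1.shape[0], 0
        for _ in range(n_perm):
            p = rng.permutation(n)                  # permute rows and columns
            r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
            if r >= r_obs:
                count += 1
        return r_obs, (count + 1) / (n_perm + 1)    # one-sided p-value

    # Synthetic demo: genetic distance ~ geographic distance plus noise.
    pts = rng.uniform(0, 100, size=(15, 2))
    geo = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    gen = geo + rng.normal(0, 5, size=geo.shape)
    gen = (gen + gen.T) / 2                         # keep the matrix symmetric
    np.fill_diagonal(gen, 0)
    r, p = mantel(geo, gen)
    ```

    In the landscape-genetics framing, whichever distance model (road network, least-cost path, ...) yields the strongest Mantel correlation with genetic dissimilarity is taken as the best-supported diffusion mechanism.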

  15. The ideal subject distance for passport pictures.

    PubMed

    Verhoff, Marcel A; Witzel, Carsten; Kreutz, Kerstin; Ramsthaler, Frank

    2008-07-04

    In an age of global combat against terrorism, the recognition and identification of people on document images is of increasing significance. Experiments and calculations have shown that the camera-to-subject distance - not the focal length of the lens - can have a significant effect on facial proportions. Modern passport pictures should be able to function as reference images for automatic and manual picture comparisons, which requires a defined subject distance. It has been unclear which subject distance, in the taking of passport photographs, is ideal for recognition of the actual person. We show here that the camera-to-subject distance perceived as ideal depends on the face being photographed, even though a distance of 2 m was most frequently preferred. So far, the problem of the ideal camera-to-subject distance for faces has only been approached through technical calculations; we have, for the first time, answered this question experimentally with a double-blind experiment. Even if there is apparently no ideal camera-to-subject distance valid for every face, 2 m can be proposed as ideal for the taking of passport pictures. A first step would be to specify a standard camera-to-subject distance for the taking of passport pictures. From an anthropological point of view, it would be interesting to find out which facial features lead to the preference of a shorter camera-to-subject distance and which lead to the preference of a longer one.
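    The distance effect on facial proportions follows directly from pinhole projection: features closer to the camera project proportionally larger, so the relative size of the nose versus the face plane changes with camera distance. A toy sketch with entirely hypothetical head dimensions:

    ```python
    def projected_width(true_width, depth_offset, camera_distance):
        """Pinhole-camera image width (up to a constant focal-length scale)."""
        return true_width / (camera_distance + depth_offset)

    def nose_to_face_ratio(camera_distance_m):
        # Hypothetical geometry: nose tip 0.10 m in front of the face plane.
        nose = projected_width(0.04, 0.00, camera_distance_m)   # nose at front
        face = projected_width(0.15, 0.10, camera_distance_m)   # plane behind
        return nose / face

    ratio_close = nose_to_face_ratio(0.5)   # close-up portrait
    ratio_far = nose_to_face_ratio(2.0)     # the proposed passport distance
    ```

    At close range the nose appears relatively enlarged; as the distance grows the ratio approaches the true 0.04/0.15, which is why a standardized distance such as 2 m matters for comparability.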

  16. WE-E-18A-03: How Accurately Can the Peak Skin Dose in Fluoroscopy Be Determined Using Indirect Dose Metrics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, A; Pasciak, A

    Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine if PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from the procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared, along with RAK, to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of the uncertainty associated with film dosimetry. Factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: ‘extreme’ SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology.
Film dosimetry is likely an unnecessary effort for these types of procedures.
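    An indirect PSD estimate of the kind described typically scales the reference air kerma by an inverse-square correction from the reference point to the skin plane, plus correction factors. The sketch below uses entirely hypothetical distances and factors and is for illustration only, not clinical dosimetry:

    ```python
    def estimate_psd(rak_gy, ref_distance_cm=100.0, spd_cm=75.0,
                     table_factor=0.8, backscatter_tissue=1.3):
        """Rough indirect peak-skin-dose estimate from reference air kerma:
        inverse-square correction from the RAK reference distance to the skin,
        times hypothetical table-attenuation and tissue-backscatter factors."""
        inverse_square = (ref_distance_cm / spd_cm) ** 2
        return rak_gy * inverse_square * table_factor * backscatter_tissue

    psd = estimate_psd(3.0)   # 3 Gy reference air kerma
    ```

    The study's finding that a single SPD suffices for ±50% accuracy reflects how forgiving the inverse-square term is, except in the 'extreme' SPD cases it flags.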

  17. A discrete search algorithm for finding the structure of protein backbones and side chains.

    PubMed

    Sallaume, Silas; Martins, Simone de Lima; Ochi, Luiz Satoru; Da Silva, Warley Gramacho; Lavor, Carlile; Liberti, Leo

    2013-01-01

    Some information about protein structure can be obtained by using Nuclear Magnetic Resonance (NMR) techniques, but they provide only a sparse set of distances between atoms in a protein. The Molecular Distance Geometry Problem (MDGP) consists in determining the three-dimensional structure of a molecule using a set of known distances between some atoms. Recently, a Branch and Prune (BP) algorithm was proposed to calculate the backbone of a protein, based on a discrete formulation for the MDGP. We present an extension of the BP algorithm that can calculate not only the protein backbone, but the whole three-dimensional structure of proteins.
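    The discretization underlying BP is that an atom at known distances from three already-placed atoms lies at one of exactly two mirror-image positions; the tree of such binary choices is then pruned against remaining distances. A sketch of that two-candidate step via trilateration in a local frame (a generic construction, not the authors' exact code):

    ```python
    import numpy as np

    def two_candidates(a, b, c, da, db, dc):
        """Two mirror positions of an atom at distances da, db, dc from
        non-collinear anchor atoms a, b, c."""
        ex = (b - a) / np.linalg.norm(b - a)        # local orthonormal frame
        t = c - a
        ey = t - (t @ ex) * ex
        ey /= np.linalg.norm(ey)
        ez = np.cross(ex, ey)
        d = np.linalg.norm(b - a)
        i, j = (c - a) @ ex, (c - a) @ ey
        x = (da**2 - db**2 + d**2) / (2 * d)        # standard trilateration
        y = (da**2 - dc**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
        z = np.sqrt(max(da**2 - x**2 - y**2, 0.0))  # the binary branch: +/- z
        base = a + x * ex + y * ey
        return base + z * ez, base - z * ez

    # Check: recover a known point from its distances to three anchors.
    a, b, c = np.array([0., 0., 0.]), np.array([1.5, 0., 0.]), np.array([0.7, 1.2, 0.])
    p_true = np.array([0.4, 0.5, 1.1])
    da, db, dc = (np.linalg.norm(p_true - q) for q in (a, b, c))
    p1, p2 = two_candidates(a, b, c, da, db, dc)
    ```

    BP explores both branches at every atom and prunes any branch that violates an additional known distance, which is what makes the search over an exponentially large tree tractable in practice.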

  18. Shape Sensing Using a Multi-Core Optical Fiber Having an Arbitrary Initial Shape in the Presence of Extrinsic Forces

    NASA Technical Reports Server (NTRS)

    Rogge, Matthew D. (Inventor); Moore, Jason P. (Inventor)

    2014-01-01

    Shape of a multi-core optical fiber is determined by positioning the fiber in an arbitrary initial shape and measuring strain over the fiber's length using strain sensors. A three-coordinate p-vector is defined for each core as a function of the distance of the corresponding cores from a center point of the fiber and a bending angle of the cores. The method includes calculating, via a controller, an applied strain value of the fiber using the p-vector and the measured strain for each core, and calculating strain due to bending as a function of the measured and the applied strain values. Additionally, an apparent local curvature vector is defined for each core as a function of the calculated strain due to bending. Curvature and bend direction are calculated using the apparent local curvature vector, and fiber shape is determined via the controller using the calculated curvature and bend direction.
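    The key geometric fact the method exploits is that bending strain in an off-axis core is proportional to curvature times the core's offset, modulated by the core's angular position relative to the bend plane. A simplified sketch for three outer cores 120° apart (geometry and numbers hypothetical; the patented method also handles the common axial strain and arbitrary initial shapes):

    ```python
    import math

    R_CORE = 35e-6                              # core offset from axis (m)
    ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]   # three outer cores

    def bend_from_strains(strains):
        """Recover curvature magnitude and bend direction from three
        outer-core strains; the common axial strain cancels when the
        120-degree cores are combined."""
        cx = sum(e * math.cos(th) for e, th in zip(strains, ANGLES)) * 2 / 3
        cy = sum(e * math.sin(th) for e, th in zip(strains, ANGLES)) * 2 / 3
        kappa = math.hypot(cx, cy) / R_CORE     # curvature (1/m)
        phi = math.atan2(cy, cx)                # bend direction (rad)
        return kappa, phi

    # Synthetic strains: curvature 2.0 1/m at 30 deg, plus common axial strain.
    k_true, phi_true, axial = 2.0, math.radians(30), 1e-4
    eps = [axial + k_true * R_CORE * math.cos(th - phi_true) for th in ANGLES]
    kappa, phi = bend_from_strains(eps)
    ```

    Integrating the recovered curvature and bend direction along the fiber length is what reconstructs the full 3D shape.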

  19. Geographic Information System and tools of spatial analysis in a pneumococcal vaccine trial

    PubMed Central

    2012-01-01

    Background The goal of this Geographic Information System (GIS) study was to obtain accurate information on the locations of study subjects, the road network and services for research purposes, so that the clinical outcomes of interest (e.g., vaccine efficacy, burden of disease, nasopharyngeal colonization and its reduction) could be linked to and analyzed by distance from health centers, hospitals, doctors and other important services. The location information can also be used to investigate crowding, herd immunity and/or transmission patterns more accurately. Method A randomized, placebo-controlled, double-blind trial of an 11-valent pneumococcal conjugate vaccine (11PCV) was conducted in Bohol Province in the central Philippines, from July 2000 to December 2004. We collected the geographic locations of the households (N = 13,208) of study subjects. We also collected a total of 1982 locations of health and other services in the six municipalities and comprehensive GIS data on the road network in the area. Results We calculated the numbers of other study subjects (vaccine and placebo recipients, respectively) within the neighborhood of each study subject. We calculated distances to different services and identified the subjects sharing the same services (determined by distance). This article shows how to collect a complete GIS data set for a human-to-human transmitted vaccine trial in developing-country settings in an efficient and economical way. Conclusions The collection of geographic locations in intervention trials should become a routine task. The results of public health research may depend strongly on spatial relationships among the study subjects and between the study subjects and the environment, both natural and infrastructural. Trial registration number ISRCTN: ISRCTN62323832 PMID:22264271
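    The household-to-service distance calculations described here reduce, in the simplest case, to great-circle distances to the nearest facility. A sketch with the haversine formula and invented coordinates (the study also used road-network distances, which require a graph shortest-path computation instead):

    ```python
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points in kilometres."""
        r = 6371.0                      # mean Earth radius (km)
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = p2 - p1
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def nearest_service_km(household, services):
        """Straight-line distance to the closest service point."""
        return min(haversine_km(*household, *s) for s in services)

    # Hypothetical coordinates loosely in the Bohol region.
    household = (9.85, 124.14)
    health_centers = [(9.90, 124.20), (9.65, 124.37), (9.95, 123.86)]
    d = nearest_service_km(household, health_centers)
    ```

    Repeating this for all 13,208 households against the 1982 service locations is what yields the per-subject distance covariates used in the outcome analyses.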

  20. Local stellar kinematics from RAVE data—VIII. Effects of the Galactic disc perturbations on stellar orbits of red clump stars

    NASA Astrophysics Data System (ADS)

    Önal Taş, Ö.; Bilir, S.; Plevne, O.

    2018-02-01

    We aim to probe the dynamical structure of the extended Solar neighborhood by calculating the radial metallicity gradients from orbit properties, obtained for axisymmetric and non-axisymmetric potential models, of red clump (RC) stars selected from the RAdial Velocity Experiment's Fourth Data Release. Distances are obtained by assuming a single near-infrared absolute magnitude, M_{Ks}=-1.54±0.04 mag, for each RC star. Stellar orbit parameters are calculated using two potential models: (i) the MWPotential2014 potential and (ii) the same potential with perturbation functions for the Galactic bar and transient spiral arms. Stellar ages are calculated with a method based on Bayesian statistics. The radial metallicity gradients are evaluated as a function of the maximum vertical distance (z_{max}) from the Galactic plane and the planar eccentricity (ep) of the RC stars for both potential models. The largest radial metallicity gradient in the 0< z_{max} ≤0.5 kpc distance interval is -0.065±0.005 dex kpc^{-1} for the subsample with ep≤0.1, while the lowest value is -0.014±0.006 dex kpc^{-1} for the subsample with ep≤0.5. We find that at z_{max}>1 kpc, the radial metallicity gradients have zero or positive values and do not depend on the ep subsamples. There is a large radial metallicity gradient for the thin disc, but no radial gradient is found for the thick disc. Moreover, the largest radial metallicity gradients are obtained where the outer Lindblad resonance region is effective. We claim that this apparent change in the radial metallicity gradients of the thin disc is a result of orbital perturbations originating from the existing resonance regions.
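    Once distances and metallicities are in hand, a radial metallicity gradient like the -0.065 dex/kpc quoted above is just the slope of a linear fit of [Fe/H] against Galactocentric radius for a given subsample. A sketch on synthetic data with an assumed input gradient (numbers invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic red-clump sample: Galactocentric radius (kpc) and [Fe/H] (dex)
    # drawn from an assumed gradient of -0.06 dex/kpc with 0.1 dex scatter.
    r_gc = rng.uniform(6.0, 10.0, 500)
    feh = -0.06 * (r_gc - 8.0) + rng.normal(0.0, 0.1, r_gc.size)

    slope, intercept = np.polyfit(r_gc, feh, 1)   # d[Fe/H]/dR in dex/kpc
    ```

    The paper's analysis repeats this fit in bins of z_max and eccentricity, which is how the flattening of the gradient away from the plane (and for high-ep subsamples) is detected.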

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleury, Pierre; Uzan, Jean-Philippe; Larena, Julien, E-mail: fleury@iap.fr, E-mail: j.larena@ru.ac.za, E-mail: uzan@iap.fr

    On the scale of the light beams subtended by small sources, e.g. supernovae, matter cannot be accurately described as a fluid, which questions the applicability of standard cosmic lensing to those cases. In this article, we propose a new formalism to deal with small-scale lensing as a diffusion process: the Sachs and Jacobi equations governing the propagation of narrow light beams are treated as Langevin equations. We derive the associated Fokker-Planck-Kolmogorov equations, and use them to deduce general analytical results on the mean and dispersion of the angular distance. This formalism is applied to random Einstein-Straus Swiss-cheese models, allowing us to: (1) show an explicit example of the involved calculations; (2) check the validity of the method against both ray-tracing simulations and direct numerical integration of the Langevin equation. As a byproduct, we obtain a post-Kantowski-Dyer-Roeder approximation, accounting for the effect of tidal distortions on the angular distance, in excellent agreement with numerical results. Besides, the dispersion of the angular distance is correctly reproduced in some regimes.

  2. The theory of stochastic cosmological lensing

    NASA Astrophysics Data System (ADS)

    Fleury, Pierre; Larena, Julien; Uzan, Jean-Philippe

    2015-11-01

    On the scale of the light beams subtended by small sources, e.g. supernovae, matter cannot be accurately described as a fluid, which questions the applicability of standard cosmic lensing to those cases. In this article, we propose a new formalism to deal with small-scale lensing as a diffusion process: the Sachs and Jacobi equations governing the propagation of narrow light beams are treated as Langevin equations. We derive the associated Fokker-Planck-Kolmogorov equations, and use them to deduce general analytical results on the mean and dispersion of the angular distance. This formalism is applied to random Einstein-Straus Swiss-cheese models, allowing us to: (1) show an explicit example of the involved calculations; (2) check the validity of the method against both ray-tracing simulations and direct numerical integration of the Langevin equation. As a byproduct, we obtain a post-Kantowski-Dyer-Roeder approximation, accounting for the effect of tidal distortions on the angular distance, in excellent agreement with numerical results. Besides, the dispersion of the angular distance is correctly reproduced in some regimes.

  3. Accuracy of digital models generated by conventional impression/plaster-model methods and intraoral scanning.

    PubMed

    Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru

    2018-04-17

    We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups using a total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy compared to the AI and SI groups. Intraoral scans may be more accurate compared to scans of conventional impression/plaster models.

  4. Local Elastic Constants for Epoxy-Nanotube Composites from Molecular Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Frankland, S. J. V.; Gates, T. S.

    2007-01-01

    A method based on molecular dynamics simulation is developed for determining local elastic constants of an epoxy/nanotube composite. The local values of the C11, C33, K12, and K13 elastic constants are calculated for an epoxy/nanotube composite as a function of radial distance from the nanotube. While the results possess a significant amount of statistical uncertainty, arising from both the numerical analysis and the molecular fluctuations during the simulation, the following observations can be made. If the size of the region around the nanotube is increased from shells of 1 to 6 in thickness, the scatter in the data is reduced enough to observe trends. All the elastic constants determined are at a minimum 20 from the center of the nanotube. C11, C33, and K12 follow similar trends as a function of radial distance from the nanotube. K13 decreases at greater distances from the nanotube and becomes negative, which may be an artifact of the statistical averaging.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buta, R.; de Vaucouleurs, G.

    The diameters d_r of inner ring structures in disk galaxies are used as geometric distance indicators to derive the distances of 453 spiral and lenticular galaxies, mainly in the distance interval 4 < Δ < 63 Mpc. The diameters are weighted means from the catalogs of Kormendy, of Pedreros and Madore, and of the authors. The distances are calculated by means of the two- and three-parameter formulae of Paper II; the adopted mean distance moduli μ_0(r) have mean errors from all sources of 0.6-0.7 mag for the well-observed galaxies.
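    The distance moduli quoted above convert to distances via μ_0 = 5 log10(d / 10 pc). A minimal sketch of that conversion (not tied to the paper's two- and three-parameter formulae) also shows what a 0.6-0.7 mag modulus error implies fractionally in distance:

```python
def distance_from_modulus(mu0):
    """Distance in Mpc from a true distance modulus mu0 = 5*log10(d/10 pc)."""
    d_pc = 10 ** (mu0 / 5 + 1)
    return d_pc / 1e6

# A modulus error of ~0.65 mag corresponds to roughly a 35% distance error,
# since d scales as 10**(mu0/5).
frac_error = 10 ** (0.65 / 5) - 1
```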

  6. Preliminary exploration of the measurement of walking speed for the apoplectic people based on UHF RFID.

    PubMed

    Huang Hua-Lin; Mo Ling-Fei; Liu Ying-Jie; Li Cheng-Yang; Xu Qi-Meng; Wu Zhi-Tong

    2015-08-01

    The number of apoplectic people is increasing as population aging quickens its pace. Precise measurement of walking speed is very important for the rehabilitation guidance of apoplectic patients. The precision of traditional speed-measurement methods such as the stopwatch is relatively low, and high-precision measurement instruments cannot be used widely because of their high cost. Moreover, these methods have difficulty measuring the walking speed of apoplectic patients accurately. A UHF RFID tag has the advantages of small volume, low price, long reading distance, etc., and as a wearable sensor it is suitable for measuring walking speed accurately for apoplectic patients. To measure human walking speed, this paper uses four reader antennas spaced at a fixed distance to read the signal strength of an RFID tag. Because an RFID tag has a different RSSI (Received Signal Strength Indicator) at different distances from the reader, this paper studies the changes of RSSI over time in order to calculate walking speed. The verification results show that precise measurement of walking speed can be realized by a signal-processing method based on Gaussian Fitting-Kalman Filter. Depending on the variance of walking speed, doctors can predict the rehabilitation training results of apoplectic patients and give appropriate rehabilitation guidance.
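    The paper's Gaussian Fitting-Kalman Filter pipeline on raw RSSI is not reproduced here. As a simplified sketch: if the time of peak RSSI at each antenna is taken as the moment the tag passes closest to it, walking speed is the least-squares slope of antenna position versus peak time. All numbers below are illustrative.

```python
def estimate_walking_speed(peak_times, antenna_positions):
    """Least-squares slope of antenna position (m) vs. peak-RSSI time (s).

    peak_times[i] is when RSSI peaks at antenna i, i.e. when the tag passes
    closest to it; antenna_positions[i] is its position along the path.
    """
    n = len(peak_times)
    mt = sum(peak_times) / n
    mx = sum(antenna_positions) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(peak_times, antenna_positions))
    den = sum((t - mt) ** 2 for t in peak_times)
    return num / den

# Four antennas 1 m apart; hypothetical peak-RSSI times after smoothing.
times = [0.0, 1.25, 2.5, 3.75]
positions = [0.0, 1.0, 2.0, 3.0]
speed = estimate_walking_speed(times, positions)  # 0.8 m/s
```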

  7. Stopping Distances: An Excellent Example of Empirical Modelling.

    ERIC Educational Resources Information Center

    Lawson, D. A.; Tabor, J. H.

    2001-01-01

    Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…

  8. Nucleon form factors in dispersively improved chiral effective field theory: Scalar form factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon Soriano, Jose Manuel; Weiss, Christian

    We propose a method for calculating the nucleon form factors (FFs) of $G$-parity-even operators by combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The FFs are expressed as dispersive integrals over the two-pion cut at $t > 4 M_\pi^2$. The spectral functions are obtained from the elastic unitarity condition and expressed as products of the complex $\pi\pi \rightarrow N\bar N$ partial-wave amplitudes and the timelike pion FF. $\chi$EFT is used to calculate the ratio of the partial-wave amplitudes and the pion FF, which is real and free of $\pi\pi$ rescattering in the $t$-channel ($N/D$ method). The rescattering effects are then incorporated by multiplying with the squared modulus of the empirical pion FF. The procedure results in a marked improvement over conventional $\chi$EFT calculations of the spectral functions. We apply the method to the nucleon scalar FF and compute the scalar spectral function, the scalar radius, the $t$-dependent FF, and the Cheng-Dashen discrepancy. Higher-order chiral corrections are estimated through the $\pi N$ low-energy constants. Results are in excellent agreement with dispersion-theoretical calculations. We elaborate several other interesting aspects of our method. The results show proper scaling behavior in the large-$N_c$ limit of QCD because the $\chi$EFT includes $N$ and $\Delta$ intermediate states. The squared modulus of the timelike pion FF required by our method can be extracted from Lattice QCD calculations of vacuum correlation functions of the operator at large Euclidean distances. Our method can be applied to the nucleon FFs of other operators of interest, such as the isovector-vector current, the energy-momentum tensor, and twist-2 QCD operators (moments of generalized parton distributions).

  9. Nucleon form factors in dispersively improved chiral effective field theory: Scalar form factor

    DOE PAGES

    Alarcon Soriano, Jose Manuel; Weiss, Christian

    2017-11-20

    We propose a method for calculating the nucleon form factors (FFs) of $G$-parity-even operators by combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The FFs are expressed as dispersive integrals over the two-pion cut at $t > 4 M_\pi^2$. The spectral functions are obtained from the elastic unitarity condition and expressed as products of the complex $\pi\pi \rightarrow N\bar N$ partial-wave amplitudes and the timelike pion FF. $\chi$EFT is used to calculate the ratio of the partial-wave amplitudes and the pion FF, which is real and free of $\pi\pi$ rescattering in the $t$-channel ($N/D$ method). The rescattering effects are then incorporated by multiplying with the squared modulus of the empirical pion FF. The procedure results in a marked improvement over conventional $\chi$EFT calculations of the spectral functions. We apply the method to the nucleon scalar FF and compute the scalar spectral function, the scalar radius, the $t$-dependent FF, and the Cheng-Dashen discrepancy. Higher-order chiral corrections are estimated through the $\pi N$ low-energy constants. Results are in excellent agreement with dispersion-theoretical calculations. We elaborate several other interesting aspects of our method. The results show proper scaling behavior in the large-$N_c$ limit of QCD because the $\chi$EFT includes $N$ and $\Delta$ intermediate states. The squared modulus of the timelike pion FF required by our method can be extracted from Lattice QCD calculations of vacuum correlation functions of the operator at large Euclidean distances. Our method can be applied to the nucleon FFs of other operators of interest, such as the isovector-vector current, the energy-momentum tensor, and twist-2 QCD operators (moments of generalized parton distributions).

  10. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient approach is travel time calculation in a 1D velocity model: for a given source, receiver depth and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to account for differentiating local and regional structures, and whenever possible the travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, in either Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
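    pySeismicFMM itself is not shown here; as a hedged illustration of the technique it implements, the following is a minimal first-order Fast Marching solver for travel times on a regular 2D grid with uniform spacing (pure Python, no geographic coordinates or 3D support).

```python
import heapq

def fmm_travel_time(slowness, h=1.0, src=(0, 0)):
    """First-order fast marching travel times on a regular 2D grid.

    slowness[i][j] = 1/velocity in each cell; h = grid spacing.
    Returns a grid of travel times T from the source cell src.
    """
    ni, nj = len(slowness), len(slowness[0])
    INF = float("inf")
    T = [[INF] * nj for _ in range(ni)]
    T[src[0]][src[1]] = 0.0
    done = [[False] * nj for _ in range(ni)]
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i][j]:
            continue
        done[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < ni and 0 <= jj < nj and not done[ii][jj]:
                a = min(T[ii - 1][jj] if ii > 0 else INF,
                        T[ii + 1][jj] if ii < ni - 1 else INF)
                b = min(T[ii][jj - 1] if jj > 0 else INF,
                        T[ii][jj + 1] if jj < nj - 1 else INF)
                sh = slowness[ii][jj] * h
                if a < INF and b < INF and abs(a - b) < sh:
                    # Both upwind neighbours contribute: solve the quadratic
                    # from the upwind discretisation of |grad T| = slowness.
                    t_new = 0.5 * (a + b + (2 * sh * sh - (a - b) ** 2) ** 0.5)
                else:
                    t_new = min(a, b) + sh
                if t_new < T[ii][jj]:
                    T[ii][jj] = t_new
                    heapq.heappush(heap, (t_new, (ii, jj)))
    return T
```

    In a uniform medium the computed time along a grid axis equals the exact Euclidean travel time, while diagonal directions are slightly overestimated by the first-order scheme, which is why production codes refine the grid or use higher-order updates.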

  11. Glue detection based on teaching points constraint and tracking model of pixel convolution

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen

    2018-01-01

    On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from shadow stripes. Teaching points are utilized to calculate the slope between two adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions using a distance, which is the height of the rectangular region. Pixel convolution along the motion direction is proposed to extract the edges of the glue in each local rectangular region. A dataset with different illuminations and stripes of varying shape complexity, comprising 500 thousand images captured from the camera of a glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.

  12. Influence des interactions entre écrans de soutènement sur le calcul de la butée

    NASA Astrophysics Data System (ADS)

    Magnan, Jean-Pierre; Meyer, Grégory

    2018-05-01

    Mobilization of the passive earth pressure in front of a retaining wall involves a significant volume of soil, over a distance greater than the embedded depth and depending on the parameters of the analysis. This article reviews the calculation methods used to evaluate the passive pressure, with emphasis on the distance needed for the passive-pressure mechanism to develop freely. It then assesses, in several ways, the effect of the interaction between two walls facing each other on either side of an excavation. The recommended method for calculating the mobilizable passive pressure is a finite-element analysis with reduced values of the shear-strength parameters in the zone where the passive pressure will develop. This approach makes it possible to determine corrective factors to apply to the passive-pressure calculation for an isolated wall, as a function of the ratio of the distance between the walls to their embedded depth.

  13. THE AXISYMMETRIC FREE-CONVECTION HEAT TRANSFER ALONG A VERTICAL THIN CYLINDER WITH CONSTANT SURFACE TEMPERATURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viskanta, R.

    1963-01-01

    Laminar free-convection flow produced by a heated, vertical, circular cylinder whose outer-surface temperature is assumed to be uniform is analyzed. The solution of the boundary-layer equations was obtained by the perturbation method of Sparrow and Gregg, which is valid only for small values of the axial distance parameter xi, and by the integral method of Hama et al. for large values of the parameter xi. Heat-transfer results were calculated for a range of Prandtl numbers (Pr): the Nusselt numbers (Nu) for the cylinder were higher than those for the flat plate, and this difference increased as Pr decreased. It was also found that the perturbation method of solution of the free-convection boundary-layer equations becomes useless for small values of Pr because of the slow convergence of the series. The results obtained by the integral method were in good agreement with those calculated by the perturbation method only for Pr approximately 1 and 0.1 < xi < 1; they deviated considerably for smaller values of xi.

  14. Theoretical investigation of the weak interaction between graphene and alcohol solvents

    NASA Astrophysics Data System (ADS)

    Wang, Haining; Chen, Sian; Lu, Shanfu; Xiang, Yan

    2017-05-01

    The dispersion of graphene in five different alcohol solvents was investigated by evaluating the binding energy between graphene and alcohol molecules using the DFT-D method. The calculation showed that the most stable binding energy appeared at a distance of ∼3.5 Å between graphene and the alcohol molecules, and that the binding energy increased linearly as the alcohol changed from methanol to 1-pentanol. The weak interaction was further graphically illustrated using the reduced density gradient method. The theoretical study revealed that alcohols with more carbon atoms could be a good starting point for screening suitable solvents for graphene dispersion.

  15. Optical ranging and communication method based on all-phase FFT

    NASA Astrophysics Data System (ADS)

    Li, Zening; Chen, Gang

    2014-10-01

    This paper describes an optical ranging and communication method based on the all-phase fast Fourier transform (FFT). The system is mainly designed for vehicle-safety applications. In particular, the phase shift of the reflected orthogonal frequency division multiplexing (OFDM) symbol is measured to determine the signal's time of flight, and the distance is then calculated from the time of flight. Several key factors affecting the phase-measurement accuracy are studied. The all-phase FFT, which can reduce the effects of frequency offset, phase noise and inter-carrier interference (ICI), is applied to measure the OFDM symbol phase shift.
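    The all-phase FFT itself is not reproduced here. As a simplified single-tone sketch of the phase-shift ranging principle, the phase delay of a reflected tone (measured via a single-bin DFT over an integer number of periods) converts to a round-trip time of flight and hence a distance; all signal parameters are illustrative.

```python
import cmath
import math

def tone_phase(samples, f, fs):
    """Phase (rad) of frequency f in a sampled signal via a single-bin DFT."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
              for k, s in enumerate(samples))
    return cmath.phase(acc)

fs, f, c = 1e6, 10e3, 3e8               # sample rate, tone frequency, light speed
tau = 2 * 150.0 / c                     # round-trip delay for a 150 m target
n = 1000                                # exactly 10 tone periods at fs
tx = [math.cos(2 * math.pi * f * k / fs) for k in range(n)]
rx = [math.cos(2 * math.pi * f * (k / fs - tau)) for k in range(n)]

dphi = (tone_phase(tx, f, fs) - tone_phase(rx, f, fs)) % (2 * math.pi)
distance = c * dphi / (2 * math.pi * f) / 2   # unambiguous below c / (2 f)
```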

  16. Unsteady aerodynamic characterization of a military aircraft in vertical gusts

    NASA Technical Reports Server (NTRS)

    Lebozec, A.; Cocquerez, J. L.

    1985-01-01

    The effects of 2.5-m/sec vertical gusts on the flight characteristics of a 1:8.6 scale model of a Mirage 2000 aircraft in free flight at 35 m/sec over a distance of 30 m are investigated. The wind-tunnel setup and instrumentation are described; the impulse-response and local-coefficient-identification analysis methods applied are discussed in detail; and the modification and calibration of the gust-detection probes are reviewed. The results are presented in graphs, and good general agreement is obtained between model calculations using the two analysis methods and the experimental measurements.

  17. System and method for knowledge based matching of users in a network

    DOEpatents

    Verspoor, Cornelia Maria [Santa Fe, NM; Sims, Benjamin Hayden [Los Alamos, NM; Ambrosiano, John Joseph [Los Alamos, NM; Cleland, Timothy James [Los Alamos, NM

    2011-04-26

    A knowledge-based system and methods for matchmaking and social-network extension are disclosed. The system is configured to allow users to specify knowledge profiles, which are collections of concepts indicating a certain topic or area of interest, selected from an underlying knowledge model. The system utilizes the knowledge model as the semantic space within which to compare similarities in user interests. The knowledge model is hierarchical, so that indications of interest in specific concepts automatically imply interest in more general concepts. Similarity measures between profiles may then be calculated based on suitable distance formulas within this space.
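    The patent does not fix a particular distance formula; as one illustrative possibility, each profile can be closed under the concept hierarchy (interest in a specific concept implies its ancestors) and compared with a Jaccard similarity. The hierarchy and profiles below are hypothetical.

```python
def expand(profile, parent):
    """Close a set of concepts under the is-a hierarchy: interest in a
    specific concept implies interest in all of its ancestors."""
    closed = set()
    for concept in profile:
        while concept is not None:
            closed.add(concept)
            concept = parent.get(concept)
    return closed

def profile_similarity(p1, p2, parent):
    """Jaccard similarity of hierarchy-expanded profiles (1.0 = identical)."""
    a, b = expand(p1, parent), expand(p2, parent)
    return len(a & b) / len(a | b)

# Hypothetical toy hierarchy, child -> parent.
parent = {"python": "programming", "haskell": "programming",
          "programming": "computing", "networking": "computing"}
# Two users with different specific interests still share general ancestors.
sim = profile_similarity({"python"}, {"haskell"}, parent)  # 0.5
```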

  18. Dose Calculation For Accidental Release Of Radioactive Cloud Passing Over Jeddah

    NASA Astrophysics Data System (ADS)

    Alharbi, N. D.; Mayhoub, A. B.

    2011-12-01

    For the evaluation of doses after a reactor accident, in particular the inhalation dose, a thorough knowledge of the concentrations of the various radionuclides in air during the passage of the plume is required. In this paper we present an application of the Gaussian Plume Model (GPM) to calculate the atmospheric dispersion and airborne radionuclide concentrations resulting from a radioactive cloud over the city of Jeddah (KSA). The radioactive cloud is assumed to be emitted from a reactor of 10 MW power in a postulated accidental release. Committed effective doses (CEDs) to the public at different source-to-receptor distances are calculated. The calculations were based on meteorological conditions and data for the Jeddah site: the Pasquill atmospheric stability class is B, and the wind speed is 2.4 m/s at 10 m height in the N direction. The residence times of some radionuclides considered in this study were calculated. The results indicate that the dose values first increase with distance, reach a maximum and then gradually decrease. The total dose received by humans is estimated using the estimated residence times of each radioactive pollutant at different distances.
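    The rise-then-fall of dose with distance follows directly from the Gaussian plume formula for an elevated release. A hedged sketch of the ground-level centerline concentration, using illustrative Briggs open-country class-B dispersion coefficients rather than the paper's site-specific parameters:

```python
import math

def ground_level_concentration(Q, u, x, H):
    """Centerline ground-level concentration from the GPM with ground
    reflection: C = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / 2 sigma_z^2).

    Q: release rate (Bq/s), u: wind speed (m/s), x: downwind distance (m),
    H: effective release height (m). Briggs rural class-B sigmas are used
    here as illustrative dispersion parameters.
    """
    sigma_y = 0.16 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.12 * x
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H * H / (2 * sigma_z * sigma_z)))

# Near the stack the plume has not reached the ground; far away it is
# diluted, so the ground-level value peaks at an intermediate distance.
conc = [ground_level_concentration(1e10, 2.4, x, 50.0) for x in (100, 300, 3000)]
```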

  19. Automobile Stopping Distances.

    ERIC Educational Resources Information Center

    Logue, L. J.

    1979-01-01

    Discusses the effect of vehicle mass on stopping distances. Analyzes an example of a sample vehicle and tire, and calculates the braking acceleration showing the effect of different factors on the stopping performance of the tires. (GA)
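    The mass-independence discussed above follows because both the braking force and the inertia scale with vehicle mass, so mass cancels in the braking distance v^2 / (2 mu g). A minimal worked sketch with an assumed friction coefficient and driver reaction time (illustrative values, not from the article):

```python
def stopping_distance(v_kmh, mu=0.7, reaction_s=1.5, g=9.81):
    """Total stopping distance (m) = reaction distance + braking distance.

    Braking distance v**2 / (2 * mu * g) is independent of vehicle mass.
    mu and reaction_s are assumed illustrative values.
    """
    v = v_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v * v / (2 * mu * g)

d50 = stopping_distance(50)    # ~35 m at 50 km/h
d100 = stopping_distance(100)  # braking portion quadruples when speed doubles
```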

  20. VizieR Online Data Catalog: 25 parsec local white dwarf population (Holberg+, 2016)

    NASA Astrophysics Data System (ADS)

    Holberg, J. B.; Oswalt, T. D.; Sion, E. M.; McCook, G. P.

    2018-02-01

    Table 1 presents the basic properties of the 232 WDs in the LS25, identified by WD number and alternate name. Existing multiband photometry for each star in our LS25 sample is listed in Table 2. Table 3 provides the adopted distances, calculated from the trigonometric parallaxes (see Table 1) or the photometric distances calculated from the adopted Teff and log g photometry in Table 2. (3 data files).
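    Trigonometric parallax converts to distance as d(pc) = 1 / p(arcsec); a one-line sketch of the conversion used for such distance tables:

```python
def parallax_distance_pc(parallax_mas):
    """Distance in parsecs from a trigonometric parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

# A white dwarf at the 25 pc survey boundary has a 40 mas parallax.
d = parallax_distance_pc(40.0)  # 25.0 pc
```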
