A New Computational Method to Fit the Weighted Euclidean Distance Model.
ERIC Educational Resources Information Center
De Leeuw, Jan; Pruzansky, Sandra
1978-01-01
A computational method for weighted Euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)
A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial similarity between a 3D model and a 3D point cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the distance from the model to the point cloud (DistMC) are used to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud; similarly, the distance from the point cloud to the model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To avoid the heavy computational burden that the calculation of DistCM imposes on some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measures, which capture only global similarity, our method measures partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate its superiority on both synthetic data and laser scanning data.
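The distance term at the heart of this measure can be sketched as a nearest-neighbour average; the function names, the uniform weighting, and the use of raw surface area below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def dist_model_to_cloud(model_pts, cloud_pts):
    """DistMC-style term: mean distance from points sampled on the model
    to their nearest cloud points (uniform weights assumed here)."""
    return (sum(min(math.dist(p, q) for q in cloud_pts) for p in model_pts)
            / len(model_pts))

def sim_mc(surface_area, model_pts, cloud_pts):
    """Similarity as the ratio of model surface area to DistMC."""
    return surface_area / dist_model_to_cloud(model_pts, cloud_pts)
```

A larger surface area or a smaller model-to-cloud distance both raise the similarity score, matching the ratio described in the abstract.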
A new statistical distance scale for planetary nebulae
NASA Astrophysics Data System (ADS)
Ali, Alaa; Ismail, H. A.; Alsolami, Z.
2015-05-01
In the first part of this article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspection of the data set indicates that the accuracy of the distances varies among the individual methods and also among the sources where the same individual method was applied. We therefore derive a reliable weighted mean distance for each object by accounting for the distance error and the weight of each individual method. The results reveal that the individual methods are consistent with each other, except for the gravity method, which yields larger distances than the other methods. From the initial data set, we construct a standard calibrating sample of 82 objects. This sample is restricted to objects with distances determined by at least two different individual methods, except for a few objects with trusted distances from the trigonometric, spectroscopic, and cluster-membership methods. In addition to its well-determined distances, this sample offers several advantages over those used in prior distance scales. It is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ~30% is estimated for the new distance scale. The new scale is compared with the most widely used statistical scales in the literature; the results show that it agrees with the majority of them to within ~±20%. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, in good agreement with the recent measurement of Malkin (2013).
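The error-weighted mean distance described above can be sketched as a standard inverse-variance average (the paper additionally weights by method; this minimal version assumes per-estimate errors only):

```python
def weighted_mean_distance(distances, errors):
    """Inverse-variance weighted mean of distance estimates and its
    formal uncertainty: each estimate is weighted by 1/sigma^2."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * d for w, d in zip(weights, distances)) / wsum
    sigma = (1.0 / wsum) ** 0.5
    return mean, sigma
```

With equal errors this reduces to the plain mean; a more precise estimate pulls the weighted mean toward itself.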
High Performance Automatic Character Skinning Based on Projection Distance
NASA Astrophysics Data System (ADS)
Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui
2018-03-01
Skeleton-driven deformation methods are commonly used for character deformation. Painting skin weights for character deformation is a long-winded task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and its corresponding skeleton. The method first assigns each mesh vertex of the 3D human model to a skeleton bone by the minimum distance from the vertex to each bone, and then calculates each vertex's weights to the adjacent bones from the distance between the vertex's projection point and the bone joints. The output of our method can be applied not only to any kind of skeleton-driven deformation but also to motion-capture-driven (mocap-driven) deformation. Experimental results show that our method has strong generality and robustness as well as high performance.
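The projection step can be sketched as follows; the helper names and the linear joint-weight split are illustrative assumptions, not the paper's exact formulas:

```python
def project_t(p, a, b):
    """Parameter t in [0, 1] of the projection of vertex p onto bone
    segment a-b (0 at joint a, 1 at joint b), clamped to the segment."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    t = sum(x * y for x, y in zip(ap, ab)) / sum(x * x for x in ab)
    return min(1.0, max(0.0, t))

def joint_weights(p, a, b):
    """Split a vertex's weight between a bone's two joints according to
    where its projection point falls along the bone."""
    t = project_t(p, a, b)
    return 1.0 - t, t
```

A vertex projecting near joint `a` is weighted mostly to that joint, which is the intuition behind the projection-distance weighting.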
A novel three-stage distance-based consensus ranking method
NASA Astrophysics Data System (ADS)
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
Gu, Dongxiao; Liang, Changyong; Zhao, Huimin
2017-03-01
We present the implementation and application of a case-based reasoning (CBR) system for breast cancer related diagnoses. By retrieving similar cases in a breast cancer decision support system, oncologists can obtain powerful information or knowledge, complementing their own experiential knowledge, in their medical decision making. We observed two problems in applying standard CBR to this context: the abundance of different types of attributes and the difficulty in eliciting appropriate attribute weights from human experts. We therefore used a distance measure named weighted heterogeneous value distance metric, which can better deal with both continuous and discrete attributes simultaneously than the standard Euclidean distance, and a genetic algorithm for learning the attribute weights involved in this distance measure automatically. We evaluated our CBR system in two case studies, related to benign/malignant tumor prediction and secondary cancer prediction, respectively. Weighted heterogeneous value distance metric with genetic algorithm for weight learning outperformed several alternative attribute matching methods and several classification methods by at least 3.4%, reaching 0.938, 0.883, 0.933, and 0.984 in the first case study, and 0.927, 0.842, 0.939, and 0.989 in the second case study, in terms of accuracy, sensitivity×specificity, F measure, and area under the receiver operating characteristic curve, respectively. The evaluation result indicates the potential of CBR in the breast cancer diagnosis domain. Copyright © 2017 Elsevier B.V. All rights reserved.
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also propose a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we used one year of data from Tasmania's South Esk hydrology model developed by CSIRO. Root mean squared error was used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation in environmental modelling applications. PMID:27509497
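For reference, the conventional IDW baseline that DDW is compared against can be sketched as follows (parameter names are ours):

```python
import math

def idw(sample_pts, sample_vals, target, power=2):
    """Inverse distance weighting: interpolate the value at `target` as a
    1/d^power weighted average of the sampled values."""
    num = den = 0.0
    for p, v in zip(sample_pts, sample_vals):
        d = math.dist(p, target)
        if d == 0.0:          # exact hit: return the sample value itself
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

The `power` exponent controls locality: higher values make nearby samples dominate the estimate.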
Li, Lian-Hui; Mo, Rong
2015-01-01
The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization performs poorly and is difficult to apply. A production task queue optimization method based on multi-attribute evaluation is proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine judge importance degrees, and a trapezoid fuzzy scale-rough AHP that accounts for judge importance is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated with a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study illustrates the method's correctness and feasibility. PMID:26414758
An iterative method for obtaining the optimum lightning location on a spherical surface
NASA Technical Reports Server (NTRS)
Chao, Gao; Qiming, MA
1991-01-01
A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DF's) on a spherical surface. An improvement of this method, which takes the distance of source-DF's as a constant, is presented. It is pointed out that using a weight factor of signal strength is not the most ideal method because of the inexact inverse signal strength-distance relation and the inaccurate signal amplitude. An iterative calculation method is presented using the distance from the source to the DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4DF system are presented to show the improvement of location through use of the iterative method.
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
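The MTP calculation reduces to a proportion-weighted sum of path lengths; a minimal sketch with hypothetical variable names:

```python
def mtp_distance(path_lengths, path_users, group_size):
    """Multiple-travel-paths DTD: each path's length is weighted by the
    proportion of group members that used it, then the weighted
    distances are summed."""
    return sum(length * (n / group_size)
               for length, n in zip(path_lengths, path_users))
```

If the whole group uses a single path, this collapses to that path's length; when the group splits, longer side paths raise the estimate above what a single central path would give.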
A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation
NASA Astrophysics Data System (ADS)
Suryowati, K.; Bekti, R. D.; Faradila, A.
2018-04-01
Spatial autocorrelation is a spatial analysis method used to identify patterns of relationship or correlation between locations. It is important for characterizing the dispersal patterns of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. According to the type of spatial data, weighting matrices can be divided into two types: point type (distance) and neighbourhood area (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses first-order queen contiguity weights, second-order queen contiguity weights, and inverse distance weights. First-order queen contiguity and inverse distance weights show significant spatial autocorrelation in DHF, but second-order queen contiguity does not. The first- and second-order queen contiguity matrices contain 68 and 86 neighbour links, respectively.
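Given a weights matrix, the spatial autocorrelation statistic commonly used in such studies is Moran's I; a minimal sketch assuming a dense row-by-row weights matrix:

```python
def morans_i(x, W):
    """Moran's I spatial autocorrelation of values x under weights matrix W:
    (n / S0) * sum_ij(W_ij * dev_i * dev_j) / sum_i(dev_i^2)."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    s0 = sum(sum(row) for row in W)        # total weight
    num = sum(W[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)
```

The same function accepts a binary contiguity matrix (as in the queen-contiguity case) or an inverse-distance matrix; only the entries of `W` change.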
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Gu, X; Lu, W
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more value on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE estimates a DE error derived from the surface distance differences between the candidate and the estimated ground truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shifts. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV-expansion voxel distance, lower MADs than those of OS were observed at the closer distances from 1 to 8. The DE results showed that the segmentation from WS was more accurate than that from OS. The mean DE errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy over OS. The WS will provide a dosimetrically important label selection strategy in multi-atlas segmentation. CPRIT grant RP150485.
Cardiovascular responses to static exercise in distance runners and weight lifters
NASA Technical Reports Server (NTRS)
Longhurst, J. C.; Kelly, A. R.; Gonyea, W. J.; Mitchell, J. H.
1980-01-01
Three groups of athletes including long-distance runners, competitive and amateur weight lifters, and age- and sex-matched control subjects have been studied by hemodynamic and echocardiographic methods in order to determine the effect of the training programs on the cardiovascular response to static exercise. Blood pressure, heart rate, and double product data at rest and at fatigue suggest that competitive endurance (dynamic exercise) training alters the cardiovascular response to static exercise. In contrast to endurance exercise, weight lifting (static exercise) training does not alter the cardiovascular response to static exercise: weight lifters responded to static exercise in a manner very similar to that of the control subjects.
Amirpour Haredasht, Sara; Polson, Dale; Main, Rodger; Lee, Kyuyoung; Holtkamp, Derald; Martínez-López, Beatriz
2017-06-07
Porcine reproductive and respiratory syndrome (PRRS) is one of the most economically devastating infectious diseases for the swine industry. A better understanding of the disease dynamics and the transmission pathways under diverse epidemiological scenarios is key for successful PRRS control and elimination in endemic settings. In this paper we used a two-step parameter-driven (PD) Bayesian approach to model the spatio-temporal dynamics of PRRS and predict the PRRS status of farms in subsequent time periods in an endemic setting in the US. For this purpose we used information from a production system with 124 pig sites that reported 237 PRRS cases from 2012 to 2015 and for which the pig trade network and the geographical locations of farms (distance was used as a proxy for airborne transmission) were available. We estimated five PD models with different weights, namely: (i) a geographical distance weight containing the inverse distance between each pair of farms in kilometers, (ii) a pig trade weight (PT_ji) containing the absolute number of pig movements between each pair of farms, (iii) the product of the distance weight and the standardized relative pig trade weight, (iv) the product of the standardized distance weight and the standardized relative pig trade weight, and (v) the product of the distance weight and the pig trade weight. The model that included the pig trade weight matrix provided the best fit for modelling the dynamics of PRRS cases on a 6-month basis from 2012 to 2015 and was able to predict PRRS outbreaks in the subsequent time period with an area under the ROC curve (AUC) of 0.88 and an accuracy of 85% (105/124). The results of this study reinforce the importance of pig trade in PRRS transmission in the US. The methods and results of this study may be easily adapted to any production system to characterize PRRS dynamics under diverse epidemic settings and more timely support decision-making.
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, applying it to large, modern leaf databases has long been hindered by computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage applies a Jaccard distance based weighted sparse representation based classification (WSRC), which aims to approximately represent the test sample in the training space and classify it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, the method is expected to overcome the high computational and memory costs of traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC-based plant recognition methods in terms of both accuracy and computational speed.
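The coarse first stage can be sketched as Jaccard-distance screening of candidate classes; the function names and the representation of samples as feature sets are our assumptions:

```python
def jaccard_distance(a, b):
    """Jaccard distance on feature sets: 1 - |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    union = len(a | b)
    return (1.0 - len(a & b) / union) if union else 0.0

def coarse_candidates(test_feats, train, k=2):
    """Stage 1: rank training samples (label, feature-set) pairs by Jaccard
    distance to the test sample and keep the first k distinct classes."""
    classes = []
    for label, _ in sorted(train, key=lambda s: jaccard_distance(test_feats, s[1])):
        if label not in classes:
            classes.append(label)
        if len(classes) == k:
            break
    return classes
```

Only the surviving candidate classes are passed to the more expensive sparse-representation stage, which is where the speedup comes from.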
Classification of Company Performance using Weighted Probabilistic Neural Network
NASA Astrophysics Data System (ADS)
Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi
2018-05-01
The performance of a company can be judged by looking at its financial status, whether in a good or bad state. Classification of company performance can be achieved by several approaches, either parametric or non-parametric. The neural network is one of the non-parametric methods, and one Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN). A PNN consists of four layers: an input layer, a pattern layer, a summation layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. This study uses a PNN modified in the weighting process between the pattern layer and the summation layer by involving the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model achieves a very high accuracy, reaching 100%.
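A pattern-layer activation using a Mahalanobis-weighted Gaussian kernel can be sketched as follows; the reduction to a diagonal (per-feature inverse variance) weighting is our simplification, not the paper's full covariance formulation:

```python
import math

def wpnn_classify(x, classes, inv_var, sigma=1.0):
    """Pattern and summation layers of a weighted PNN: a Gaussian kernel
    on a diagonal Mahalanobis distance; the class with the highest mean
    activation wins. `inv_var` holds per-feature inverse variances."""
    best, best_score = None, -1.0
    for label, patterns in classes.items():
        total = 0.0
        for p in patterns:
            d2 = sum(iv * (xi - pi) ** 2
                     for xi, pi, iv in zip(x, p, inv_var))
            total += math.exp(-d2 / (2.0 * sigma ** 2))
        score = total / len(patterns)
        if score > best_score:
            best, best_score = label, score
    return best
```

Replacing the plain squared Euclidean distance with the variance-scaled one is the essential change between PNN and this WPNN-style model.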
Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X
2012-03-14
We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measures, five mango progenies (25 genotypes in total) were analyzed using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measures were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measures were high and significant (rs ≥ 0.91; P < 0.001), except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measures and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measure and analysis method influences the results.
Missing value imputation for gene expression data by tailored nearest neighbors.
Faisal, Shahla; Tutz, Gerhard
2017-04-25
High dimensional data like gene expression and RNA-sequences often contain missing values. The subsequent analysis and results based on these incomplete data can suffer strongly from the presence of these missing values. Several approaches to imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed over the genes that are apt to contribute to the accuracy of the imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs when local methods such as nearest neighbors are applied in high dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
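The core idea, imputing from the nearest complete neighbours with distance weights computed only on the target's observed entries, can be sketched as follows (the paper's gene-selection step is omitted, and the function names are ours):

```python
import math

def wknn_impute(target, neighbors, k=2):
    """Fill None entries in `target` with a distance-weighted average over
    the k nearest complete neighbor rows; distances use only the
    target's observed features."""
    obs = [i for i, v in enumerate(target) if v is not None]

    def dist(row):
        return math.sqrt(sum((target[i] - row[i]) ** 2 for i in obs))

    nearest = sorted(neighbors, key=dist)[:k]
    weights = [1.0 / (dist(r) + 1e-8) for r in nearest]  # closer = heavier
    filled = list(target)
    for i, v in enumerate(target):
        if v is None:
            filled[i] = (sum(w * r[i] for w, r in zip(weights, nearest))
                         / sum(weights))
    return filled
```

Restricting the distance to a subset of informative features, as the paper proposes, would simply shrink the `obs` index list.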
Tree-average distances on certain phylogenetic networks have their weights uniquely determined.
Willson, Stephen J
2012-01-01
A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier samples and unstable across repeated runs, a center-point initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and data samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm. Experimental results on UCI data sets show that the algorithm has good stability and practicality.
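The initialization idea can be sketched as follows; here density is the reciprocal of the plain (unweighted) mean distance to the other samples, a simplification of the weighted average used in the paper:

```python
import math

def density(points, i):
    """Density of sample i: reciprocal of its mean distance to the others."""
    d = [math.dist(points[i], p) for j, p in enumerate(points) if j != i]
    return 1.0 / (sum(d) / len(d))

def init_centers(points, k):
    """Greedily pick initial centers that are both dense and far from the
    centers already chosen (density times distance-to-nearest-center)."""
    dens = [density(points, i) for i in range(len(points))]
    centers = [points[max(range(len(points)), key=lambda i: dens[i])]]
    while len(centers) < k:
        def score(i):
            return dens[i] * min(math.dist(points[i], c) for c in centers)
        centers.append(points[max(range(len(points)), key=score)])
    return centers
```

Because an outlier has low density, it scores poorly even though it is far from every chosen center, which is exactly the instability the method is designed to avoid.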
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches to image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability of each atlas improving the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. The experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more strongly the more the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluating the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would additionally shorten the acquisition time by a factor of 8.
Improving record linkage performance in the presence of missing linkage data.
Ong, Toan C; Mannino, Michael V; Schilling, Lisa M; Kahn, Michael G
2014-12-01
Existing record linkage methods do not handle missing linking field values in an efficient and effective manner. The objective of this study is to investigate three novel methods for improving the accuracy and efficiency of record linkage when record linkage fields have missing values. By extending the Fellegi-Sunter scoring implementations available in the open-source Fine-grained Record Linkage (FRIL) software system we developed three novel methods to solve the missing data problem in record linkage, which we refer to as: Weight Redistribution, Distance Imputation, and Linkage Expansion. Weight Redistribution removes fields with missing data from the set of quasi-identifiers and redistributes the weight from the missing attribute based on relative proportions across the remaining available linkage fields. Distance Imputation imputes the distance between the missing data fields rather than imputing the missing data value. Linkage Expansion adds previously considered non-linkage fields to the linkage field set to compensate for the missing information in a linkage field. We tested the linkage methods using simulated data sets with varying field value corruption rates. The methods developed had sensitivity ranging from .895 to .992 and positive predictive values (PPV) ranging from .865 to 1 in data sets with low corruption rates. Increased corruption rates lead to decreased sensitivity for all methods. These new record linkage algorithms show promise in terms of accuracy and efficiency and may be valuable for combining large data sets at the patient level to support biomedical and clinical research. Copyright © 2014 Elsevier Inc. All rights reserved.
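The Weight Redistribution step can be sketched as follows; the function name and the proportional redistribution rule are our reading of the description above, not code from the FRIL system:

```python
def redistribute_weights(weights, present):
    """Drop linkage fields with missing values and redistribute their weight
    across the remaining fields in proportion to the remaining weights."""
    kept = {f: w for f, w in weights.items() if f in present}
    total_kept = sum(kept.values())
    missing_mass = sum(weights.values()) - total_kept
    return {f: w + missing_mass * (w / total_kept) for f, w in kept.items()}
```

For example, with illustrative weights name=0.5, dob=0.3, ssn=0.2 and the ssn field missing, name receives 0.625 and dob 0.375, so the total linkage weight still sums to 1.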
A variational dynamic programming approach to robot-path planning with a distance-safety criterion
NASA Technical Reports Server (NTRS)
Suh, Suk-Hwan; Shin, Kang G.
1988-01-01
An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.
Albanese, B.; Angermeier, P.L.; Gowan, C.
2003-01-01
Mark-recapture studies generate biased, or distance-weighted, movement data because short distances are sampled more frequently than long distances. Using models and field data, we determined how study design affects distance weighting and the movement distributions of stream fishes. We first modeled distance weighting as a function of recapture section length in an unbranching stream. The addition of an unsampled tributary to one of these models substantially increased distance weighting by decreasing the percentage of upstream distances that were sampled. Similarly, the presence of multiple tributaries in the field study resulted in severe bias. However, increasing recapture section length strongly affected distance weighting in both the model and the field study, producing a zone where the number of fish moving could be estimated with little bias. Subsampled data from the field study indicated that longer median (three of three species) and maximum distances (two of three species) can be detected by increasing the length of the recapture section. The effect was extreme for bluehead chub Nocomis leptocephalus, a highly mobile species, which exhibited a longer median distance (133 m versus 60 m), a longer maximum distance (1,144 m versus 708 m), and a distance distribution that differed in shape when the full (4,123-m recapture section) and subsampled (1,978-m recapture section) data sets were compared. Correction factors that adjust the observed number of movements to undersampled distances upwards and those to oversampled distances downwards could not mitigate the distance weighting imposed by the shorter recapture section. Future studies should identify the spatial scale over which movements can be accurately measured before data are collected. Increasing recapture section length a priori is far superior to using post hoc correction factors to reduce the influence of distance weighting on observed distributions. Implementing these strategies will be especially important in stream networks where fish can follow multiple pathways out of the recapture section.
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk
2018-04-20
Countermeasures against detection by infrared (IR) signals are more demanding than for other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal has been conducted to control the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method includes the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which results in a synchronization between the IR signals from the object and the surrounding background by setting the inverse-distance-weighted contrast radiant intensity (CRI) to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse-distance-weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and background by up to 32% compared to the previous method, which used the CRI determined as the simple signal difference between the object and the background.
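Inverse distance weighting, the rule named above, is the classical interpolation scheme: each sample contributes in proportion to an inverse power of its distance. A generic sketch (the power parameter p=2 is an assumption, not taken from the paper):

```python
def idw(samples, p=2):
    """Inverse-distance-weighted average of (distance, value) samples."""
    num = sum(v / d ** p for d, v in samples)
    den = sum(1.0 / d ** p for d, v in samples)
    return num / den
```

A background radiance interpolated this way weights nearby scene samples most heavily; the abstract's weighted CRI would then compare the object signal against such a distance-weighted background rather than a simple difference.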
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T.
2006-01-06
Background: Body weight increases with aging. Short-term, longitudinal exercise training studies suggest that increasing exercise produces acute weight loss, but it is not clear if the maintenance of long-term, vigorous exercise attenuates age-related weight gain in proportion to the exercise dose. Methods: Prospective study of 6,119 male and 2,221 female runners whose running distance changed less than 5 km/wk between their baseline and follow-up survey 7 years later. Results: On average, men who ran modest (0-24 km/wk), intermediate (24-48 km/wk) or prolonged distances (≥48 km/wk) all gained weight through age 64; however, those who ran ≥48 km/wk had one-half the average annual weight gain of those who ran <24 km/wk. Age-related weight gain, and its reduction by running, were both greater in younger than older men. In contrast, men's gain in waist circumference with age, and its reduction by running, were the same in older and younger men. Women increased their body weight and waist and hip circumferences over time, regardless of age, which was also reduced in proportion to running distance. In both sexes, running did not attenuate weight gain uniformly, but rather disproportionately prevented more extreme increases. Conclusion: Men and women who remain vigorously active gain less weight as they age and the reduction is in proportion to the exercise dose.
Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang
2016-01-01
The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed for massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference in brightness between the two matched images, and the parameters in the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which distributes the partial dislocation at the seam across the whole overlap region with a smooth transition effect. The method was validated at a study site located in Hanwang (Sichuan, China), an area that was seriously damaged in the 12 May 2008 Wenchuan Earthquake. A performance comparison between WD-GDWE and five classical seam elimination algorithms was then conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also achieves satisfactory effectiveness. This method is promising for advancing applications in the UAV industry, especially in emergency situations.
Linear methods for reducing EMG contamination in peripheral nerve motor decodes.
Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J
2016-08-01
Signals recorded from the peripheral nervous system (PNS) with high channel count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from being successfully detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criterion, such that a channel was included if it was within a particular radius of inclusion from the target channel. The six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode-distance radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events by increasing the SNR, but we found no statistically meaningful difference amongst the VR types we examined. The computational complexity of implementation varies with the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR and, based on these criteria, we recommend an equal weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR requires less than 1 ms for 33.3 ms of data from one USEA.
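The recommended configuration (equal weights, 3.2 electrode-distance radius of inclusion) can be sketched as follows; the data layout (one list of samples per channel, 2-D electrode coordinates in electrode-distance units) is our illustrative assumption:

```python
import math

def virtual_reference(samples, coords, target, radius):
    """Equal-weight virtual reference for `target`: the mean, at each time
    step, of all other channels within `radius` electrode-distances."""
    idx = [i for i, c in enumerate(coords)
           if i != target and math.dist(c, coords[target]) <= radius]
    return [sum(samples[i][t] for i in idx) / len(idx)
            for t in range(len(samples[target]))]
```

Subtracting the returned reference from the target channel removes the common-mode (EMG-like) component, e.g. `cleaned = [s - v for s, v in zip(samples[target], vr)]`.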
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
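The simplest member of this filter class is the vector median filter, to which the framework reduces when only the distance domain is used with equal weights; a minimal sketch:

```python
import math

def vector_median(vectors):
    """Vector median filter output for one window: the input vector that
    minimizes the aggregated Euclidean distance to all window vectors."""
    return min(vectors, key=lambda v: sum(math.dist(v, u) for u in vectors))
```

For an impulse-corrupted window of RGB pixels, the outlier pixel accrues a large aggregated distance and is never selected; introducing a per-sample weight inside the sum gives the weighted vector median special case mentioned above.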
NASA Astrophysics Data System (ADS)
Bakar, Sumarni Abu; Ibrahim, Milbah
2017-08-01
The shortest path problem is a popular problem in graph theory. It is about finding a path with minimum length between a specified pair of vertices. In any network the weight of each edge is usually represented as a crisp real number, and these weights are then used in the calculation of the shortest path by deterministic algorithms. In practice, however, uncertainty is frequently encountered, and the edge weights of the network may be uncertain and imprecise. In this paper, a modified algorithm which combines a heuristic shortest path method with a fuzzy approach is proposed for solving networks with imprecise arc lengths. Interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is compared with the total distance obtained from the traditional nearest neighbour heuristic algorithm. The results show that the modified algorithm not only produces a sequence of visited cities similar to that of the traditional approach, but also yields a smaller total distance than the traditional approach. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
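For reference, the traditional nearest neighbour heuristic used as the baseline above fits in a few lines (crisp distance matrix version; the fuzzy variant described in the abstract would replace the crisp comparison with a ranking of interval or triangular fuzzy numbers):

```python
def nearest_neighbour_tour(dist, start=0):
    """Nearest-neighbour TSP heuristic: from the current city, always visit
    the closest city not yet visited."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[last][c])
        tour.append(nxt)
        visited.add(nxt)
    return tour
```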
Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio
2017-01-01
To examine the intra-observer reliability and agreement of five methods of measuring dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test (WBLT), and to assess the degree of agreement between three of the methods, in female athletes. Repeated-measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement and the Minimum Detectable Change; reliability was analysed using the Intraclass Correlation Coefficient (ICC). Measurement methods using the inclinometer had more than 6° of measurement error. The angle calculated by the trigonometric function had 3.28° error. Inclinometer-based methods had ICC values < 0.90; distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, there was from 1.93° to 14.42° bias, and from 4.24° to 7.96° random error. To assess the dorsiflexion angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yin, Kedong; Wang, Pengyu; Li, Xuemei
2017-12-13
With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weight) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
NASA Astrophysics Data System (ADS)
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable. Therefore, it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. The paper aims to present a sensitivity analysis for some ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first uncertainty is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance and trade-off ranking methods. TOPSIS and the relative distance method measure a distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates a distance of an alternative to the extreme solutions and other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
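A compact sketch of TOPSIS, the first of the three ranking techniques (vector normalization is assumed; rows are alternatives, columns are criteria, and `benefit[j]` marks whether criterion j is to be maximized):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    n = len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    # closeness in [0, 1]; higher is better
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]
```

A sensitivity test of the kind described above would perturb `matrix` or `weights` and check whether the ordering of the returned closeness scores changes.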
Using the Image Analysis Method for Describing Soil Detachment by a Single Water Drop Impact
Ryżak, Magdalena; Bieganowski, Andrzej
2012-01-01
The aim of the present work was to develop a method based on image analysis for describing soil detachment caused by the impact of a single water drop. The method consisted of recording tracks made by splashed particles on blotting paper under an optical microscope. The analysis facilitated division of the recorded particle tracks on the paper into drops, “comets” and single particles. Additionally, the following relationships were determined: (i) the distances of splash; (ii) the surface areas of splash tracks in relation to distance; (iii) the surface areas of the solid phase transported over a given distance; and (iv) the ratio of the solid phase to the splash track area in relation to distance. Furthermore, the proposed method allowed estimation of the weight of soil transported by a single water drop splash in relation to the distance from the point of water drop impact. It was concluded that the method of image analysis of splashed particles facilitated analysis of the results even at the very low water drop energies generated by single water drops.
ERIC Educational Resources Information Center
Tierney, Patrick J.; Moisey, Susan
2014-01-01
This exploratory mixed methods case study examined the use of distance education technology for lifestyle change within the context of obesity treatment and weight management. In the quantitative phase of the study, 19 adults involved in an obesity-related lifestyle change program or change process completed a questionnaire that determined their…
Sensor Drift Compensation Algorithm based on PDF Distance Minimization
NASA Astrophysics Data System (ADS)
Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo
2009-05-01
In this paper, a new unsupervised classification algorithm is introduced for the compensation of sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase by minimizing the Euclidean distance between two Probability Density Functions (PDFs): one of a set of training-phase output data and one of a set of testing-phase output data. The outputs in the testing phase obtained with fixed RBFN weights are significantly dispersed and shifted from their target values, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to concentrate significantly closer to their target values. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.
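The distance being minimized can be illustrated with histogram estimates of the two output PDFs; the binning scheme below is our choice for illustration, not the paper's estimator:

```python
import math

def hist_pdf(data, lo, hi, bins):
    """Histogram estimate of a PDF over [lo, hi)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in data:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    return [c / len(data) for c in counts]

def pdf_distance(train_out, test_out, lo, hi, bins=10):
    """Euclidean distance between the PDFs of two sets of network outputs."""
    p = hist_pdf(train_out, lo, hi, bins)
    q = hist_pdf(test_out, lo, hi, bins)
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
```

Drift compensation would then adjust the RBFN weights in the direction that shrinks `pdf_distance` between training-phase and testing-phase outputs.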
ERIC Educational Resources Information Center
Harris, David E.; Blum, Janet Whatley; Bampton, Matthew; O'Brien, Liam M.; Beaudoin, Christina M.; Polacsek, Michele; O'Rourke, Karen A.
2011-01-01
Objective: To examine the relationship between stores selling calorie-dense food near schools and student obesity risk, with the hypothesis that high availability predicts increased risk. Methods: Mail surveys determined height, weight, and calorie-dense food consumption for 552 students at 11 Maine high schools. Driving distance from all food…
ERIC Educational Resources Information Center
Donoghue, John R.
A Monte Carlo study compared the usefulness of six variable weighting methods for cluster analysis. Data were 100 bivariate observations from 2 subgroups, generated according to a finite normal mixture model. Subgroup size, within-group correlation, within-group variance, and distance between subgroup centroids were manipulated. Of the clustering…
Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes
Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2013-01-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
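Trilinear interpolation itself is standard; a minimal sketch for a rectilinear volume stored as nested lists (`volume[x][y][z]`), valid at interior sample points:

```python
def trilerp(volume, x, y, z):
    """Trilinearly interpolate `volume` at fractional coordinates (x, y, z):
    a weighted average of the 8 surrounding voxels."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                value += w * volume[x0 + i][y0 + j][z0 + k]
    return value
```

A voxel nearest neighbor scheme would instead round (x, y, z) to the closest grid point, which is faster per voxel but less accurate, matching the trade-off measured above.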
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
NASA Astrophysics Data System (ADS)
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN), the determination of classes for new data is normally performed by a simple majority vote, which may ignore the similarities among data and allow the occurrence of a tied majority class that can lead to misclassification. In this research, we propose an approach to resolve the majority vote issues by calculating the distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of the results is compared to the accuracy acquired from the original kNN method using several datasets from the UCI Machine Learning repository, Kaggle and Keel, such as ionosphere, iris, voice gender, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN was able to increase the classification accuracy of kNN, with an average accuracy increase on test data of 2.45% and a highest increase of 3.71%, occurring on the lower back pain symptoms dataset. For the real data, the increase in accuracy is as high as 5.16%.
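The distance-weighting component that resolves tied majority votes can be sketched as follows; the 1/(d + ε) weighting is one common choice, not necessarily the exact LMKNN + DWKNN combination proposed in the paper:

```python
import math
from collections import defaultdict

def dwknn_predict(train, query, k):
    """Distance-weighted kNN: each of the k nearest neighbours votes with
    weight 1/(distance + eps), so closer neighbours count for more."""
    eps = 1e-9
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    votes = defaultdict(float)
    for x, label in nearest:
        votes[label] += 1.0 / (math.dist(x, query) + eps)
    return max(votes, key=votes.get)
```

With two tight "a" neighbours and three distant "b" neighbours, a plain majority vote returns "b" while the weighted vote returns "a", which is exactly the failure mode the abstract describes.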
Error Estimation for the Linearized Auto-Localization Algorithm
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
NASA Technical Reports Server (NTRS)
Larson, T. J.; Schweikhard, W. G.
1974-01-01
A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.
New method for distance-based close following safety indicator.
Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A
2015-01-01
The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness on road safety in developing countries like Malaysia. Changes in the vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic new distance-based safety indicator called the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression of MSDG has been established. The results show that MSDG is dynamically changed according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator would provide a more realistic depiction of the real traffic situation for safety analysis.
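The regression step described above can be sketched as follows; the functional form, coefficients, and synthetic data are illustrative assumptions (vehicle classification is omitted for brevity), not the authors' published MSDG model:

```python
# Hedged sketch: fitting a minimum-safe-distance-gap (MSDG) surface to
# simulated braking-distance data with nonlinear regression. The model form
# and the sample data are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def msdg_model(X, a, b, c):
    """Assumed form: gap grows with v^2 (kinetic energy), scaled by weight."""
    v, gvw = X
    return a * v**2 * (1.0 + b * gvw) + c

# Synthetic "simulation" data: speed (m/s) and gross vehicle weight (tonnes)
rng = np.random.default_rng(0)
v = rng.uniform(10, 30, 200)
gvw = rng.uniform(5, 40, 200)
true_gap = 0.05 * v**2 * (1 + 0.02 * gvw) + 3.0
d = true_gap + rng.normal(0, 0.5, 200)        # noisy braking-distance data

params, _ = curve_fit(msdg_model, (v, gvw), d, p0=(0.1, 0.01, 1.0))
a, b, c = params
```

A fitted surface of this kind yields a gap that changes dynamically with speed and GVW, as the abstract describes.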
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
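The cross-validated Euclidean distance recommended above can be sketched in a few lines; the simulation and variable names are illustrative, not taken from the study's pipeline:

```python
# Hedged sketch of the cross-validated (squared) Euclidean distance used in
# RSA: the condition-difference pattern is estimated on two independent data
# partitions and their inner product taken, so independent noise cancels in
# expectation and the distance estimate is unbiased.
import numpy as np

def cv_euclidean(x1_a, x2_a, x1_b, x2_b):
    """Cross-validated squared Euclidean distance between conditions 1 and 2.

    x*_a / x*_b: pattern vectors (n_channels,) estimated from partitions A, B.
    """
    return float((x1_a - x2_a) @ (x1_b - x2_b))

rng = np.random.default_rng(1)
true1, true2 = np.zeros(100), np.ones(100) * 0.5
noisy = lambda mu: mu + rng.normal(0, 1, 100)

# The naive squared distance on a single noisy partition is biased upward ...
naive = float(np.sum((noisy(true1) - noisy(true2)) ** 2))
# ... while the cross-validated estimate scatters around the true value.
cv = cv_euclidean(noisy(true1), noisy(true2), noisy(true1), noisy(true2))
true_dist = float(np.sum((true1 - true2) ** 2))   # = 100 * 0.25 = 25
```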
Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao
2017-01-01
The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it requires good initial parameters and easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. First, the accurate correspondence is established by using the weighted rotation invariant feature distance and position distance together. Second, the rigid transformation is solved by the singular value decomposition method. Third, the weight is adjusted to control the relative contribution of the positions and features. Finally, the new algorithm accomplishes the registration in a coarse-to-fine way regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust than the original ICP algorithm.
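The correspondence step described above can be sketched as follows; using the centroid as the global reference point and this particular weighting scheme are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a correspondence search in which each point carries a
# rotation-invariant feature (its Euclidean distance to a global reference
# point, here the centroid) and matching minimizes a weighted sum of
# position distance and feature distance.
import numpy as np

def correspond(src, dst, w):
    """For each source point, return the index of the best destination point.

    Cost = (1 - w) * ||p - q||^2 + w * (||p - c_src|| - ||q - c_dst||)^2
    """
    f_src = np.linalg.norm(src - src.mean(0), axis=1)   # invariant features
    f_dst = np.linalg.norm(dst - dst.mean(0), axis=1)
    pos = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    feat = (f_src[:, None] - f_dst[None, :]) ** 2
    return np.argmin((1 - w) * pos + w * feat, axis=1)

# A 90-degree-rotated copy of a point set: position matching can fail for
# large rotations, but the feature term is unchanged by rotation.
src = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 0.0], [0.0, 5.0]])
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T
matches_feature_only = correspond(src, dst, w=1.0)   # feature term alone
```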
Liu, Dinglin; Zhao, Xianglian
2013-01-01
In an effort to deal with more complicated evaluation situations, researchers have focused on dynamic comprehensive evaluation. How to make full use of both subjective and objective information has become a noteworthy issue. In this paper, a dynamic comprehensive evaluation method using subjective and objective information is proposed. We use a combination weighting method to determine the index weights: the analytic hierarchy process is applied to handle the subjective information, and the criteria importance through intercriteria correlation (CRITIC) method is used to handle the objective information. For the time weights, we consider both time distance and information size to embody the principle of esteeming the present over the past. A linear weighted average model is then constructed to make the evaluation process more practicable. Finally, an example is presented to illustrate the effectiveness of this method. Overall, the results suggest that the proposed method is reasonable and effective. PMID:24386176
Prostate segmentation in MRI using fused T2-weighted and elastography images
NASA Astrophysics Data System (ADS)
Nir, Guy; Sahebjavaher, Ramin S.; Baghani, Ali; Sinkus, Ralph; Salcudean, Septimiu E.
2014-03-01
Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8 mm, with a maximum distance of 5.6 mm. The relative area error is 7.6%, and the duration of the segmentation process is 2 s per slice.
H. Li; X. Deng; Andy Dolloff; E. P. Smith
2015-01-01
A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...
Tactile mental body parts representation in obesity.
Scarpina, Federica; Castelnuovo, Gianluca; Molinari, Enrico
2014-12-30
Obese people's distortions in visually-based mental body-parts representations have been reported in previous studies, but other sensory modalities have largely been neglected. In the present study, we investigated possible differences in tactilely-based body-parts representation between an obese and a healthy-weight group; additionally, we explored the possible relationship between the tactile- and the visually-based body representation. Participants were asked to estimate the distance between two tactile stimuli that were simultaneously administered on the arm or on the abdomen, in the absence of visual input. The visually-based body-parts representation was investigated by a visual imagery method in which subjects were instructed to compare the horizontal extension of pairs of body parts. According to the results, the obese participants overestimated the size of the tactilely-perceived distances more than the healthy-weight group when the arm, and not the abdomen, was stimulated. Moreover, they reported a lower level of accuracy than the healthy-weight group when estimating horizontal distances relative to their bodies, confirming an inappropriate visually-based mental body representation. Our results imply that body representation disturbance in obese people is not limited to the visual mental domain, but spreads to tactilely perceived distances. The inaccuracy was not a generalized tendency but was body-part related. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C′ which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
NASA Astrophysics Data System (ADS)
Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu
2017-10-01
Elastic-wave reverse-time migration of inhomogeneous anisotropic media has become a research hotspot. To ensure the accuracy of the migration, the wavefield must be separated into P-wave and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational cost, wave-mode separation in the mixed domain can be realized on the basis of reference models in the wave-number domain, but conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance weighting (IDW) interpolation method that accounts for the position of the reference points, together with a random-points scheme for reference-model selection. The method adds a spatial weight coefficient K, which reflects the orientation of the reference point, to the conventional IDW algorithm, so the interpolation takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method separates the wave modes more accurately using fewer reference models and has better practical value.
An Improved Evidential-IOWA Sensor Data Fusion Approach in Fault Diagnosis
Zhou, Deyun; Zhuang, Miaoyan; Fang, Xueyi; Xie, Chunhe
2017-01-01
As an important tool of information fusion, Dempster–Shafer evidence theory is widely applied in handling uncertain information in fault diagnosis. However, an incorrect result may be obtained if the combined evidence is highly conflicting, which may lead to failure in locating the fault. To deal with this problem, an improved evidential Induced Ordered Weighted Averaging (IOWA) sensor data fusion approach is proposed in the framework of Dempster–Shafer evidence theory. In the new method, the IOWA operator is used to determine the weight of each sensor data source; in determining the parameters of the IOWA, both the distance of evidence and the belief entropy are taken into consideration. First, the α value of the IOWA is obtained from the global distance of evidence and the global belief entropy. Simultaneously, a weight vector is given based on the maximum entropy model. Then, according to the IOWA operator, the evidence is modified before applying Dempster's combination rule. The proposed method performs better in conflict management and fault diagnosis because the information volume of each piece of evidence is taken into consideration. A numerical example and a case study in fault diagnosis are presented to show the rationality and efficiency of the proposed method. PMID:28927017
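For context, Dempster's combination rule that underlies this approach can be sketched as follows; the fault labels and mass values are made-up examples, and the sketch shows the conflict normalization that the paper's IOWA weighting is designed to mitigate:

```python
# Hedged sketch of Dempster's rule of combination for two basic probability
# assignments (mass functions) over a small frame of discernment.
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions given as dicts: frozenset -> mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    k = 1.0 - conflict                   # normalization constant
    return {s: v / k for s, v in combined.items()}, conflict

F1, F2 = frozenset({"fault1"}), frozenset({"fault2"})
m1 = {F1: 0.8, F2: 0.2}                  # sensor 1's evidence
m2 = {F1: 0.6, F2: 0.4}                  # sensor 2's evidence
fused, conflict = dempster(m1, m2)
```

When the conflict term approaches 1, the normalization blows up, which is the failure mode the improved evidential-IOWA weighting addresses.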
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from the reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
NASA Astrophysics Data System (ADS)
Tarmizi, S. N. M.; Asmat, A.; Sumari, S. M.
2014-02-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in Klang Valley, Malaysia, by using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season; the values are lower during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect the PM10 distribution.
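The IDW interpolation this study relies on can be sketched as follows; the station coordinates and PM10 readings are made-up numbers, not data from the study:

```python
# Hedged sketch of inverse-distance-weighted (IDW) spatial interpolation,
# the method used here to map PM10 concentrations between stations.
import numpy as np

def idw(stations, values, query, power=2.0, eps=1e-12):
    """Interpolate at `query` as a distance-weighted mean of station values."""
    d = np.linalg.norm(stations - query, axis=1)
    if d.min() < eps:                        # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Three hypothetical monitoring stations (km grid) and PM10 readings (ug/m3)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pm10 = np.array([40.0, 80.0, 60.0])

at_station = idw(stations, pm10, np.array([0.0, 0.0]))   # exact station hit
midpoint = idw(stations, pm10, np.array([5.0, 0.0]))     # between stations
```

Because the weights are positive and normalized, IDW estimates always stay within the range of the station readings.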
Body-Earth Mover's Distance: A Matching-Based Approach for Sleep Posture Recognition.
Xu, Xiaowei; Lin, Feng; Wang, Aosen; Hu, Yu; Huang, Ming-Chun; Xu, Wenyao
2016-10-01
Sleep posture is a key component in sleep quality assessment and pressure ulcer prevention. Currently, body pressure analysis has been a popular method for sleep posture recognition. In this paper, a matching-based approach, Body-Earth Mover's Distance (BEMD), for sleep posture recognition is proposed. BEMD treats pressure images as weighted 2D shapes, and combines EMD and Euclidean distance for similarity measure. Compared with existing work, sleep posture recognition is achieved with posture similarity rather than multiple features for specific postures. A pilot study is performed with 14 persons for six different postures. The experimental results show that the proposed BEMD can achieve 91.21% accuracy, which outperforms the previous method with an improvement of 8.01%.
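A minimal sketch of the earth mover's distance that BEMD builds on, restricted to the 1D case where EMD reduces to a sum of cumulative-distribution differences; the paper's 2D weighted-shape matching with a Euclidean ground distance is more involved than this:

```python
# Hedged sketch of the 1D earth mover's distance between two normalized
# histograms on a common unit-spaced grid, computed via CDF differences.
import numpy as np

def emd_1d(p, q):
    """EMD between two histograms defined on the same unit-spaced bins."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(np.abs(np.cumsum(p - q))))

# Shifting all mass by one bin costs exactly one unit of "work".
a = [0.0, 1.0, 0.0, 0.0]
b = [0.0, 0.0, 1.0, 0.0]
shift_one = emd_1d(a, b)
```

Unlike bin-wise Euclidean distance, EMD grows with how far mass must move, which is what makes it suitable for comparing pressure images as shapes.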
A distance-controlled nanoparticle array using PEGylated ferritin
NASA Astrophysics Data System (ADS)
He, Chao; Uenuma, Mutsunori; Okamoto, Naofumi; Kamitake, Hiroki; Ishikawa, Yasuaki; Yamashita, Ichiro; Uraoka, Yukiharu
2014-12-01
A distance-controlled nanoparticle (NP) array was investigated using a simple spin-coating process. It was found that the separation distance of the NPs could be controlled at the nanoscale by using polyethylene glycols (PEGs). Ferritin was used to synthesize NPs and carry them to a substrate using PEGs of different molecular weights. In order to control the distance between the NPs, PEGs with molecular weights of 2k, 5k, 10k and 20k were attached to ferritin at 10 mM ionic strength and 0.01 mg/ml ferritin concentration. The separation distances of the NPs increased with PEG molecular weight.
ERIC Educational Resources Information Center
Helmreich, James E.; Krog, K. Peter
2018-01-01
We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
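The central fact such a course investigates, that the squared-distance metric is minimized by the mean while the absolute-distance metric is minimized by the median, can be checked numerically; the sample data below are illustrative:

```python
# Hedged illustration: the mean minimizes the sum of squared distances to
# the data, the median minimizes the sum of absolute distances. Verified by
# brute-force search over a fine grid of candidate location measures.
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])     # skewed by an outlier
grid = np.linspace(0, 12, 12001)                 # candidate locations

sq_loss = ((data[:, None] - grid[None, :]) ** 2).sum(axis=0)   # OLS-style
abs_loss = np.abs(data[:, None] - grid[None, :]).sum(axis=0)   # LAD-style

best_sq = grid[np.argmin(sq_loss)]     # lands at the mean (3.6)
best_abs = grid[np.argmin(abs_loss)]   # lands at the median (2.0)
```

The outlier pulls the squared-loss minimizer upward while the absolute-loss minimizer stays put, which is exactly the robustness contrast between OLS and LAD.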
End-to-end distance and contour length distribution functions of DNA helices
NASA Astrophysics Data System (ADS)
Zoli, Marco
2018-06-01
I present a computational method to evaluate the end-to-end and the contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length distance, providing a measure of the global molecule size, are used to examine the DNA flexibility at short length scales. It is found that, also in molecules with less than ˜60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image objects and fail to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient histogram are calculated for each object. The EMD statistical operator is used to measure the color distance and the edge-line feature distance between corresponding objects in different periods, and an adaptive weighting method combines the color feature distance and the edge-line distance to construct the object heterogeneity. Finally, the object heterogeneity is analyzed to obtain the change detection results. The experimental results show that the method can fully fuse the color and edge-line features, thus improving the accuracy of the change detection.
Reference values for the 6-minute walk test in healthy children and adolescents in Switzerland
2013-01-01
Background The six-minute walk test (6MWT) is a simple, low-tech, safe and well-established, self-paced assessment tool to quantify functional exercise capacity in adults. Defining a normal 6MWT in children is especially demanding, since parameters like height, weight and ethnic background not only influence the measurement but may be as crucial as age and developmental stage. The aim of this study is to establish reference values for the 6MWT in healthy children and adolescents in Switzerland and to investigate the influence of age, anthropometrics, heart rate, blood pressure and physical activity on the distance walked. Methods Children and adolescents between 5 and 17 years performed a 6MWT. Short questionnaires assessed their health state and physical activities; anthropometrics and vital signs were measured before and after the 6MWT and were predefined as secondary outcomes. Results Age, height, weight and the heart rate after the 6MWT all predicted the distance walked according to different regression models: age was the best single predictor and most influenced walk distance at younger ages, whereas anthropometrics were more important in adolescents and females. Heart rate after the 6MWT was an important distance predictor in addition to age and outweighed anthropometrics in the majority of subgroups assessed. Conclusions The 6MWT in children and adolescents is feasible and practical. The 6MWT distance depends mainly on age; however, heart rate after the 6MWT, height and weight add significant information and should be taken into account, mainly in adolescents. Reference equations allow prediction of the 6MWT distance, may help to better assess and compare outcomes in young patients with cardiovascular and respiratory diseases, and are highly warranted for different populations. PMID:23915140
NASA Astrophysics Data System (ADS)
Yin, Yanshu; Feng, Wenjie
2017-12-01
In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
Extra-galactic Distances with Massive Stars: The Role of Stellar Variability in the Case of M33
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee Chien-Hsiu, E-mail: leech@naoj.org
2017-08-01
In modern cosmology, determining the Hubble constant (H0) using a distance ladder to percent level and comparing it with the results from the Planck satellite can shed light on the nature of dark energy, the physics of the neutrino, and the curvature of the universe. Thanks to the endeavor of the SH0ES team, the uncertainty of H0 has been dramatically reduced from 10% to 2.4%, with the promise of reaching 1% in the near future. In this regard, it is fundamentally important to investigate the systematics, which is best done using other good independent distance indicators. One promising method is the flux-weighted gravity-luminosity relation (FGLR) of blue supergiants (BSGs). As BSGs are the brightest objects in galaxies, they can probe distances up to 10 Mpc with negligible blending effects. While the FGLR-delivered distance is in good agreement with other distance indicators, it has been shown that this method delivers greater distances in the cases of M33 and NGC 55. Here, we investigate whether the M33 distance estimate of the FGLR suffers systematics from stellar variability. Using CFHT M33 monitoring data, we found that 9 out of 22 BSGs showed variability during the course of 500 days, although with amplitudes as small as 0.05 mag. This suggests that stellar variability plays a negligible role in the FGLR distance determination.
NASA Astrophysics Data System (ADS)
Su, Xing; Meng, Xingmin; Ye, Weilin; Wu, Weijiang; Liu, Xingrong; Wei, Wanhong
2018-03-01
Tianshui City is one of the mountainous cities in Gansu Province, China, that are threatened by severe geo-hazards. Statistical probability models have been widely used in analyzing and evaluating geo-hazards such as landslides. In this research, three approaches (the Certainty Factor Method, the Weight of Evidence Method and the Information Quantity Method) were adopted to quantitatively analyze the relationship between the causative factors and the landslides. The source data used in this study include the SRTM DEM and local geological maps at a scale of 1:200,000. Twelve causative factors (altitude, slope, aspect, curvature, plan curvature, profile curvature, roughness, relief amplitude, distance to rivers, distance to faults, distance to roads, and stratum lithology) were selected for correlation analysis after thorough investigation of the geological conditions and historical landslides. The results indicate that the outcomes of the three models are fairly consistent.
Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method
NASA Astrophysics Data System (ADS)
Yuanyue, Yang; Huimin, Li
2018-02-01
Large investment, long routes, and many change orders are the main causes of cost overrun in long-distance water diversion projects. Based on existing research, this paper builds a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of every risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method can evaluate risks accurately and with high reliability. Case calculation and verification show that it can provide valid cost overrun decision-making information to construction companies.
Funane, Tsukasa; Atsumori, Hirokazu; Katura, Takusige; Obata, Akiko N; Sato, Hiroki; Tanikawa, Yukari; Okada, Eiji; Kiguchi, Masashi
2014-01-15
To quantify the effect of absorption changes in the deep tissue (cerebral) and shallow tissue (scalp, skin) layers on functional near-infrared spectroscopy (fNIRS) signals, a method using multi-distance (MD) optodes and independent component analysis (ICA), referred to as the MD-ICA method, is proposed. In previous studies, when the signal from the shallow tissue layer (shallow signal) needed to be eliminated, it was often assumed that the shallow signal had no correlation with the signal from the deep tissue layer (deep signal). In this study, no relationship between the waveforms of the deep and shallow signals is assumed; instead, both signals are assumed to be linear combinations of multiple signal sources, which allows the inclusion of a "shared component" (such as systemic signals) contained in both layers. The method also assumes that the partial optical path length of the shallow layer does not change, whereas that of the deep layer increases linearly with the source-detector (S-D) distance. Deep- and shallow-layer contribution ratios of each independent component (IC) are calculated from the dependence of the weight of each IC on the S-D distance. Reconstruction of the deep- and shallow-layer signals is performed by summing the ICs weighted by the deep and shallow contribution ratios. Experimental validation of the principle of this technique was conducted using a dynamic phantom with two absorbing layers. Results showed that our method is effective for evaluating deep-layer contributions even if there are high correlations between deep and shallow signals. Next, we applied the method to fNIRS signals obtained on a human head with 5-, 15-, and 30-mm S-D distances during a verbal fluency task, a verbal working memory task (prefrontal area), a finger tapping task (motor area), and a tetrametric visual checkerboard task (occipital area) and then estimated the deep-layer contribution ratio.
To evaluate the signal separation performance of our method, we used the correlation coefficients of a laser-Doppler flowmetry (LDF) signal and a nearest 5-mm S-D distance channel signal with the shallow signal. We demonstrated that the shallow signals have a higher temporal correlation with the LDF signals and with the 5-mm S-D distance channel than the deep signals. These results show the MD-ICA method can discriminate between deep and shallow signals. Copyright © 2013 Elsevier Inc. All rights reserved.
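The separation idea, attributing a flat weight-versus-distance profile to the shallow layer and a linearly growing profile to the deep layer, can be sketched as follows; the numbers and the ratio definition are illustrative assumptions, not the paper's exact estimator:

```python
# Hedged sketch of the MD-ICA attribution step: fit each independent
# component's weight as a linear function of source-detector (S-D) distance
# and split it into a constant (shallow) part and a linear (deep) part.
import numpy as np

def layer_ratios(sd_distances, ic_weights):
    """Split an IC's weight profile into flat (shallow) and linear (deep) parts.

    Fits w(d) = a + b*d and reports each term's share of the fitted weight
    at the largest S-D distance.
    """
    b, a = np.polyfit(sd_distances, ic_weights, 1)   # slope, intercept
    deep, shallow = b * max(sd_distances), a
    total = abs(deep) + abs(shallow)
    return abs(deep) / total, abs(shallow) / total

d = np.array([5.0, 15.0, 30.0])                      # S-D distances in mm

# Weight grows linearly with distance -> attributed to the deep layer.
deep_ratio, shallow_ratio = layer_ratios(d, np.array([2.0, 6.0, 12.0]))
# Weight is flat across distances -> attributed to the shallow layer.
flat_deep, flat_shallow = layer_ratios(d, np.array([3.0, 3.0, 3.0]))
```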
A unified tensor level set for image segmentation.
Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2010-06-01
This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict features of pixels, e.g., gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages over the traditional representative method. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data varying from scalar to vector, and on to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set method.
Promotion of Healthy Weight-Control Practices in Young Athletes.
Carl, Rebecca L; Johnson, Miriam D; Martin, Thomas J
2017-09-01
Children and adolescents may participate in sports that favor a particular body type. Some sports, such as gymnastics, dance, and distance running, emphasize a slim or lean physique for aesthetic or performance reasons. Participants in weight-class sports, such as wrestling and martial arts, may attempt weight loss so they can compete at a lower weight class. Other sports, such as football and bodybuilding, highlight a muscular physique; young athletes engaged in these sports may desire to gain weight and muscle mass. This clinical report describes unhealthy methods of weight loss and gain as well as policies and approaches used to curb these practices. The report also reviews healthy strategies for weight loss and weight gain and provides recommendations for pediatricians on how to promote healthy weight control in young athletes. Copyright © 2017 by the American Academy of Pediatrics.
Using synchronous distance-education technology to deliver a weight management intervention.
Dunn, Carolyn; Whetstone, Lauren MacKenzie; Kolasa, Kathryn M; Jayaratne, K S U; Thomas, Cathy; Aggarwal, Surabhi; Nordby, Kelly; Riley, Kenisha E M
2014-01-01
To compare the effectiveness of online delivery of a weight management program using synchronous (real-time), distance-education technology to in-person delivery. Synchronous, distance-education technology was used to conduct weekly sessions for participants with a live instructor. Program effectiveness was indicated by changes in weight, body mass index (BMI), waist circumference, and confidence in ability to eat healthy and be physically active. Online class participants (n = 398) had significantly greater reductions in BMI, weight, and waist circumference than in-person class participants (n = 1,313). Physical activity confidence increased more for in-person than online class participants. There was no difference for healthy eating confidence. This project demonstrates the feasibility of using synchronous distance-education technology to deliver a weight management program. Synchronous online delivery could be employed with no loss to improvements in BMI, weight, and waist circumference. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.
2014-10-01
Landslides are among the most important natural hazards that modify the environment, so studying this phenomenon is important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using fuzzy logic, the frequency ratio, and the Analytical Hierarchy Process (AHP) method in the Dozein basin, Iran. First, landslides that occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from faults, distance from roads, and distance from rivers, were obtained from different sources and maps. Using these factors and the identified landslides, fuzzy membership values were calculated by the frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, the weight of each factor was determined based on a questionnaire and the AHP method. Finally, the fuzzy map of each factor was multiplied by its AHP-derived weight. To compute prediction accuracy, the produced map was verified by comparison with existing landslide locations. The results indicate that combining the three methods (fuzzy logic, frequency ratio, and AHP) yields a relatively good estimator of landslide susceptibility in the study area: about 51% of the observed landslides fall into the high and very high susceptibility zones of the map, but approximately 26% of them are located in the low and very low susceptibility zones.
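The AHP weighting step described above can be sketched as follows. This is a generic illustration of AHP, not the study's questionnaire data: the geometric-mean method approximates the principal eigenvector of a pairwise-comparison matrix, and the 3x3 matrix below (three hypothetical factors) is made up.

```python
# Illustrative AHP weight computation via the geometric-mean approximation
# of the principal eigenvector; the comparison matrix is hypothetical.

def prod(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

def ahp_weights(M):
    """Normalized row geometric means of a pairwise-comparison matrix."""
    n = len(M)
    gmeans = [prod(row) ** (1.0 / n) for row in M]
    s = sum(gmeans)
    return [g / s for g in gmeans]

# Example judgments: factor 1 is 3x as important as factor 2, 5x as factor 3
M = [[1.0,     3.0,     5.0],
     [1.0 / 3, 1.0,     2.0],
     [1.0 / 5, 1.0 / 2, 1.0]]
w = ahp_weights(M)
```

The final susceptibility map is then the per-pixel sum of each factor's fuzzy membership value multiplied by its AHP weight.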
Effect of Weight Transfer on a Vehicle's Stopping Distance.
ERIC Educational Resources Information Center
Whitmire, Daniel P.; Alleman, Timothy J.
1979-01-01
An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)
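The weight-transfer effect can be sketched with standard textbook braking formulas; the symbols and numbers below are generic assumptions, not taken from the article. With wheelbase L, center-of-gravity height h, CG-to-rear-axle distance b and CG-to-front-axle distance c (b + c = L), and friction coefficient mu, forward weight transfer under deceleration gives a = mu·g·b/(L − mu·h) for a front-wheels-skidding vehicle, a = mu·g·c/(L + mu·h) for a rear-wheels-skidding vehicle, and a = mu·g for a nonskidding vehicle braking at the limit on all wheels; the stopping distance is then v²/(2a).

```python
# Sketch of weight transfer and minimum stopping distance (generic physics,
# illustrative vehicle geometry -- not the article's numbers).

G = 9.81  # m/s^2

def stop_distance(v, a):
    """Stopping distance from speed v (m/s) at constant deceleration a."""
    return v * v / (2.0 * a)

def decel_front_skid(mu, L, h, b):
    """Front wheels skidding: weight transfer onto the front axle helps."""
    return mu * G * b / (L - mu * h)

def decel_rear_skid(mu, L, h, c):
    """Rear wheels skidding: weight transfer off the rear axle hurts."""
    return mu * G * c / (L + mu * h)

mu, L, h = 0.7, 2.7, 0.55
b, c = 1.2, 1.5            # CG position; b + c = L
v = 27.0                   # ~97 km/h

d_all = stop_distance(v, mu * G)                       # nonskidding limit
d_front = stop_distance(v, decel_front_skid(mu, L, h, b))
d_rear = stop_distance(v, decel_rear_skid(mu, L, h, c))
```

For this geometry the nonskidding vehicle stops shortest, and front-wheel skidding beats rear-wheel skidding because braking transfers load onto the front axle.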
Parameter Set Cloning Based on Catchment Similarity for Large-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Liu, Z.; Kaheil, Y.; McCollum, J.
2016-12-01
Parameter calibration is a crucial step to ensure the accuracy of hydrological models. However, streamflow gauges are not available everywhere for calibrating a large-scale hydrologic model globally. Thus, assigning parameters appropriately in regions where calibration cannot be performed directly has been a challenge for large-scale hydrologic modeling. Here we propose a method to estimate the model parameters in ungauged regions based on the values obtained through calibration in areas where gauge observations are available. This parameter-set cloning is performed according to a catchment similarity index, a weighted-sum index based on four catchment characteristic attributes: IPCC climate zone, soil texture, land cover, and topographic index. Catchments with calibrated parameter values are donors, while uncalibrated catchments are candidates. Catchment characteristic analyses are first conducted for both donors and candidates. For each attribute, we compute a characteristic distance between donors and candidates. Next, for each candidate, weights are assigned to the four attributes such that higher weights are given to the properties most directly linked to the dominant hydrologic processes. This ensures that the parameter-set cloning emphasizes the dominant hydrologic process in the region where the candidate is located. The catchment similarity index for each donor-candidate pair is then computed as the sum of the weighted distances over the four properties. Finally, parameters are assigned to each candidate from the donor that is "most similar" (i.e., with the smallest weighted distance sum). For validation, we applied the proposed method to catchments where gauge observations are available, and compared simulated streamflows using the parameters cloned from other catchments to the results obtained by calibrating the hydrologic model directly against gauge data.
The comparison shows good agreement between the two models for the different river basins presented here. The method has been applied globally to the Hillslope River Routing (HRR) model using gauge observations obtained from the Global Runoff Data Center (GRDC). As a next step, more catchment properties can be taken into account to further improve the representation of catchment similarity.
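The donor-selection step above can be sketched as a weighted distance minimization. This is a simplified illustration, not the study's implementation: the attribute values, weights, and the numeric encoding of inherently categorical attributes (e.g. climate zone, land cover) are all hypothetical.

```python
# Sketch of parameter-set cloning by catchment similarity: each candidate
# clones the parameters of the donor with the smallest weighted attribute
# distance.  Attribute encodings and weights below are illustrative only.

ATTRS = ["climate_zone", "soil_texture", "land_cover", "topo_index"]

def similarity_index(candidate, donor, weights):
    """Weighted sum of attribute distances (smaller = more similar)."""
    return sum(weights[a] * abs(candidate[a] - donor[a]) for a in ATTRS)

def clone_parameters(candidate, donors, weights):
    """Return the parameter set of the most similar donor."""
    best = min(donors, key=lambda d: similarity_index(candidate, d, weights))
    return best["params"]

weights = {"climate_zone": 0.4, "soil_texture": 0.3,
           "land_cover": 0.2, "topo_index": 0.1}
donors = [
    {"climate_zone": 1.0, "soil_texture": 0.2, "land_cover": 0.5,
     "topo_index": 7.0, "params": {"k": 0.3}},
    {"climate_zone": 3.0, "soil_texture": 0.8, "land_cover": 0.1,
     "topo_index": 9.0, "params": {"k": 0.9}},
]
candidate = {"climate_zone": 1.0, "soil_texture": 0.3,
             "land_cover": 0.4, "topo_index": 7.5}
params = clone_parameters(candidate, donors, weights)
```

In the study, the weights would additionally be tuned per candidate so that the attributes tied to the dominant hydrologic process dominate the index.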
Analysis of rainfall distribution in Kelantan river basin, Malaysia
NASA Astrophysics Data System (ADS)
Che Ros, Faizah; Tosaka, Hiroyuki
2018-03-01
Using rain gauge data alone as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Spatial interpolation is therefore the key to obtaining a continuous and orderly rainfall distribution at ungauged points as input to rainfall-runoff processes for distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river; thus, good knowledge of the rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using the inverse-distance weighting (IDW) and inverse-distance and elevation weighting (IDEW) methods, as well as the average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of these interpolated datasets was examined using cross-validation assessment.
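The two interpolators can be sketched as follows. IDW is standard; the IDEW variant shown here simply folds a scaled elevation difference into the distance, which may differ from the study's exact formulation, and the gauge values are synthetic.

```python
# Sketch of inverse-distance weighting (IDW) and a simple inverse-distance-
# and-elevation weighting (IDEW) variant (this IDEW form is an assumption).

def idw(pt, stations, power=2.0):
    """stations: list of ((x, y), value).  Interpolated value at pt."""
    num = den = 0.0
    for (x, y), v in stations:
        d2 = (pt[0] - x) ** 2 + (pt[1] - y) ** 2
        if d2 == 0.0:
            return v                      # exactly on a gauge
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

def idew(pt, pt_z, stations, power=2.0, z_scale=0.1):
    """stations: list of ((x, y, z), value); elevation folded into distance."""
    num = den = 0.0
    for (x, y, z), v in stations:
        d2 = ((pt[0] - x) ** 2 + (pt[1] - y) ** 2 +
              (z_scale * (pt_z - z)) ** 2)
        if d2 == 0.0:
            return v
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

gauges = [((0.0, 0.0), 10.0), ((10.0, 0.0), 20.0)]
r = idw((5.0, 0.0), gauges)    # midway between two gauges
```

The power exponent and the elevation scale are exactly the kind of parameters the abstract's sensitivity analysis would vary.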
NASA Astrophysics Data System (ADS)
Liu, Hu-Chen; Liu, Long; Li, Ping
2014-10-01
Failure mode and effects analysis (FMEA) has shown its effectiveness in examining potential failures in products, processes, designs, or services and has been extensively used for safety and reliability analysis in a wide range of industries. However, its approach of prioritising failure modes through a crisp risk priority number (RPN) has been criticised as having several shortcomings. The aim of this paper is to develop an efficient and comprehensive risk assessment methodology using an intuitionistic fuzzy hybrid weighted Euclidean distance (IFHWED) operator to overcome these limitations and improve the effectiveness of traditional FMEA. The diversified and uncertain assessments given by FMEA team members are treated as linguistic terms expressed in intuitionistic fuzzy numbers (IFNs). An intuitionistic fuzzy weighted averaging (IFWA) operator is used to aggregate the team members' individual assessments into a group assessment, and the IFHWED operator is then applied to the prioritisation and selection of failure modes. In particular, both subjective and objective weights of the risk factors are considered during the risk evaluation process. Finally, a numerical example of risk assessment is given to illustrate the proposed method.
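A common weighted Euclidean distance between intuitionistic fuzzy numbers can be sketched as below. This is a standard textbook form, not necessarily the paper's exact IFHWED operator (which also hybridizes subjective and objective weights); the failure-mode ratings and weights are hypothetical.

```python
# Illustrative weighted Euclidean distance between vectors of intuitionistic
# fuzzy numbers (IFNs).  Each IFN is (mu, nu) with hesitancy pi = 1 - mu - nu.
# This is a generic form, not necessarily the paper's exact IFHWED operator.

def ifn_distance(A, B, w):
    """A, B: lists of (mu, nu) pairs; w: risk-factor weights summing to 1."""
    s = 0.0
    for (ma, na), (mb, nb), wi in zip(A, B, w):
        pa, pb = 1.0 - ma - na, 1.0 - mb - nb
        s += wi * ((ma - mb) ** 2 + (na - nb) ** 2 + (pa - pb) ** 2)
    return (0.5 * s) ** 0.5

# A failure mode assessed on three risk factors, compared against an
# ideal (1, 0) rating; smaller distance = lower risk priority.
failure = [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)]
ideal = [(1.0, 0.0)] * 3
w = [0.5, 0.3, 0.2]
d = ifn_distance(failure, ideal, w)
```

Ranking failure modes by their distance to an ideal rating is the kind of prioritisation step the IFHWED operator performs.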
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey.
Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.
Echocardiographic left ventricular masses in distance runners and weight lifters
NASA Technical Reports Server (NTRS)
Longhurst, J. C.; Gonyea, W. J.; Mitchell, J. H.; Kelly, A. R.
1980-01-01
The relationships of different forms of exercise training to left ventricular mass and body mass are investigated by echocardiographic studies of weight lifters, long-distance runners, and comparatively sized untrained control subjects. Left ventricular mass determinations by the Penn convention reveal increased absolute left ventricular masses in long-distance runners and competitive weight lifters with respect to controls matched for age, body weight, and body surface area, and a significant correlation between ventricular mass and lean body mass. When normalized to lean body mass, the ventricular masses of distance runners are found to be significantly higher than those of the other groups, suggesting that dynamic training elevates left ventricular mass compared to static training and no training, while static training increases ventricular mass only to the extent that lean body mass is increased.
A Latent Class Approach to Fitting the Weighted Euclidean Model, CLASCAL.
ERIC Educational Resources Information Center
Winsberg, Suzanne; De Soete, Geert
1993-01-01
A weighted Euclidean distance model incorporating a latent class approach (CLASCAL) is proposed. The contribution of each dimension to the distance function between two stimuli is weighted identically for all subjects in the same latent class. A model selection strategy is proposed and illustrated. (SLD)
The QAP weighted network analysis method and its application in international services trade
NASA Astrophysics Data System (ADS)
Xu, Helian; Cheng, Long
2016-04-01
Based on QAP (Quadratic Assignment Procedure) correlation and complex network theory, this paper puts forward a new method named the QAP Weighted Network Analysis Method. The core idea of the method is to analyze influences among relations in a social or economic group by building a QAP weighted network of networks of relations. In the QAP weighted network, a node depicts a relation, and an undirected edge exists between a pair of nodes if there is significant correlation between the corresponding relations. As an application of the QAP weighted network, we study international services trade using a network whose nodes depict 10 kinds of services trade relations. Analysing international services trade with this network, using distance indicators, a hierarchy tree, and a minimum spanning tree, leads to the following conclusions. First, significant correlation exists among all services trades, and the development of any one services trade will stimulate the other nine. Second, as economic globalization deepens, the correlations among all services trades have been strengthened continually, and clustering effects exist among them. Third, the transportation, computer and information, and communication services trades have the most influence and are at the core of all services trades.
Spatial interpolation of river channel topography using the shortest temporal distance
NASA Astrophysics Data System (ADS)
Zhang, Yanjun; Xian, Cuiling; Chen, Huajin; Grieneisen, Michael L.; Liu, Jiaming; Zhang, Minghua
2016-11-01
It is difficult to interpolate river channel topography due to its complex anisotropy. As the anisotropy is often caused by river flow, especially the hydrodynamic and transport mechanisms, it is reasonable to incorporate flow velocity into the topography interpolator to decrease the effect of anisotropy. In this study, two new distance metrics, each defined as the time taken by water flow to travel between two locations, are developed to replace the Euclidean distance currently used to interpolate topography. One is the shortest temporal distance (STD) metric: the temporal distance (TD) of a path between two nodes is calculated as the spatial distance divided by the tangential component of the flow velocity along the path, and the STD is found with the Dijkstra algorithm over all possible paths between the two nodes. The other is a modified shortest temporal distance (MSTD) metric in which both the tangential and normal components of the flow velocity are combined. These metrics are used to construct methods for the interpolation of river channel topography. The proposed methods are used to generate the topography of the Wuhan section of the Changjiang River and are compared with Universal Kriging (UK) and Inverse Distance Weighting (IDW). The results clearly show that the STD and MSTD based on flow velocity are reliable spatial interpolators; the MSTD, followed by the STD, improves prediction accuracy relative to both UK and IDW.
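The STD computation can be sketched with a standard Dijkstra search in which each edge's cost is its traversal time, i.e. spatial length divided by the tangential flow-velocity component along the edge. The graph and velocities below are synthetic; handling of edges the flow cannot traverse is an assumption.

```python
# Sketch of the shortest-temporal-distance (STD) idea: edge cost = length /
# tangential flow velocity, minimized by Dijkstra's algorithm.  Synthetic data.

import heapq

def dijkstra_std(graph, src):
    """graph: {node: [(neighbor, length, v_tangent), ...]}.
    Returns a dict of shortest temporal distances from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, length, vel in graph.get(u, []):
            if vel <= 0.0:
                continue                      # water cannot travel this way
            nt = t + length / vel             # temporal distance of the edge
            if nt < dist.get(v, float("inf")):
                dist[v] = nt
                heapq.heappush(heap, (nt, v))
    return dist

# lengths in metres, tangential velocities in m/s
graph = {
    "A": [("B", 100.0, 2.0), ("C", 60.0, 1.0)],
    "B": [("D", 50.0, 2.5)],
    "C": [("D", 80.0, 4.0)],
}
td = dijkstra_std(graph, "A")
```

An STD-based interpolator would then weight observations by these temporal distances instead of straight-line distances, so points connected by fast flow count as "near" even when sinuous channel geometry makes them far apart in Euclidean terms.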
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braga, V. F.; Bono, G.; Buonanno, R.
2015-02-01
We present new distance determinations to the nearby globular cluster M4 (NGC 6121) based on accurate optical and near-infrared (NIR) mean magnitudes for fundamental (FU) and first-overtone (FO) RR Lyrae variables (RRLs), and new empirical optical and NIR period-luminosity (PL) and period-Wesenheit (PW) relations. We have found that optical-NIR and NIR PL and PW relations are affected by smaller standard deviations than optical relations. The difference is the consequence of a steady decrease in the intrinsic spread of cluster RRL apparent magnitudes at fixed period as longer wavelengths are considered. The weighted mean visual apparent magnitude of 44 cluster RRLs is 13.329 ± 0.001 (standard error of the mean) ± 0.177 (weighted standard deviation) mag. Distances were estimated using RR Lyr itself to fix the zero-point of the empirical PL and PW relations. Using the entire sample (FU+FO) we found weighted mean true distance moduli of 11.35 ± 0.03 ± 0.05 mag and 11.32 ± 0.02 ± 0.07 mag. Distances were also evaluated using predicted metallicity-dependent PLZ and PWZ relations. We found weighted mean true distance moduli of 11.283 ± 0.010 ± 0.018 mag (NIR PLZ) and 11.272 ± 0.005 ± 0.019 mag (optical-NIR and NIR PWZ). The above weighted mean true distance moduli agree within 1σ. The same result is found from distances based on PWZ relations in which the color index is independent of the adopted magnitude (11.272 ± 0.004 ± 0.013 mag). These distances agree quite well with the geometric distance provided by Kaluzny et al. based on three eclipsing binaries. The available evidence indicates that this approach can provide distances to globulars hosting RRLs with a precision better than 2%-3%.
Weighted Distances in Scale-Free Configuration Models
NASA Astrophysics Data System (ADS)
Adriaans, Erwin; Komjáthy, Júlia
2018-01-01
In this paper we study first-passage percolation in the configuration model with empirical degree distribution that follows a power law with exponent τ ∈ (2,3). We assign independent and identically distributed (i.i.d.) weights to the edges of the graph and investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called the typical distance. When the underlying age-dependent branching process approximating the local neighborhoods of vertices produces infinitely many individuals in finite time (an explosive branching process), Baroni, van der Hofstad, and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open for the τ ∈ (2,3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous, edge-weight distributions that produce a non-explosive age-dependent branching process with infinite-mean power-law offspring distributions. This sequence tends to infinity with the number of vertices and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n), where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where loops and any second and further edges between two vertices are deleted.
Spatial weighting approach in numerical method for disaggregation of MDGs indicators
NASA Astrophysics Data System (ADS)
Permai, S. D.; Mukhaiyar, U.; Satyaning PP, N. L. P.; Soleh, M.; Aini, Q.
2018-03-01
Disaggregation is used to separate and classify data based on certain characteristics or administrative levels. Disaggregated data are very important because some indicators are not measured for all characteristics. Detailed disaggregation of development indicators is important to ensure that everyone benefits from development and to support better development-related policymaking. This paper explores different methods to disaggregate the national employment-to-population ratio indicator to the province and city levels. A numerical approach is applied to overcome the unavailability of disaggregated data by constructing several spatial weight matrices based on neighbourhood, Euclidean distance, and correlation. These methods can potentially be used and further developed to disaggregate development indicators to lower spatial levels, even by several demographic characteristics.
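One of the weight-matrix constructions mentioned, the distance-based one, can be sketched as follows. This is a generic row-standardized inverse-Euclidean-distance matrix with made-up coordinates, not the paper's specific matrices.

```python
# Sketch of a row-standardized inverse-Euclidean-distance spatial weight
# matrix W: w_ij proportional to 1/d_ij, w_ii = 0, rows summing to 1.
# Coordinates are illustrative only.

def inverse_distance_weights(coords):
    n = len(coords)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = ((coords[i][0] - coords[j][0]) ** 2 +
                     (coords[i][1] - coords[j][1]) ** 2) ** 0.5
                W[i][j] = 1.0 / d
        s = sum(W[i])
        W[i] = [w / s for w in W[i]]      # row standardization
    return W

coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
W = inverse_distance_weights(coords)
```

A neighbourhood (contiguity) matrix would instead set w_ij = 1 only for adjacent regions before row standardization, and a correlation-based matrix would replace 1/d_ij with a similarity derived from indicator correlations.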
NASA Astrophysics Data System (ADS)
Shah-Heydari pour, A.; Pahlavani, P.; Bigdeli, B.
2017-09-01
With the industrialization of cities and the marked increase in pollutants and greenhouse gases, the importance of forests as the natural lungs of the earth for cleaning these pollutants is felt more than ever. Annually, a large part of the forests is destroyed due to the lack of timely action during fires. Knowledge of high fire-risk areas, and equipping those areas by constructing access routes and allocating fire-fighting equipment, can help prevent the destruction of the forest. In this research, the fire risk of the region was forecast, and its risk map was produced from MODIS images by applying a geographically weighted regression (GWR) model with a Gaussian kernel and ordinary least squares (OLS) to the parameters affecting forest fire, including distance from residential areas, distance from rivers, distance from roads, elevation, slope, aspect, soil type, land use, average temperature, wind speed, and rainfall. The evaluation showed that the GWR model with a Gaussian kernel correctly forecast 93.4% of all fire points, whereas the OLS method correctly forecast only 66% of them.
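The key difference between GWR and OLS is the kernel-weighting step, which can be sketched in one dimension. This is a minimal illustration with synthetic data and an assumed bandwidth, not the study's model: each observation is down-weighted by a Gaussian kernel of its distance from the regression point before a local weighted least-squares fit.

```python
# Sketch of the GWR kernel step: Gaussian distance weights feeding a local
# weighted least-squares fit (one predictor; synthetic data and bandwidth).

import math

def gaussian_kernel(d, bandwidth):
    return math.exp(-0.5 * (d / bandwidth) ** 2)

def local_wls(xs, ys, weights):
    """Weighted least-squares slope and intercept for y = a*x + b."""
    sw = sum(weights)
    xm = sum(w * x for w, x in zip(weights, xs)) / sw
    ym = sum(w * y for w, y in zip(weights, ys)) / sw
    num = sum(w * (x - xm) * (y - ym) for w, x, y in zip(weights, xs, ys))
    den = sum(w * (x - xm) ** 2 for w, x in zip(weights, xs))
    a = num / den
    return a, ym - a * xm

# distances of the observations from the regression point, and their data;
# the far-away observation carries a grossly different relationship
dists = [0.0, 1.0, 2.0, 5.0]
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 100.0]
w = [gaussian_kernel(d, 1.0) for d in dists]
a, b = local_wls(xs, ys, w)
```

Because the distant observation's kernel weight is nearly zero, the local fit recovers the relationship of the nearby points (slope about 2), whereas a global OLS fit over the same data would be pulled far off by the distant point.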
Color Filtering Localization for Three-Dimensional Underwater Acoustic Sensor Networks
Liu, Zhihua; Gao, Han; Wang, Wuling; Chang, Shuai; Chen, Jiaxing
2015-01-01
Accurate localization of mobile nodes has been an important and fundamental problem in underwater acoustic sensor networks (UASNs). The detection information returned from a mobile node is meaningful only if its location is known. In this paper, we propose two localization algorithms based on color filtering technology called PCFL and ACFL. PCFL and ACFL aim at collaboratively accomplishing accurate localization of underwater mobile nodes with minimum energy expenditure. They both adopt the overlapping signal region of task anchors which can communicate with the mobile node directly as the current sampling area. PCFL employs the projected distances between each of the task projections and the mobile node, while ACFL adopts the direct distance between each of the task anchors and the mobile node. The proportion factor of distance is also proposed to weight the RGB values. By comparing the nearness degrees of the RGB sequences between the samples and the mobile node, samples can be filtered out. The normalized nearness degrees are considered as the weighted standards to calculate the coordinates of the mobile nodes. The simulation results show that the proposed methods have excellent localization performance and can localize the mobile node in a timely way. The average localization error of PCFL is decreased by about 30.4% compared to the AFLA method. PMID:25774706
Novel method to predict body weight in children based on age and morphological facial features.
Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M
2015-04-01
A new and novel approach to predicting the body weight of children based on age and morphological facial features, using a three-layer feed-forward artificial neural network (ANN) model, is reported. The model takes four input parameters: the age-based CDC-inferred median body weight and three facial-feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects with ages ranging from 6 to 18 years and body weights ranging from 18.6 to 96.4 kg were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94, showing significant improvement in prediction accuracy over several age-based body weight prediction methods. Combined with a facial-recognition algorithm that can detect, extract, and measure the facial features used in this study, mobile applications incorporating this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
DeNino, Walter F; Osler, Turner; Evans, Ellen G; Forgione, Patrick M
2010-01-01
Despite the 2008 "American Association of Clinical Endocrinologists, The Obesity Society, and American Society for Metabolic and Bariatric Surgery Medical Guidelines for Clinical Practice for the Perioperative Nutritional, Metabolic, and Nonsurgical Support of the Bariatric Surgery Patient," consensus does not exist for postoperative care in laparoscopic adjustable gastric banding (LAGB) patients (grade D evidence). It has been suggested that regular follow-up is related to better outcomes, specifically greater weight loss. The aim of the present study was to investigate the effects of travel distance to the clinic on the adherence to follow-up visits and weight loss in a cohort of LAGB patients in the setting of a rural, university-affiliated teaching hospital in the United States. A retrospective chart review was performed of all consecutive LAGB patients for a 1-year period. Linear regression analysis was used to identify the relationships between appointment compliance and the distance traveled and between the amount of weight loss and the distance traveled. Linear regression analysis was performed to investigate the effect of the travel distance to the clinic on the percentage of follow-up visits postoperatively. This effect was not significant (P = .4). Linear regression analysis was also performed to elucidate the effect of the travel distance to the clinic on the amount of weight loss. This effect was significant (P = .04). The travel distance to the clinic did not seem to be a significant predictor of compliance in a cohort of LAGB patients with ≤ 1 year of follow-up in a rural setting. However, a weak relationship was found between the travel distance to the clinic and weight loss, with patients who traveled further seeming to lose slightly more weight. Copyright © 2010 American Society for Metabolic and Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained, and asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
A rough set approach for determining weights of decision makers in group decision making.
Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin
2017-01-01
This study presents a novel approach for determining the weights of decision makers (DMs) based on rough group decisions in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrixes on the basis of rough set theory. We then derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrixes of the rough group decision. Next, we obtain the weight of each group member and the priority order of alternatives using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Comparisons with existing methods and an online business-manager selection example show that the proposed method can provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.
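The relative-closeness step can be sketched as follows. This is a simplified illustration: distances are plain Euclidean distances between flattened decision matrices, the PIS/NIS matrices below are toy values standing in for the paper's rough-set constructions, and the combination of the two NIS distances is an assumption.

```python
# Sketch of DM weighting by relative closeness: a DM close to the positive
# ideal solution (PIS) and far from the negative ideal solutions (NISs)
# gets a higher weight.  Matrices are flattened to flat lists; toy data.

def euclid(A, B):
    return sum((a - b) ** 2 for a, b in zip(A, B)) ** 0.5

def dm_weights(decisions, pis, nis_lower, nis_upper):
    closeness = []
    for D in decisions:
        d_pos = euclid(D, pis)
        d_neg = euclid(D, nis_lower) + euclid(D, nis_upper)
        closeness.append(d_neg / (d_pos + d_neg))
    s = sum(closeness)
    return [c / s for c in closeness]    # normalize to sum to 1

# three DMs rating two alternatives on two criteria (flattened 2x2 matrices)
decisions = [[0.8, 0.6, 0.7, 0.9],
             [0.5, 0.5, 0.5, 0.5],
             [0.9, 0.7, 0.8, 0.9]]
pis = [0.85, 0.65, 0.75, 0.9]        # stand-in for the rough-group average
nis_lower = [0.5, 0.5, 0.5, 0.5]     # stand-in for the lower-limit matrix
nis_upper = [0.9, 0.7, 0.8, 0.9]     # stand-in for the upper-limit matrix
w = dm_weights(decisions, pis, nis_lower, nis_upper)
```

The second DM, whose ratings sit far from the group average, receives the lowest weight, which is the intended behaviour of the relative-closeness scheme.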
Optimal case-control matching in practice.
Cologne, J B; Shibata, Y
1995-05-01
We illustrate modern matching techniques and discuss practical issues in defining the closeness of matching for retrospective case-control designs (in which the pool of subjects already exists when the study commences). We empirically compare matching on a balancing score, analogous to the propensity score for treated/control matching, with matching on a weighted distance measure. Although both methods in principle produce balance between cases and controls in the marginal distributions of the matching covariates, the weighted distance measure provides better balance in practice because the balancing score can be poorly estimated. We emphasize the use of optimal matching based on efficient network algorithms. An illustration is based on the design of a case-control study of hepatitis B virus infection as a possible confounder and/or effect modifier of radiation-related primary liver cancer in atomic bomb survivors.
NASA Astrophysics Data System (ADS)
Wang, Yan-Jun; Liu, Qun
1999-03-01
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distances (LMD), which is insensitive to abnormal values in the dependent and independent variables of a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the identified outliers down-weighted. The application of LMD and the LMD-based reweighted least squares (RLS) method to simulated and real fisheries SR data is explored.
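The LMD-then-RLS workflow can be sketched for a straight-line fit. This is a simplified illustration, not the paper's estimator: it minimizes the median of squared vertical residuals over lines through pairs of points (the paper minimizes the median squared orthogonal distance), and the outlier cutoff below is an assumed rule of thumb.

```python
# Sketch of a least-median-of-squares style robust fit followed by a
# reweighted least-squares pass with flagged outliers zero-weighted.
# Vertical residuals are used for brevity; synthetic data with one outlier.

from itertools import combinations

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def lms_line(pts):
    """Line through a pair of points minimizing the median squared residual."""
    best = None
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        med = median([(y - (a * x + b)) ** 2 for x, y in pts])
        if best is None or med < best[0]:
            best = (med, a, b)
    return best[1], best[2]

def weighted_ls(pts, w):
    """Weighted least-squares slope and intercept."""
    sw = sum(w)
    xm = sum(wi * x for wi, (x, _) in zip(w, pts)) / sw
    ym = sum(wi * y for wi, (_, y) in zip(w, pts)) / sw
    a = (sum(wi * (x - xm) * (y - ym) for wi, (x, y) in zip(w, pts)) /
         sum(wi * (x - xm) ** 2 for wi, (x, _) in zip(w, pts)))
    return a, ym - a * xm

pts = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.9), (5.0, 30.0)]
a0, b0 = lms_line(pts)
resid = [(y - (a0 * x + b0)) ** 2 for x, y in pts]
cut = 9.0 * median(resid)                 # crude outlier threshold
w = [0.0 if r > cut else 1.0 for r in resid]
a, b = weighted_ls(pts, w)                # final down-weighted LS pass
```

The outlier at (5, 30) is flagged and given zero weight, so the final fit recovers the roughly y = 2x trend of the remaining points; an ordinary LS fit over all five points would be badly biased by it.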
Matsen IV, Frederick A.; Evans, Steven N.
2013-01-01
Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415
Tang, Zheng-Zheng; Chen, Guanhua; Alekseyenko, Alexander V
2016-09-01
Recent advances in sequencing technology have made it possible to obtain high-throughput data on the composition of microbial communities and to study the effects of dysbiosis on the human host. Analysis of pairwise intersample distances quantifies the association between the microbiome diversity and covariates of interest (e.g. environmental factors, clinical outcomes, treatment groups). In the design of these analyses, multiple choices for distance metrics are available. Most distance-based methods, however, use a single distance and are underpowered if the distance is poorly chosen. In addition, distance-based tests cannot flexibly handle confounding variables, which can result in excessive false-positive findings. We derive presence-weighted UniFrac to complement the existing UniFrac distances for more powerful detection of the variation in species richness. We develop PERMANOVA-S, a new distance-based method that tests the association of microbiome composition with any covariates of interest. PERMANOVA-S improves the commonly-used Permutation Multivariate Analysis of Variance (PERMANOVA) test by allowing flexible confounder adjustments and ensembling multiple distances. We conducted extensive simulation studies to evaluate the performance of different distances under various patterns of association. Our simulation studies demonstrate that the power of the test relies on how well the selected distance captures the nature of the association. The PERMANOVA-S unified test combines multiple distances and achieves good power regardless of the patterns of the underlying association. We demonstrate the usefulness of our approach by reanalyzing several real microbiome datasets. miProfile software is freely available at https://medschool.vanderbilt.edu/tang-lab/software/miProfile. Contact: z.tang@vanderbilt.edu or g.chen@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
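The core of a distance-based association test such as PERMANOVA can be illustrated with a minimal sketch of the classic one-factor pseudo-F permutation test (Anderson's formulation), not the PERMANOVA-S extension with confounder adjustment and multi-distance ensembling described above; all names are illustrative:

```python
import numpy as np

def permanova(D, labels, n_perm=999, seed=0):
    """One-factor PERMANOVA pseudo-F test on a distance matrix D.

    Plain PERMANOVA, sketched for illustration; PERMANOVA-S additionally
    adjusts for confounders and combines multiple distances.
    """
    labels = np.asarray(labels)
    n = len(labels)

    def pseudo_f(lab):
        # total sum of squares from all pairwise distances
        ss_total = (D[np.triu_indices(n, 1)] ** 2).sum() / n
        ss_within = 0.0
        for grp in np.unique(lab):
            idx = np.where(lab == grp)[0]
            sub = D[np.ix_(idx, idx)]
            ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
        g = len(np.unique(lab))
        ss_among = ss_total - ss_within
        return (ss_among / (g - 1)) / (ss_within / (n - g))

    f_obs = pseudo_f(labels)
    rng = np.random.default_rng(seed)
    count = sum(pseudo_f(rng.permutation(labels)) >= f_obs
                for _ in range(n_perm))
    return f_obs, (count + 1) / (n_perm + 1)  # pseudo-F and permutation p-value
```

With two well-separated groups, the observed pseudo-F is large and almost no label permutation reaches it, giving a small p-value.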
On the rank-distance median of 3 permutations.
Chindelevitch, Leonid; Pereira Zanetti, João Paulo; Meidanis, João
2018-05-08
Recently, Pereira Zanetti, Biller and Meidanis proposed a new definition of a rearrangement distance between genomes. In this formulation, each genome is represented as a matrix, and the distance d is the rank distance between these matrices. Although defined in terms of matrices, the rank distance is equal to the minimum total weight of a series of weighted operations that leads from one genome to the other, including inversions, translocations, transpositions, and others. The computational complexity of the median-of-three problem under this distance is currently unknown. The genome matrices are a special kind of permutation matrix, which we study in this paper. In their paper, the authors provide an [Formula: see text] algorithm for determining three candidate medians, prove the tight approximation ratio [Formula: see text], and provide a sufficient condition for their candidates to be true medians. They also conduct experiments that suggest that their method is accurate on simulated and real data. In this paper, we extend their results and provide the following: (i) three invariants characterizing the problem of finding the median of 3 matrices; (ii) a sufficient condition for uniqueness of medians that can be checked in O(n) time; (iii) a faster, [Formula: see text] algorithm for determining the median under this condition; (iv) a new heuristic algorithm for this problem based on compressed sensing; and (v) a [Formula: see text] algorithm that exactly solves the problem when the inputs are orthogonal matrices, a class that includes both permutations and genomes as special cases. Our work provides the first proof that, with respect to the rank distance, the problem of finding the median of 3 genomes, as well as the median of 3 permutations, is exactly solvable in polynomial time, a result that should be contrasted with its NP-hardness for the DCJ (double cut-and-join) distance and most other families of genome rearrangement operations.
This result, backed by our experimental tests, indicates that the rank distance is a viable alternative to the DCJ distance widely used in genome comparisons.
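The rank distance itself is easy to compute numerically. A minimal sketch, assuming permutations are given as 0-based index arrays, is:

```python
import numpy as np

def perm_matrix(p):
    """n x n permutation matrix P with P[i, p[i]] = 1."""
    n = len(p)
    m = np.zeros((n, n))
    m[np.arange(n), p] = 1
    return m

def rank_distance(p, q):
    """Rank distance between two permutations: rank(P - Q).

    For permutations this equals n minus the number of cycles
    (including fixed points) of q composed with p inverse, a known
    identity that can be verified numerically with this function.
    """
    return int(np.linalg.matrix_rank(perm_matrix(p) - perm_matrix(q)))
```

For example, identical permutations are at distance 0, a single transposition is at distance 1, and an n-cycle versus the identity is at distance n - 1.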
A Review of Depth and Normal Fusion Algorithms
Štolc, Svorad; Pock, Thomas
2018-01-01
Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms that combine depth and surface normal information to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze the methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method based on Total Generalized Variation (TGV) is introduced, which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
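The least-squares depth-normal fusion idea can be illustrated in one dimension, where normals reduce to slope estimates. This is a toy sketch of the general formulation, not any of the reviewed algorithms, and the regularization weight lam is an illustrative assumption:

```python
import numpy as np

def fuse_depth_normals_1d(d, g, lam=10.0):
    """Fuse a noisy 1-D depth profile d with slope estimates g
    (derived from surface normals) by solving
        min_z ||z - d||^2 + lam * ||D z - g||^2,
    where D is the forward-difference operator. The normal equations
    give the linear system (I + lam * D^T D) z = d + lam * D^T g.
    """
    n = len(d)
    D = np.diff(np.eye(n), axis=0)      # (n-1) x n forward differences
    A = np.eye(n) + lam * D.T @ D
    b = d + lam * D.T @ g
    return np.linalg.solve(A, b)
```

A depth spike inconsistent with the slope information is strongly attenuated, because the smoothness term pulls the solution back toward the profile implied by the normals.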
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. 
Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between bias and computational cost. The distance-driven method offers the possibility to reduce bias, but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
Wang, Guanglei; Wang, Pengyu; Han, Yechen; Liu, Xiuling; Li, Yan; Lu, Qian
2017-06-01
In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology worldwide. The segmentation of plaque regions in coronary OCT images has great significance for vulnerable plaque recognition and research. In this paper, a new algorithm based on K-means clustering and an improved random walk is proposed, achieving semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pool. The weight function of the random walk is improved: the distance between pixel edges in the image and the seed points is added to the definition of the weight function, which increases weak edge weights and prevents over-segmentation. Based on the above methods, OCT images of 9 patients with coronary atherosclerosis were selected for plaque segmentation. Comparison of the doctors' manual segmentation results with this method showed that the method has good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.
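One way to read the distance-augmented random-walk weight is as a blend of the classic intensity term with a seed-distance term, so that weak edges far from the seeds are strengthened. The exact formula in the paper may differ; this sketch, including the parameters beta and alpha, is an illustrative guess at such a weight:

```python
import numpy as np

def edge_weight(gi, gj, dist_to_seed, beta=90.0, alpha=0.01):
    """Random-walk edge weight between neighboring pixels with
    intensities gi, gj in [0, 1]. The classic weight
    exp(-beta * (gi - gj)^2) is blended toward 1 as the edge's
    distance to the seed points grows, boosting weak edges far
    from the seeds (an illustrative form, not the paper's exact one).
    """
    w_intensity = np.exp(-beta * (gi - gj) ** 2)
    w_distance = np.exp(-alpha * dist_to_seed)   # 1 near seeds, 0 far away
    return w_intensity + (1 - w_intensity) * (1 - w_distance)
```

Near the seeds the weight reduces to the usual intensity-based term; far from the seeds even a strong intensity edge gets a weight close to 1, which is one way to discourage over-segmentation away from the marked regions.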
On the weight of indels in genomic distances
2011-01-01
Background: Classical approaches to compute the genomic distance are usually limited to genomes with the same content, without duplicated markers. However, differences in gene content are frequently observed and can reflect important evolutionary aspects. A few polynomial time algorithms that include genome rearrangements, insertions and deletions (or substitutions) have already been proposed. These methods often allow a block of contiguous markers to be inserted, deleted or substituted at once, but result in distance functions that do not respect the triangular inequality and hence do not constitute metrics. Results: In the present study we discuss the disruption of the triangular inequality in some of the available methods and give a framework to establish an efficient correction for two recently proposed models, one that includes insertions, deletions and double cut and join (DCJ) operations, and one that includes substitutions and DCJ operations. Conclusions: We show that the proposed framework establishes the triangular inequality in both distances, by summing a surcharge on indel operations and on substitutions that depends only on the number of markers affected by these operations. This correction can be applied a posteriori, without interfering with the already available formulas to compute these distances. We claim that this correction leads to distances that are biologically more plausible. PMID:22151784
Horie, Tomohiko; Takahara, Tarou; Ogino, Tetsuo; Okuaki, Tomoyuki; Honda, Masatoshi; Okumura, Yasuhiro; Kajihara, Nao; Usui, Keisuke; Muro, Isao; Imai, Yutaka
2008-09-20
In recent years, body diffusion weighted imaging, as represented by diffusion weighted whole body imaging with background body signal suppression (DWIBS), has become highly useful. However, the DWIBS method suffered from an artifact corresponding to the displacement of the diaphragm. To address this, the respiratory trigger (RT) method and the navigator echo method were used together. Although both reduced the artifacts, scan time was extended as compensation, and the extension rate could not be predicted. From the findings obtained with the DWIBS method, we presumed that using only navigator real-time slice tracking (NRST) would ameliorate the artifacts without extending scan time. Thus, the TRacking Only Navigator (TRON) method was developed, and a basic examination was carried out for the liver. Important features of the TRON method are the absence of the navigator gating window (NGW) and the addition of linear interpolation prior to NRST. The method requires the displacement speed and displacement distance of the volunteer's diaphragm. The estimated error from the 2D-selective RF pulse (2DSRP) of the TRON method to slice excitation was calculated, and the 2DSRP condition that did not influence the accuracy of NRST was determined with a moving phantom. A volunteer was scanned, and image quality and actual scan time were compared with the RT and DWIBS methods. The diaphragm displacement speed and displacement distance in the head-foot direction were 9 mm/s and 15 mm, respectively. The estimated error was within 2.5 mm at a b-factor of 1000 s/mm(2). A flip angle of 15 degrees for the 2DSRP and a navigator echo length of 120 mm gave excellent results. In the TRON method, the accuracy of NRST was stable because of the linear interpolation.
In volunteer scanning, the TRON method obtained image quality equal to that of the RT method at this b-factor with a short actual scan time. The TRON method can thus obtain image quality equal to that of the RT method in body diffusion weighted imaging within a short time. Moreover, because the scan time shown during planning equals the actual scan time, examinations can be executed efficiently.
2007-06-01
Table 2 lists the best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure, computed from the Hamming distance between all pairs of non-zero paths. (From: [12].)
Color characterization of cine film
NASA Astrophysics Data System (ADS)
Noriega, Leonardo; Morovic, Jan; MacDonald, Lindsay W.; Lempp, Wolfgang
2002-06-01
This paper describes the characterization of cine film by identifying the relationship between the Status A density values of positive print film and the XYZ values of conventional colorimetry. Several approaches are tried, including least-squares modeling, tetrahedral interpolation, and distance-weighted interpolation. The distance-weighted technique is improved by using the Mahalanobis distance metric to perform the interpolation, and this is presented as an innovation.
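A distance-weighted (Shepard-style) interpolator with a Mahalanobis metric can be sketched as follows. This is a generic illustration of the idea, not the paper's exact characterization pipeline, and the power parameter is an assumption:

```python
import numpy as np

def mahalanobis_shepard(x_query, X, Y, power=2.0):
    """Distance-weighted (Shepard) interpolation where distances are
    Mahalanobis rather than Euclidean, i.e. scaled by the inverse
    covariance of the sample points X (n x d). Y holds the known
    output values (n x m); returns the interpolated output at x_query.
    """
    VI = np.linalg.inv(np.cov(X, rowvar=False))     # inverse covariance
    diff = X - x_query
    d2 = np.einsum('ij,jk,ik->i', diff, VI, diff)   # squared Mahalanobis
    if np.any(d2 == 0):                             # exact hit: return it
        return Y[np.argmin(d2)]
    w = d2 ** (-power / 2.0)                        # inverse-distance weights
    return (w[:, None] * Y).sum(axis=0) / w.sum()
```

Because the weights are positive and normalized, the result is always a convex combination of the sample outputs, and a query at a sample point reproduces that sample exactly.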
ERIC Educational Resources Information Center
Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark
2014-01-01
An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at preferred walking pace (normal-weight and overweight walkers) or running pace (distance runners) for 5 min and corrected to a mile. Energy…
Measures for improving the zeppelin airships for long distance transportation
NASA Technical Reports Server (NTRS)
Duerr, L. F.
1980-01-01
Factors to be considered in the construction of dirigibles include the design and weight of support structures, static and aerodynamic loads on the main ring, the annealing of support materials, and the dynamic gas pressure. Adaptations made for using helium as the lifting gas, and a method for extracting ballast are described.
USDA-ARS?s Scientific Manuscript database
Mitochondria are essential subcellular organelles found in eukaryotic cells. Knowing information on a protein’s subcellular or sub subcellular location provides in-depth insights about the microenvironment where it interacts with other molecules and is crucial for inferring the protein’s function. T...
A novel iris patterns matching algorithm of weighted polar frequency correlation
NASA Astrophysics Data System (ADS)
Zhao, Weijie; Jiang, Linhua
2014-11-01
Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method, Weighted Polar Frequency Correlation (WPFC), to match and evaluate two iris images; it can also be used for evaluating the similarity of any two images. The WPFC method is a novel matching and evaluation method for iris image matching that is completely different from conventional methods. For instance, John Daugman's classical method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams with the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. The new method is motivated by the observation that the iris pattern carrying far more information for recognition is the fine structure at high frequencies, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and assign different weights to different frequencies. We then calculate the correlation of the two iris images in the frequency domain, evaluate them by summing the discrete correlation values with regulated weights, and compare the value with a preset threshold to tell whether the two iris images were captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable. Our method provides a new prospect for iris recognition systems.
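The idea of correlating frequency-domain representations with weights that emphasize fine structure can be sketched as follows. For brevity this toy version uses Cartesian rather than polar frequencies, and the hard low-frequency cutoff is an illustrative stand-in for the paper's regulated weights:

```python
import numpy as np

def wpfc_score(img1, img2, low_cut=0.1):
    """Toy weighted frequency-correlation score between two equally
    sized grayscale images: correlate FFT magnitudes, keeping only
    spatial frequencies above low_cut (in cycles/pixel) so that fine
    texture dominates the comparison, echoing the paper's motivation.
    The paper itself works in polar coordinates with its own weighting.
    """
    F1 = np.abs(np.fft.fftshift(np.fft.fft2(img1)))
    F2 = np.abs(np.fft.fftshift(np.fft.fft2(img2)))
    h, w = F1.shape
    fy, fx = np.meshgrid(np.linspace(-0.5, 0.5, h),
                         np.linspace(-0.5, 0.5, w), indexing='ij')
    keep = (np.hypot(fx, fy) > low_cut).ravel()   # discard low frequencies
    a, b = F1.ravel()[keep], F2.ravel()[keep]
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

An image correlated with itself scores 1, while two independent random textures score near 0, so a threshold between the two separates matches from non-matches in this toy setting.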
An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion
Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng
2015-01-01
The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
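A plausible reading of the improved Euclidean distance and the final weighted-fusion step might look like the sketch below; the paper's exact formulas may differ, and the function names, eps smoothing constant, and fusion weight are assumptions:

```python
import numpy as np

def improved_euclidean(rssi_obs, fp_mean, fp_std, eps=1.0):
    """Distance between an online RSSI vector and a fingerprint, with
    each access point's term scaled by that AP's signal standard
    deviation so unstable APs contribute less, smoothing WiFi signal
    fluctuation (an illustrative reading of the paper's idea).
    """
    return float(np.sqrt(np.sum(((rssi_obs - fp_mean) / (fp_std + eps)) ** 2)))

def fuse_positions(pos_dist, pos_prob, w=0.5):
    """Weighted fusion of the two intermediate position estimates
    (distance-based and probability-based) into the final result."""
    return w * np.asarray(pos_dist) + (1 - w) * np.asarray(pos_prob)
```

An AP with a large standard deviation is down-weighted, so a 10 dB discrepancy on a noisy AP contributes far less than it would in a plain Euclidean distance.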
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse-distance-weighting-based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusion technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRR regions are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated from the inverse-distance-interpolated observations, raw CMAQ and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
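Observation fusion of a gridded model field by inverse-distance interpolation of monitor residuals can be sketched as follows. This is a generic illustration; the paper's exact kernel, grid handling, and monitor matching may differ:

```python
import numpy as np

def fuse_model_with_obs(model_grid, grid_xy, obs, obs_xy, power=2.0):
    """Correct a model field (one value per grid cell, with cell
    coordinates grid_xy) using monitor observations obs at obs_xy:
    interpolate the (observation - model) residuals at monitor sites
    to all grid cells by inverse-distance weighting, then add them
    to the model field.
    """
    fused = model_grid.astype(float).copy()
    # residual at each monitor, using the nearest grid cell's model value
    resid = np.array([
        o - model_grid[np.argmin(np.linalg.norm(grid_xy - xy, axis=1))]
        for o, xy in zip(obs, obs_xy)
    ])
    for i, gxy in enumerate(grid_xy):
        d = np.linalg.norm(obs_xy - gxy, axis=1)
        if np.any(d == 0):                      # cell hosts a monitor
            fused[i] += resid[np.argmin(d)]
        else:
            w = d ** -power
            fused[i] += (w * resid).sum() / w.sum()
    return fused
```

Cells hosting a monitor are corrected exactly to the observation, and cells in between receive a distance-weighted blend of the nearby residuals.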
NASA Astrophysics Data System (ADS)
Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan
2018-06-01
This paper explores a highly accurate identification modeling approach for ship maneuvering motion with full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it can avoid the unmodeled dynamics and multicollinearity inherent to the conventional parametric model; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
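The prediction step of locally weighted learning with a diagonal distance metric can be sketched as follows; here the metric is simply supplied, rather than optimized by MIGI as in the paper, and the Gaussian kernel is an illustrative choice:

```python
import numpy as np

def lwl_predict(x_query, X, Y, metric_diag):
    """Locally weighted learning prediction: a Gaussian-kernel weighted
    average of training outputs Y, where the squared distance between
    x_query and each training input is scaled per-dimension by
    metric_diag (the diagonal distance metric the paper optimizes).
    """
    d2 = ((X - x_query) ** 2 * metric_diag).sum(axis=1)
    w = np.exp(-d2)                       # closer samples weigh more
    return float((w * Y).sum() / w.sum())
```

A larger metric entry narrows the neighborhood along that input dimension; the paper's contribution is choosing this metric automatically so the model neither over-learns nor under-learns.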
Pushing and pulling in relation to musculoskeletal disorders: a review of risk factors.
Hoozemans, M J; van der Beek, A J; Frings-Dresen, M H; van Dijk, F J; van der Woude, L H
1998-06-01
The objective was to review the literature on risk factors for musculoskeletal disorders related to pushing and pulling. The risk factors have been described and evaluated from four perspectives: epidemiology, psychophysics, physiology, and biomechanics. Epidemiological studies have shown, based on cross-sectional data, that pushing and pulling is associated with low back pain. Evidence with respect to complaints of other parts of the musculoskeletal system is lacking. Risk factors have been found to influence the maximum (acceptable) push or pull forces as well as the physiological and mechanical strain on the human body. The risk factors have been divided into: (a) work situation, such as distance, frequency, handle height, and cart weight, (b) actual working method and posture/movement/exerted forces, such as foot distance and velocity, and (c) worker's characteristics, such as body weight. Longitudinal epidemiological studies are needed to relate pushing and pulling to musculoskeletal disorders.
RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING
Liu, Meizhu; Vemuri, Baba C.
2011-01-01
Boosting is a versatile machine learning technique that has numerous applications including image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form), which represent the distribution over the training data and the classification error respectively, to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost than other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
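The closed-form Riemannian distance between square-root densities, which are points on the unit sphere, can be sketched as follows; scaling conventions for this geodesic distance vary by a constant factor across papers, so the unscaled arccos form is shown:

```python
import numpy as np

def sqrt_density_distance(p, q):
    """Closed-form geodesic distance between two discrete probability
    distributions p and q via their square roots, which lie on the unit
    sphere: d = arccos(sum_i sqrt(p_i * q_i)). The argument is the
    Bhattacharyya coefficient; clipping guards against round-off.
    """
    bc = np.clip(np.sqrt(np.asarray(p) * np.asarray(q)).sum(), -1.0, 1.0)
    return float(np.arccos(bc))
```

Identical distributions are at distance 0, distributions with disjoint support are at the maximal distance pi/2, and the distance is symmetric, which is what makes it usable as a cheap regularizer.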
Baek, Jonggyu; Sanchez-Vaznaugh, Emma V.; Sánchez, Brisa N.
2016-01-01
It is well known that associations between features of the built environment and health depend on the geographic scale used to construct environmental attributes. In the built environment literature, it has long been argued that geographic scales may vary across study locations. However, this hypothesized variation has not been systematically examined due to a lack of available statistical methods. We propose a hierarchical distributed-lag model (HDLM) for estimating the underlying overall shape of food environment–health associations as a function of distance from locations of interest. This method enables indirect assessment of relevant geographic scales and captures area-level heterogeneity in the magnitudes of associations, along with relevant distances within areas. The proposed model was used to systematically examine area-level variation in the association between availability of convenience stores around schools and children's weights. For this case study, body mass index (weight (kg)/height (m)²) z scores (BMIz) for 7th grade children collected via California's 2001–2009 FitnessGram testing program were linked to a commercial database that contained locations of food outlets statewide. Findings suggested that convenience store availability may influence BMIz only in some places and at varying distances from schools. Future research should examine localized environmental or policy differences that may explain the heterogeneity in convenience store–BMIz associations. PMID:26888753
Hanigan, Ivan; Hall, Gillian; Dear, Keith B G
2006-09-13
To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations, but the rationale for using one technique rather than another, the significance of the differences in the values obtained, and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Options based on values derived from sites internal to postal areas, or from nearest neighbour sites--that is, using proximity polygons around weather stations intersected with postal areas--tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within a 50 kilometre radius of centroids, with weighting of data by distance from centroids, gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from those using population-weighted centroids or the population-weighted average of sub-unit estimates. To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited.
The most appropriate method conceptually is the use of weather data from sites within 50 kilometres radius of the area weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid.
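The preferred option above (stations within a 50 km radius, weighted by distance to a centroid) can be sketched as follows. The inverse-square weighting, the haversine distance, and the station tuple layout are illustrative assumptions, not the paper's exact scheme:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def idw_estimate(centroid, stations, radius_km=50.0, power=2.0):
    """Inverse-distance-weighted estimate at a (lat, lon) centroid from
    (lat, lon, value) stations within radius_km; None when none qualify."""
    num = den = 0.0
    for lat, lon, value in stations:
        d = haversine_km(centroid[0], centroid[1], lat, lon)
        if d > radius_km:
            continue  # station outside the 50 km radius
        if d == 0:
            return value  # station sits exactly on the centroid
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den if den else None
```

For a centroid with two stations roughly 11 km and 22 km away, the nearer station's observation receives four times the weight of the farther one under inverse-square weighting.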
PARAMETRIC DISTANCE WEIGHTING OF LANDSCAPE INFLUENCE ON STREAMS
We present a parametric model for estimating the areas within watersheds whose land use best predicts indicators of stream ecological condition. We regress a stream response variable on the distance-weighted proportion of watershed area that has a specific land use, such as agric...
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed by discriminating each boundary via the Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most boundary-based methods, and the integrated result outperforms all state-of-the-art methods.
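The boundary-discrimination step relies on the Hausdorff distance between boundary feature sets; a minimal pure-Python version over small point sets (standing in for the paper's boundary descriptors, which it does not specify here) is:

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two sets of feature vectors."""
    def euclid(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    def directed(u, v):
        # For each point of u, the distance to its nearest point in v; take the max.
        return max(min(euclid(p, q) for q in v) for p in u)

    return max(directed(a, b), directed(b, a))
```

Because the two directed terms differ in general, taking their maximum makes the measure symmetric and sensitive to the worst-matched point, which is what makes it useful for flagging a boundary that differs from the others.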
Li, Longxiang; Gong, Jianhua; Zhou, Jieping
2014-01-01
Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
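The core idea above, replacing the Euclidean distances in IDW with shortest-path distances over a cost surface, can be sketched on a grid. The 4-neighbour moves and unit cell costs here are simplifying assumptions; the paper derives its cost surface from an interpolated wind-field and a Gaussian dispersion model:

```python
import heapq

def shortest_path_dists(cost, src):
    """Dijkstra over a 2-D cost grid: stepping into cell (i, j) adds
    cost[i][j]. Returns the grid of shortest accumulated costs from src."""
    n, m = len(cost), len(cost[0])
    dist = [[float("inf")] * m for _ in range(n)]
    dist[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i][j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and d + cost[ni][nj] < dist[ni][nj]:
                dist[ni][nj] = d + cost[ni][nj]
                heapq.heappush(heap, (dist[ni][nj], (ni, nj)))
    return dist

def idw(target, monitors, power=2.0):
    """Inverse distance weighting using precomputed shortest-path distances:
    monitors is a list of (distance_grid, observed_value) pairs."""
    num = den = 0.0
    for grid, value in monitors:
        d = grid[target[0]][target[1]]
        if d == 0:
            return value  # target coincides with a monitor
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

With a wind-derived cost surface, cells downwind of a monitor become "closer" than their Euclidean distance suggests, which is what reproduces the wind-flow effect in the interpolated surface.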
Evaluation of Spontaneous Spinal Cerebrospinal Fluid Leaks Disease by Computerized Image Processing.
Yıldırım, Mustafa S; Kara, Sadık; Albayram, Mehmet S; Okkesim, Şükrü
2016-05-17
Spontaneous Spinal Cerebrospinal Fluid Leaks (SSCFL) is a disease caused by tears in the dura mater. Due to its widespread symptoms and low frequency, diagnosis is problematic. Diagnostic lumbar puncture is commonly used for diagnosing SSCFL, though it is invasive and may cause pain, inflammation or new leakages. T2-weighted MR imaging is also used for diagnosis; however, the literature on T2-weighted MRI states that findings for the diagnosis of SSCFL can be erroneous when differentiating diseased from control subjects. Another diagnostic technique is CT-myelography, but this has been suggested to be less successful than T2-weighted MRI, and it requires an initial lumbar puncture. This study aimed to develop an objective, computerized numerical analysis method using noninvasive routine Magnetic Resonance Images that can be used in the evaluation and diagnosis of SSCFL. Brain boundaries were automatically detected using methods of mathematical morphology, and a distance transform was employed. According to normalized distances, average densities at certain sites were proportioned and a numerical criterion related to cerebrospinal fluid distribution was calculated. The developed method differentiated between 14 patients and 14 control subjects significantly (p = 0.0088, d = 0.958). Also, pre- and post-treatment MRIs of four patients were obtained and analyzed; the results differed statistically (p = 0.0320, d = 0.853). An original, noninvasive and objective diagnostic test based on computerized image processing has been developed for the evaluation of SSCFL. To our knowledge, this is the first computerized image processing method for evaluation of the disease. Discrimination between patients and controls shows the validity of the method, and the post-treatment changes observed in four patients support this conclusion.
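The distance-transform step above (distance from each pixel inside the detected brain boundary to the nearest background pixel) can be illustrated with a brute-force 2-D version; production code would typically use an optimized routine such as scipy.ndimage.distance_transform_edt instead:

```python
def distance_transform(mask):
    """Brute-force Euclidean distance transform: for each foreground cell
    (mask value 1), the distance to the nearest background cell (value 0).
    Assumes at least one background cell exists."""
    n, m = len(mask), len(mask[0])
    bg = [(i, j) for i in range(n) for j in range(m) if mask[i][j] == 0]
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if mask[i][j]:
                out[i][j] = min(((i - bi) ** 2 + (j - bj) ** 2) ** 0.5
                                for bi, bj in bg)
    return out
```

Normalizing these distances by their maximum gives a relative "depth" coordinate inside the brain, along the lines of the normalized distances the study uses to select sites.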
McCormack, Gavin R; Virk, Jagdeep S
2014-09-01
Higher levels of sedentary behavior are associated with adverse health outcomes. Over-reliance on private motor vehicles for transportation is a potential contributor to the obesity epidemic. The objective of this study was to review evidence on the relationship between motor vehicle travel distance and time and weight status among adults. Keywords associated with driving and weight status were entered into four databases (PubMed, Medline, the Transportation Research Information Database and Web of Science), and retrieved article titles and abstracts were screened for relevance. Relevant articles were assessed for their eligibility for inclusion in the review (English-language articles with a sample ≥ 16 years of age that included a measure of time or distance traveled in a motor vehicle and a measure of weight status, and that estimated the association between driving and weight status). The database search yielded 2781 articles, from which 88 were deemed relevant and 10 studies met the inclusion criteria. Of the 10 studies included in the review, 8 found a statistically significant positive association between time and distance traveled in a motor vehicle and weight status. Multilevel interventions that make alternatives to driving private motor vehicles, such as walking and cycling, more convenient are needed to promote healthy weight in the adult population. Copyright © 2014 Elsevier Inc. All rights reserved.
2011-07-28
missions. It was recognized that a failure to remove the trees could result in valuable data loss on Eglin test missions. Four different methods have...4-57 to 4-59) The munitions used on TA C-72 would cause noise levels of less than 115 P-weighted (impulse sound) decibels, and receptors would not...approximate calculation is: 600 x the cube root of the net explosive weight (NEW) = distance to the reservation boundary (in feet). No detonation can
Zheng, Guoyan; Chu, Chengwen; Belavý, Daniel L; Ibragimov, Bulat; Korez, Robert; Vrtovec, Tomaž; Hutt, Hugo; Everson, Richard; Meakin, Judith; Andrade, Isabel Lŏpez; Glocker, Ben; Chen, Hao; Dou, Qi; Heng, Pheng-Ann; Wang, Chunliang; Forsberg, Daniel; Neubert, Aleš; Fripp, Jurgen; Urschler, Martin; Stern, Darko; Wimmer, Maria; Novikov, Alexey A; Cheng, Hui; Armbrecht, Gabriele; Felsenberg, Dieter; Li, Shuo
2017-01-01
The evaluation of changes in Intervertebral Discs (IVDs) with 3D Magnetic Resonance (MR) Imaging (MRI) can be of interest for many clinical applications. This paper presents the evaluation of both IVD localization and IVD segmentation methods submitted to the Automatic 3D MRI IVD Localization and Segmentation challenge, held at the 2015 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2015) with an on-site competition. With the construction of a manually annotated reference data set composed of 25 3D T2-weighted MR images acquired from two different studies and the establishment of a standard validation framework, quantitative evaluation was performed to compare the results of methods submitted to the challenge. Experimental results show that overall the best localization method achieves a mean localization distance of 0.8 mm and the best segmentation method achieves a mean Dice of 91.8%, a mean average absolute distance of 1.1 mm and a mean Hausdorff distance of 4.3 mm, respectively. The strengths and drawbacks of each method are discussed, which provides insights into the performance of different IVD localization and segmentation methods. Copyright © 2016 Elsevier B.V. All rights reserved.
A nonparametric multiple imputation approach for missing categorical data.
Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh
2017-06-06
Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing-at-random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme, even under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method.
We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
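A minimal sketch of the donor-selection step described above, assuming the two working-model predictions are already available as scalar scores per record; the absolute-difference distance, the 50/50 weight, and the donor-set size k are illustrative choices, not the paper's:

```python
import random

def impute(missing_scores, observed, k=5, w=0.5, rng=None):
    """Nearest-neighbour imputation on a composite predictive score.
    observed: (outcome_score, missingness_score, category) per donor;
    missing_scores: (outcome_score, missingness_score) per record to impute.
    Distance is a w-weighted sum of the gaps on the two working-model scores."""
    rng = rng or random.Random(0)
    imputed = []
    for so, sm in missing_scores:
        # Rank donors by the weighted distance on the two scores,
        # then draw at random from the k nearest (the donor set).
        donors = sorted(
            observed,
            key=lambda r: w * abs(r[0] - so) + (1 - w) * abs(r[1] - sm))
        imputed.append(rng.choice([cat for _, _, cat in donors[:k]]))
    return imputed
```

Running the whole procedure several times with different random draws yields the multiple imputations to be combined in the usual way.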
NASA Astrophysics Data System (ADS)
Bae, Young K.
2006-01-01
Formation flying of clusters of micro-, nano- and pico-satellites has been recognized to be more affordable, robust and versatile than building a large monolithic satellite for implementing next-generation space missions requiring large apertures or large sample-collection areas and sophisticated Earth imaging/monitoring. We propose a propellant-free, thus contamination-free, method that enables ultrahigh-precision satellite formation flying with intersatellite distance accuracy at the nm (10⁻⁹ m) level at maximum estimated distances of the order of tens of km. The method is based on ultrahigh-precision CW intracavity photon thrusters and tethers. The pushing-out force of the intracavity photon thruster and the pulling-in force of the tether tension between satellites form the basic force structure to stabilize crystalline-like structures of satellites and/or spacecraft with a relative distance accuracy better than a nm. The thrust of the photons can be amplified by up to tens of thousands of times by bouncing them between two mirrors located separately on pairing satellites. For example, a 10 W photon thruster, suitable for micro-satellite applications, is theoretically capable of providing thrusts up to the mN level, and its weight and power consumption are estimated to be several kg and tens of W, respectively. The dual use of the photon thruster as a precision laser source for the interferometric ranging system further simplifies the system architecture and minimizes the weight and power consumption. The present method does not require propellant, thus provides significant propulsion-system mass savings, and is free from propellant exhaust contamination, ideal for missions that require large apertures composed of highly sensitive sensors. The system can be readily scaled down for nano- and pico-satellite applications.
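The quoted figures are consistent with simple photon-momentum bookkeeping: reflecting a beam of power P exerts a force of 2P/c, and bouncing the beam N times between the paired mirrors amplifies the thrust roughly N-fold (mirror losses are ignored in this sketch):

```python
def photon_thrust(power_w, bounces=1):
    """Thrust (N) from a beam of power_w watts reflected `bounces` times
    between the paired mirrors: each reflection contributes 2P/c."""
    c = 299_792_458.0  # speed of light, m/s
    return 2.0 * power_w * bounces / c
```

A single reflection of a 10 W beam gives only ~67 nN; with the stated amplification of tens of thousands of bounces (e.g. 15,000), the same beam delivers on the order of 1 mN, matching the mN-level figure in the abstract.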
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
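The redefined metric, a weighted sum of centroid distance and an intersecting-road-network term, might look like the following sketch; the weight alpha and the normalized road-overlap term are assumptions for illustration, not the paper's calibrated formulation:

```python
def proximity(centroid_a, centroid_b, road_overlap, alpha=0.7):
    """Weighted-sum distance between two ZIP codes: alpha weights the
    Euclidean centroid distance, (1 - alpha) weights a road-network term.
    road_overlap in [0, 1] is the fraction of intersecting road network;
    more shared roads means a smaller effective distance."""
    dx = centroid_a[0] - centroid_b[0]
    dy = centroid_a[1] - centroid_b[1]
    centroid_dist = (dx * dx + dy * dy) ** 0.5
    return alpha * centroid_dist + (1 - alpha) * (1.0 - road_overlap)
```

Ad-Hoc proximity is then a single evaluation of this metric, while Top-K proximity ranks all candidate ZIP codes by it and keeps the K smallest values.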
Application of distance correction to ChemCam laser-induced breakdown spectroscopy measurements
Mezzacappa, A.; Melikechi, N.; Cousin, A.; ...
2016-04-04
Laser-induced breakdown spectroscopy (LIBS) provides chemical information from atomic, ionic, and molecular emissions from which geochemical composition can be deciphered. Analysis of LIBS spectra in cases where targets are observed at different distances, as is the case for the ChemCam instrument on the Mars rover Curiosity, which performs analyses at distances between 2 and 7.4 m, is not a simple task. Previously, we showed that spectral distance correction based on a proxy spectroscopic standard created from first-shot dust observations on Mars targets ameliorates the distance bias in multivariate-based elemental-composition predictions of laboratory data. In this work, we correct an expanded set of neutral and ionic spectral emissions for distance bias in the ChemCam data set. By using and testing different selection criteria to generate multiple proxy standards, we find a correction that minimizes the difference in spectral intensity measured at two different distances and increases spectral reproducibility. When the quantitative performance of distance correction is assessed, there is improvement for SiO2, Al2O3, CaO, FeOT, Na2O, K2O, that is, for most of the major rock-forming elements, and for the total major-element weight percent predicted. For MgO, however, the method does not provide improvements, while for TiO2 it yields inconsistent results. Additionally, we observed that many emission lines do not behave consistently with distance, as evidenced by laboratory analogue measurements and ChemCam data. This limits the effectiveness of the method.
First Principles Modeling of the Performance of a Hydrogen-Peroxide-Driven Chem-E-Car
ERIC Educational Resources Information Center
Farhadi, Maryam; Azadi, Pooya; Zarinpanjeh, Nima
2009-01-01
In this study, the performance of a hydrogen-peroxide-driven car has been simulated using basic conservation laws and a few auxiliary equations. A numerical method was implemented to solve sets of highly non-linear ordinary differential equations. Transient pressure and the corresponding traveled distance for three different car weights are…
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted and the G-statistic is used to measure the distance between different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using an adaptive weight. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with several state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods compared, which confirms its validity and effectiveness in OBCD.
A rough set approach for determining weights of decision makers in group decision making
Yang, Qiang; Du, Ping-an; Wang, Yong; Liang, Bin
2017-01-01
This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decision in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrixes on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) based on the average matrix of the rough group decision, and negative ideal solutions (NISs) based on the lower and upper limit matrixes of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Through comparisons with existing methods and an on-line business manager selection example, the proposed method is shown to provide more insight into the subjectivity and vagueness of DMs' evaluations and selections. PMID:28234974
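The relative-closeness step (weights from distances to the PIS and the NISs) can be sketched as follows, with flat score vectors standing in for the rough decision matrices; the Euclidean distance and the averaging over NISs are illustrative simplifications:

```python
def relative_closeness(decision, pis, nis_list):
    """Relative closeness of one DM's decision vector to the positive ideal
    solution (PIS), given one or more negative ideal solutions (NISs)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    d_pos = dist(decision, pis)
    d_neg = sum(dist(decision, nis) for nis in nis_list) / len(nis_list)
    return d_neg / (d_pos + d_neg)  # in [0, 1]; higher means closer to the PIS

def dm_weights(decisions, pis, nis_list):
    # Normalize the closeness scores so the DM weights sum to one.
    c = [relative_closeness(d, pis, nis_list) for d in decisions]
    return [ci / sum(c) for ci in c]
```

A DM whose judgments sit near the group's average (the PIS) and far from the limit matrices (the NISs) thus receives a larger weight in the aggregation.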
Distance Magic-Type and Distance Antimagic-Type Labelings of Graphs
NASA Astrophysics Data System (ADS)
Freyberg, Bryan J.
Generally speaking, a distance magic-type labeling of a graph G of order n is a bijection l from the vertex set of the graph to the first n natural numbers or to the elements of a group of order n, with the property that the weight of each vertex is the same. The weight of a vertex x is defined as the sum (or appropriate group operation) of all the labels of vertices adjacent to x. If instead we require that all weights differ, then we refer to the labeling as a distance antimagic-type labeling. This idea can be generalized for directed graphs; the weight will take into consideration the direction of the arcs. In this manuscript, we provide new results for d-handicap labeling, a distance antimagic-type labeling, and introduce a new distance magic-type labeling called orientable Gamma-distance magic labeling. A d-handicap distance antimagic labeling (or just d-handicap labeling for short) of a graph G = (V,E) of order n is a bijection l from V to the set {1,2,...,n} with induced weight function [special characters omitted] such that l(xi) = i and the sequence of weights w(x1), w(x2), ..., w(xn) forms an arithmetic sequence with constant difference d at least 1. If a graph G admits a d-handicap labeling, we say G is a d-handicap graph. A d-handicap incomplete tournament, H(n,k,d), is an incomplete tournament of n teams ranked with the first n natural numbers such that each team plays exactly k games and the strength of schedule of the ith ranked team is d more than that of the (i+1)st ranked team. That is, strength of schedule increases arithmetically with strength of team. Constructing an H(n,k,d) is equivalent to finding a d-handicap labeling of a k-regular graph of order n. In Chapter 2 we provide general constructions for every d for large classes of both n and k, providing breadth and depth to the catalog of known H(n,k,d)'s. In Chapters 3-6, we introduce a new type of labeling called orientable Gamma-distance magic labeling.
Let Gamma be an abelian group of order n. If for a graph G = (V,E) of order n there exists an orientation of the edges of G and a companion bijection from V to Gamma with the property that there is an element mu of Gamma (called the magic constant) such that [special characters omitted], where w(x) is the weight of vertex x, we say that G is orientable Gamma-distance magic. In addition to introducing the concept, we provide numerous results on orientable Zn-distance magic graphs, where Zn is the cyclic group of order n. In Chapter 7, we summarize the results of this dissertation and provide suggestions for future work.
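The vertex-weight definitions above are easy to check computationally. A small sketch (integer labels, undirected adjacency lists; a helper written for this illustration, not part of the dissertation) that classifies a labeling as distance magic or distance antimagic:

```python
def vertex_weights(adj, labels):
    """Weight of each vertex: the sum of the labels on its neighbours."""
    return [sum(labels[j] for j in adj[i]) for i in range(len(adj))]

def classify(adj, labels):
    """'magic' if all vertex weights are equal, 'antimagic' if all are
    distinct, otherwise None (a mixed weight sequence)."""
    w = vertex_weights(adj, labels)
    if len(set(w)) == 1:
        return "magic"
    if len(set(w)) == len(w):
        return "antimagic"
    return None
```

For example, the 4-cycle with labels 1, 2, 4, 3 around the cycle is distance magic (every weight is 5, since opposite labels pair as 1+4 = 2+3), while the triangle K3 under any bijective labeling has all-distinct weights.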
A revised moving cluster distance to the Pleiades open cluster
NASA Astrophysics Data System (ADS)
Galli, P. A. B.; Moraux, E.; Bouy, H.; Bouvier, J.; Olivares, J.; Teixeira, R.
2017-02-01
Context. The distance to the Pleiades open cluster has been extensively debated in the literature over several decades. Although different methods point to a discrepancy in the trigonometric parallaxes produced by the Hipparcos mission, the number of individual stars with known distances is still small compared to the number of cluster members available to help solve this problem. Aims: We provide a new distance estimate for the Pleiades based on the moving cluster method, which will be useful for further discussion of the so-called Pleiades distance controversy and for comparison with the very precise parallaxes from the Gaia space mission. Methods: We apply a refurbished implementation of the convergent point search method to an updated census of Pleiades stars to calculate the convergent point position of the cluster from stellar proper motions. Then, we derive individual parallaxes for 64 cluster members using radial velocities compiled from the literature, and approximate parallaxes for another 1146 stars based on the spatial velocity of the cluster. This represents the largest sample of Pleiades stars with individual distances to date. Results: The parallaxes derived in this work are in good agreement with previous results obtained in different studies (excluding Hipparcos) for individual stars in the cluster. We report a mean parallax of 7.44 ± 0.08 mas and a corresponding distance that is consistent with the weighted mean of 135.0 ± 0.6 pc obtained from the non-Hipparcos results in the literature. Conclusions: Our result for the distance to the Pleiades open cluster is not consistent with the Hipparcos catalog, but favors the recent and more precise distance determination of 136.2 ± 1.2 pc obtained from Very Long Baseline Interferometry observations. It is also in good agreement with the mean distance of 133 ± 5 pc obtained from the first trigonometric parallaxes delivered by the Gaia satellite for the brightest cluster members in common with our sample.
Full Table B.2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A48
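The conversions underlying these comparisons, distance as the reciprocal of parallax and an inverse-variance weighted mean (as used for the 135.0 ± 0.6 pc literature value), can be sketched as:

```python
def parallax_to_distance_pc(parallax_mas):
    """Distance in parsecs from a trigonometric parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

def weighted_mean(values, errors):
    """Inverse-variance weighted mean of measurements and its standard error."""
    w = [1.0 / e ** 2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mean, (1.0 / sum(w)) ** 0.5
```

With the reported mean parallax, parallax_to_distance_pc(7.44) gives about 134.4 pc, consistent with the weighted literature mean quoted above.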
NASA Astrophysics Data System (ADS)
Ebisu, Keita; Belanger, Kathleen; Bell, Michelle L.
2014-08-01
Several papers reported associations between airborne fine particulate matter (PM2.5) and birth weight, though findings are inconsistent across studies. Conflicting results might be due to (1) different PM2.5 chemical structure across locations, and (2) various exposure assignment methods across studies even among the studies that use ambient monitors to assess exposure. We investigated associations between birth weight and PM2.5 chemical constituents, considering issues arising from choice of buffer size (i.e. distance between residence and pollution monitor). We estimated the association between each pollutant and term birth weight applying buffers of 5 to 30 km in Connecticut (2000-2006), in the New England region of the USA. We also investigated the implication of the choice of buffer size in relation to population characteristics, such as socioeconomic status. Results indicate that some PM2.5 chemical constituents, such as nitrate, are associated with lower birth weight and appear more harmful than other constituents. However, associations vary with buffer size and the implications of different buffer sizes may differ by pollutant. A homogeneous pollutant level within a certain distance is a common assumption in many environmental epidemiology studies, but the validity of this assumption may vary by pollutant. Furthermore, we found that areas close to monitors reflect more minority and lower socio-economic populations, which implies that different exposure approaches may result in different types of study populations. Our findings demonstrate that choosing an exposure method involves key tradeoffs of the impacts of exposure misclassification, sample size, and population characteristics.
Dai, Jianrong; Que, William
2004-12-07
This paper introduces a method to simultaneously minimize the leaf travel distance and the tongue-and-groove effect for IMRT leaf sequences to be delivered in segmental mode. The basic idea is to add a large enough number of openings through cutting or splitting existing openings for those leaf pairs with openings fewer than the number of segments so that all leaf pairs have the same number of openings. The cutting positions are optimally determined with a simulated annealing technique called adaptive simulated annealing. The optimization goal is set to minimize the weighted summation of the leaf travel distance and tongue-and-groove effect. Its performance was evaluated with 19 beams from three clinical cases; one brain, one head-and-neck and one prostate case. The results show that it can reduce the leaf travel distance and (or) tongue-and-groove effect; the reduction of the leaf travel distance reaches its maximum of about 50% when minimized alone; the reduction of the tongue-and-groove reaches its maximum of about 70% when minimized alone. The maximum reduction in the leaf travel distance translates to a 1 to 2 min reduction in treatment delivery time per fraction, depending on leaf speed. If the method is implemented clinically, it could result in significant savings in treatment delivery time, and also result in significant reduction in the wear-and-tear of MLC mechanics.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
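The metric-learning step, a weight vector over visual-word signatures fit by a margin-based cost separating similar from dissimilar pairs, can be sketched with plain subgradient descent; the squared-difference features, learning rate, and margin here are illustrative assumptions, not the paper's exact optimization:

```python
def weighted_dist(w, a, b):
    """Weighted squared distance between two visual-word signatures."""
    return sum(wk * (x - y) ** 2 for wk, x, y in zip(w, a, b))

def learn_weights(similar, dissimilar, dim, margin=1.0, lr=0.1, epochs=100):
    """Subgradient descent on a margin cost: similar pairs should be close,
    dissimilar pairs at least `margin` apart; weights kept non-negative."""
    w = [1.0] * dim
    for _ in range(epochs):
        grad = [0.0] * dim
        for a, b in similar:
            # Pull similar pairs together: their distance adds to the cost.
            for k in range(dim):
                grad[k] += (a[k] - b[k]) ** 2
        for a, b in dissimilar:
            # Push apart only pairs that violate the margin (hinge loss).
            if weighted_dist(w, a, b) < margin:
                for k in range(dim):
                    grad[k] -= (a[k] - b[k]) ** 2
        w = [max(0.0, wk - lr * gk) for wk, gk in zip(w, grad)]
    return w
```

Dimensions on which similar pairs disagree are driven toward zero weight, while dimensions that separate dissimilar pairs keep their weight, which is the qualitative behavior a learned retrieval metric needs.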
Sinclair, R C F; Batterham, A M; Davies, S; Cawthorn, L; Danjoux, G R
2012-01-01
For perioperative risk stratification, a robust, practical test is needed where cardiopulmonary exercise testing (CPET) is unavailable. The aim of this study was to assess the utility of the 6 min walk test (6MWT) distance to discriminate between low and high anaerobic threshold (AT) in patients awaiting major non-cardiac surgery. In 110 participants, we obtained oxygen consumption at the AT from CPET and recorded the distance walked (in m) during a 6MWT. Receiver operating characteristic (ROC) curve analysis was used to derive two different cut-points for 6MWT distance in predicting an AT of <11 ml O2 kg-1 min-1: one using the highest sum of sensitivity and specificity (conventional method) and the other adopting a 2:1 weighting in favour of sensitivity. In addition, using a novel linear regression-based technique, we obtained lower and upper cut-points for 6MWT distance that are predictive of an AT that is likely to be (P≥0.75) <11 or >11 ml O2 kg-1 min-1. The ROC curve analysis revealed an area under the curve of 0.85 (95% confidence interval, 0.77-0.91). The optimum cut-points were <440 m (conventional method) and <502 m (sensitivity-weighted approach). The regression-based lower and upper 6MWT distance cut-points were <427 and >563 m, respectively. Patients walking >563 m in the 6MWT do not routinely require CPET; those walking <427 m should be referred for further evaluation. In situations of 'clinical uncertainty' (≥427 but ≤563 m), the number of clinical risk factors and magnitude of surgery should be incorporated into the decision-making process. The 6MWT is a useful clinical tool to screen and risk stratify patients in departments where CPET is unavailable.
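The regression-based decision rule can be written as a small triage function. The cut-points (<427 m, >563 m) are those reported in the abstract; the function name and return strings are illustrative.

```python
def triage_6mwt(distance_m: float) -> str:
    """Triage a patient from 6-min walk test distance using the
    regression-based cut-points reported in the study."""
    if distance_m < 427:
        # AT likely < 11 ml O2 kg-1 min-1
        return "refer for further evaluation"
    if distance_m > 563:
        # AT likely > 11 ml O2 kg-1 min-1
        return "CPET not routinely required"
    # Between the cut-points: weigh clinical risk factors and surgery magnitude.
    return "clinical uncertainty: consider risk factors and surgery magnitude"
```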
Six-minute walk test in children and adolescents with cystic fibrosis.
Cunha, Maristela Trevisan; Rozov, Tatiana; de Oliveira, Rosangela Caitano; Jardim, José R
2006-07-01
The 6-min walk test is a simple, rapid, and low-cost method that determines tolerance to exercise. We examined the reproducibility of the 6-min walk test in 16 children with cystic fibrosis (11 female, 5 male; mean age, 11.0 +/- 1.9 years). We related the distance walked and the work performed (distance walked x body weight) with nutritional (body mass index and respiratory muscle strength) and clinical (degree of bronchial obstruction and Shwachman score) status. Patients were asked to walk as far as possible upon verbal command on two occasions. There was no statistical difference between the two tests in distance walked (582.3 +/- 60 and 598.2 +/- 56.8 m, P = 0.31), heart rate, respiratory rate, pulse oxygen saturation, arterial blood pressure, dyspnea, or percentage of maximal heart rate for age. Distance walked correlated (Pearson) with maximal expiratory pressure (98.6 +/- 28.1 cmH2O, r = 0.60, P < 0.01), maximal heart rate (157.9 +/- 10.1 bpm, r = 0.59, P < 0.02), Borg dyspnea scale (1.7 +/- 2.4, r = 0.55, P < 0.03), and double product (blood pressure x heart rate; r = 0.59, P < 0.02). The product of distance walked and body weight (work) correlated (Pearson) with height (r = 0.83, P < 0.001), maximal expiratory pressure (r = 0.64, P < 0.01), systolic blood pressure (r = 0.56, P < 0.02), and diastolic blood pressure (r = 0.55, P < 0.03). We conclude that the 6-min walk test is reproducible and easy to perform in children and adolescents with cystic fibrosis. The distance walked was related to the clinical variables studied. Work in the 6-min walk test may be an additional parameter in the determination of physical capacity.
Mathematical model in post-mortem estimation of brain edema using morphometric parameters.
Radojevic, Nemanja; Radnic, Bojana; Vucinic, Jelena; Cukic, Dragana; Lazovic, Ranko; Asanin, Bogdan; Savic, Slobodan
2017-01-01
Current autopsy principles for evaluating the existence of brain edema are based on a macroscopic subjective assessment performed by pathologists; the gold standard is a time-consuming histological verification of the presence of the edema. By measuring the diameters of the cranial cavity, as individually determined morphometric parameters, a mathematical model for rapid evaluation of brain edema was created, based on the brain weight measured during the autopsy. A cohort study was performed on 110 subjects, divided into two groups according to the histological presence or absence of brain edema. In all subjects, the following measures were determined: the volume and the diameters of the cranial cavity (longitudinal and transverse distance and height), the brain volume, and the brain weight. The complex mathematical algorithm revealed a formula for the coefficient ε, which indicates whether brain edema is present. The average density of non-edematous brain is 0.967 g/ml, while the average density of edematous brain is 1.148 g/ml. The resulting formula for the coefficient ε is (5.79 x longitudinal distance x transverse distance)/brain weight. The coefficient ε can be calculated from the diameters of the cranial cavity and the brain weight, measured during the autopsy. If the resulting ε is less than 0.9484, it can be stated that cerebral edema is present with a reliability of 98.5%. The method discussed in this paper aims to eliminate the burden of relying on subjective assessments when determining the presence of brain edema. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
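The reported formula and threshold lend themselves to a direct calculation. This is a sketch: the abstract does not state the measurement units, so consistent units (e.g. cranial diameters in cm, brain weight in g) are assumed here.

```python
def epsilon_coefficient(longitudinal, transverse, brain_weight):
    """Coefficient epsilon = (5.79 x longitudinal x transverse) / brain weight,
    as reported in the abstract. Units are an assumption (cm, cm, g)."""
    return (5.79 * longitudinal * transverse) / brain_weight

def edema_present(eps, threshold=0.9484):
    """Epsilon below 0.9484 indicates cerebral edema
    (reported reliability: 98.5%)."""
    return eps < threshold
```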
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiencies of spatial use. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
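For an undirected graph with adjacency matrix A and communicability matrix G = exp(A), the quantity G_pp + G_qq - 2*G_pq underlying the communicability distance can be computed via eigendecomposition. A sketch under that formulation (in Estrada's work the distance itself is the square root of this quantity):

```python
import numpy as np

def communicability_distance_sq(A):
    """Matrix of xi_pq = G_pp + G_qq - 2*G_pq with G = exp(A),
    computed by eigendecomposition (A symmetric for undirected graphs)."""
    A = np.asarray(A, dtype=float)
    w, V = np.linalg.eigh(A)
    G = (V * np.exp(w)) @ V.T      # G = V diag(exp(w)) V^T = exp(A)
    g = np.diag(G)
    # Broadcast g over rows and columns to form the pairwise combination.
    return g[:, None] + g[None, :] - 2.0 * G
```

The resulting matrix is symmetric with a zero diagonal, as a squared-distance matrix should be.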
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the field of symbolic data analysis. One such type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method capable of performing both automatic feature selection and pruning of unused prototypes, and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. Compared to other prototype-based methods, the proposed method achieves lower error rates on both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
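A weighted squared Euclidean distance for interval-valued features commonly takes the following form, summing weighted squared differences of the interval bounds. This is a sketch of the general idea; the paper's generalized distance may differ in its exact parametrization.

```python
def interval_wsq_euclidean(x, y, w):
    """Weighted squared Euclidean distance between two interval-valued
    observations. x, y: sequences of (lower, upper) bounds per feature;
    w: per-feature weights."""
    return sum(wj * ((xl - yl) ** 2 + (xu - yu) ** 2)
               for wj, (xl, xu), (yl, yu) in zip(w, x, y))
```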
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
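The hydrological distance described above, a weighted distance over catchment attributes with attribute weights to be optimized, can be sketched as follows. The weighted-Euclidean form and the attribute vectors are assumptions for illustration.

```python
def hydrological_distance(attrs_s, attrs_t, weights):
    """Weighted distance between two catchments' attribute vectors
    (physical and meteorological attributes). The weights are the
    quantities the regionalization method optimizes over."""
    return sum(w * (a - b) ** 2
               for w, a, b in zip(weights, attrs_s, attrs_t)) ** 0.5
```

Gauged stations close to the target in this distance would form the homogeneous group used to fit the generalized extreme value parameters.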
Romppanen, T; Huttunen, E; Helminen, H J
1980-07-01
An improved light microscopical histoquantitative method for analyzing the stereologic structure of the ventral lobe of the rat prostate is introduced. From paraffin-embedded tissue sections, the volumetric fractions of the acinar parenchyma, the glandular epithelium, the glandular lumen, and the interacinar tissue were determined. The surface density of the glandular epithelium and the length density of the glandular tubules per cubic millimeter of tissue were also calculated. The corresponding total quantity of each tissue compartment was computed for the whole ventral lobe based on the weight of the lobe. Using established stereologic laws, the mean height of the epithelium, diameter of the glandular tubules, free distance between the glandular tubules, and distance between the glandular centers were determined. The suitability of the method was tested by analyzing, in addition to normal prostates, the ventral prostates of rats castrated 30 days before sacrifice.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. The method overcomes the performance degradation of the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system was developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
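The Mahalanobis distance test on the filter innovation, used here to flag contact-model error, can be sketched as follows (a minimal sketch; variable names are illustrative):

```python
import numpy as np

def innovation_mahalanobis_sq(innovation, S):
    """Squared Mahalanobis distance of a filter innovation v given its
    covariance S: v^T S^{-1} v. Large values relative to a chi-square
    threshold suggest model mismatch."""
    v = np.asarray(innovation, dtype=float)
    return float(v @ np.linalg.solve(np.asarray(S, dtype=float), v))
```

In a gating test, the squared distance would be compared against a chi-square quantile with dimension equal to the innovation length.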
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers developing novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected for these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Then, we conduct alternating optimization to train the ranking model, which is used for ranking new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Grijs, Richard; Wicker, James E.; Bono, Giuseppe
2014-05-01
The distance to the Large Magellanic Cloud (LMC) represents a key local rung of the extragalactic distance ladder, yet the galaxy's distance modulus has long been an issue of contention, in particular in view of claims that most newly determined distance moduli cluster tightly—and with a small spread—around the 'canonical' distance modulus, (m – M)₀ = 18.50 mag. We compiled 233 separate LMC distance determinations published between 1990 and 2013. Our analysis of the individual distance moduli, as well as of their two-year means and standard deviations resulting from this largest data set of LMC distance moduli available to date, focuses specifically on Cepheid and RR Lyrae variable-star tracer populations, as well as on distance estimates based on features in the observational Hertzsprung-Russell diagram. We conclude that strong publication bias is unlikely to have been the main driver of the majority of published LMC distance moduli. However, for a given distance tracer, the body of publications leading to the tightly clustered distances is based on highly non-independent tracer samples and analysis methods, hence leading to significant correlations among the LMC distances reported in subsequent articles. Based on a careful, weighted combination, in a statistical sense, of the main stellar population tracers, we recommend that a slightly adjusted canonical distance modulus of (m – M)₀ = 18.49 ± 0.09 mag be used for all practical purposes that require a general distance scale without the need for accuracies of better than a few percent.
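A weighted combination of independent distance moduli is commonly done as an inverse-variance weighted mean. This is a sketch of that standard estimator; the paper's statistical weighting across tracer populations may be more elaborate.

```python
def weighted_mean_modulus(moduli, sigmas):
    """Inverse-variance weighted mean of distance moduli and its
    standard error: w_i = 1/sigma_i^2."""
    ws = [1.0 / s ** 2 for s in sigmas]
    mu = sum(w * m for w, m in zip(ws, moduli)) / sum(ws)
    err = (1.0 / sum(ws)) ** 0.5
    return mu, err
```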
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esgin, U.; Özyürek, D.; Kaya, H., E-mail: hasan.kaya@kocaeli.edu.tr
In the present study, the wear behaviors of Monel 400, Monel 404, Monel R-405 and Monel K-500 alloys produced by the Powder Metallurgy (P/M) method were investigated. These compositions, prepared from elemental powders, were cold-pressed (600 MPa), sintered at 1150°C for 2 hours, and cooled to room temperature in the furnace environment. The Monel alloys produced by the P/M method were characterized by scanning electron microscopy (SEM+EDS), X-ray diffraction (XRD), and hardness and density measurements. In the wear tests, a standard pin-on-disk type device was used. Specimens of the four different Monel alloys were tested at a sliding speed of 1 m/s, under three different loads (20 N, 30 N and 40 N) and five different sliding distances (400-2000 m). The results show that the Monel alloys have a γ matrix and that an Al0.9Ni4.22 intermetallic phase was formed in the structure. The highest hardness value was measured for the Monel K-500 alloy. In the wear tests, the maximum weight loss with sliding distance was observed in the Monel 400 and Monel 404 alloys, while the minimum weight loss was achieved by the Monel K-500 alloy.
Code of Federal Regulations, 2010 CFR
2010-04-01
... weight is the weight of all pyrotechnic compositions, and explosive materials and fuse only. 2 The..., the distances must be doubled. 3 While consumer fireworks or articles pyrotechnic in a finished state... pyrotechnic are being processed shall meet these requirements. 4 A maximum of 500 pounds of in-process...
Code of Federal Regulations, 2011 CFR
2011-04-01
... weight is the weight of all pyrotechnic compositions, and explosive materials and fuse only. 2 The..., the distances must be doubled. 3 While consumer fireworks or articles pyrotechnic in a finished state... pyrotechnic are being processed shall meet these requirements. 4 A maximum of 500 pounds of in-process...
An in situ study of the habits of users that affect office chair design and testing.
Benden, Mark E; Fink, Rainer; Congleton, Jerome
2011-02-01
The purpose of this study was to perform an in situ assessment of office seating habits that influence chair testing and design. Many chair testing parameters were derived decades ago, when the average weight of people within the United States was dramatically lower and office work tasks were less computer based. For the study, 51 participants were randomly selected from Brazos Valley, Texas, businesses to participate in 8-hr assessments of office seating habits. Overall results were compared with current chair testing and design assumptions. Data were collected through a written survey and through data logging of seat and back contact pressure and duration with the use of the X-SENSOR pressure mapping device and software. Additionally, 1 day of caster roll distance per participant was recorded with the use of a caster-mounted digital encoder. Participants were grouped by body mass index (BMI) and weight (BMI < 35 and weight < 102 kg, or BMI > 35 and weight > 102 kg). It was determined that a significant difference existed between the groups in mean seat time per shift (p < .001), back cycles per shift (p < .002), seat cycles per shift (p < .01), and caster distance rolled per shift (p < .001). Several key parameters and assumptions of current chair test methods and design specifications may no longer be valid for the upper quartile of the weight range of the current U.S. population. The data collected in this study will enable engineers to determine whether revision of design standards for testing office seating for both normal weight and extremely obese workers is indicated.
Wear Behaviour of Al-6061/SiC Metal Matrix Composites
NASA Astrophysics Data System (ADS)
Mishra, Ashok Kumar; Srivastava, Rajesh Kumar
2017-04-01
Aluminium Al-6061 matrix composites, reinforced with SiC particles of mesh sizes 150 and 600, were fabricated by the stir casting method, and their wear resistance and coefficient of friction were investigated in the present study as functions of applied load and SiC weight fraction, varied over 5, 10, 15, 20, 25, 30, 35 and 40%. The dry sliding wear properties of the composites were investigated using a pin-on-disk testing machine at a sliding velocity of 2 m/s and a sliding distance of 2000 m under loads of 10, 20 and 30 N. The results show that reinforcing the metal matrix with SiC particulates up to a weight percentage of 35% reduces the wear rate. The results also show that the wear of the test specimens increases with increasing load and sliding distance. The coefficient of friction decreases slightly with increasing weight percentage of reinforcement. The worn surfaces were examined by optical microscopy, which shows large grooved regions and cavities with ceramic particles on the worn surface of the composite alloy. This indicates an abrasive wear mechanism, essentially a result of hard ceramic particles exposed on the worn surfaces. Further, it was found that the wear rate decreases linearly with increasing weight fraction of SiC, and the average coefficient of friction decreases linearly with increasing applied load, weight fraction of SiC and mesh size of SiC. The best result was obtained at 35% weight fraction and 600 mesh size of SiC.
Spatial Access to Primary Care Providers in Appalachia
Donohoe, Joseph; Marshall, Vince; Tan, Xi; Camacho, Fabian T.; Anderson, Roger T.; Balkrishnan, Rajesh
2016-01-01
Purpose: The goal of this research was to examine spatial access to primary care physicians in Appalachia using both traditional access measures and the 2-step floating catchment area (2SFCA) method. Spatial access to care was compared between urban and rural regions of Appalachia. Methods: The study region included the Appalachian counties of Pennsylvania, Ohio, Kentucky, and North Carolina. Primary care physicians during 2008 and total census block group populations were geocoded into GIS software. Ratios of county physicians to population, driving time to the nearest primary care physician, and various 2SFCA approaches were compared. Results: Urban areas of the study region had shorter travel times to the closest primary care physician. Provider-to-population ratios produced results that varied widely from one county to another because of strict geographic boundaries. The 2SFCA method produced varied results depending on the distance decay weight and variable catchment size techniques chosen. 2SFCA scores showed greater access to care in urban areas of Pennsylvania, Ohio, and North Carolina. Conclusion: The different parameters of the 2SFCA method—distance decay weights and variable catchment sizes—have a large impact on the resulting spatial access to primary care scores. The findings of this study suggest using a relative 2SFCA approach, the spatial access ratio method, when detailed patient travel data are unavailable. The 2SFCA method shows promise for measuring access to care in Appalachia, but more research on patient travel preferences is needed to inform implementation. PMID:26906524
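The 2SFCA method computes a provider-to-population ratio within each provider's catchment (step 1) and then sums the reachable ratios at each population location (step 2). A minimal sketch, assuming a Gaussian distance-decay weight and a fixed catchment radius, which are exactly the tunable choices the study compares:

```python
import math

def two_step_fca(pop, supply, dist, d0, beta=1.0):
    """2SFCA accessibility scores.

    pop[i]: population at demand location i; supply[j]: providers at site j;
    dist[i][j]: travel cost between i and j; d0: catchment radius.
    The Gaussian decay kernel is an illustrative choice."""
    def w(d):
        return math.exp(-beta * (d / d0) ** 2) if d <= d0 else 0.0

    # Step 1: decay-weighted provider-to-population ratio per provider site.
    R = []
    for j, s in enumerate(supply):
        demand = sum(pop[i] * w(dist[i][j]) for i in range(len(pop)))
        R.append(s / demand if demand > 0 else 0.0)

    # Step 2: sum the reachable ratios at each demand location.
    return [sum(R[j] * w(dist[i][j]) for j in range(len(supply)))
            for i in range(len(pop))]
```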
Modeling an internal gear pump
NASA Astrophysics Data System (ADS)
Chen, Zongbin; Xu, Rongwu; He, Lin; Liao, Jian
2018-05-01
Considering the nature and characteristics of construction waste piles, this paper analyzes the factors affecting the stability of construction waste pile slopes and establishes a system of assessment indexes for slope failure risks. Based on the basic principles and methods of fuzzy mathematics, the factor set and the remark set were established. The membership grades of continuous factor indexes are determined using the "ridge row distribution" function, while those of discrete factor indexes are determined by the Delphi method. For the factor weights, the subjective weights were determined by the Analytic Hierarchy Process (AHP) and the objective weights by the entropy weight method, and a distance function was introduced to determine the combination coefficient. The paper establishes a fuzzy comprehensive assessment model of the slope failure risks of construction waste piles, assessing pile slopes in the two dimensions of hazard and vulnerability; the root mean square of the hazard and vulnerability assessment results is the final assessment result. A construction waste pile slope is then used as an example for analysis: the risks of the four stages of a landfill were assessed, the assessment model was verified, and the slope's failure risks and preventive measures against a slide were analyzed.
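The entropy weight method mentioned for the objective weights can be sketched as follows: normalize each criterion column, compute its information entropy, and weight each criterion by its divergence 1 - e_j. A minimal sketch, assuming positive index values normalized column-wise.

```python
import math

def entropy_weights(X):
    """Objective criterion weights by the entropy weight method.
    X: n alternatives x m criteria, positive values."""
    n = len(X)
    weights = []
    for col in zip(*X):
        total = sum(col)
        p = [v / total for v in col]
        # Information entropy of the column, normalized by log(n).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1 - e)   # higher divergence -> more informative
    s = sum(weights)
    return [w / s for w in weights]
```

A criterion that takes the same value for every alternative carries no information and receives zero weight.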
Lyman, Katie J; Keister, Kassiann; Gange, Kara; Mellinger, Christopher D; Hanson, Thomas A
2017-04-01
Limited quantitative, physiological evidence exists regarding the effectiveness of Kinesio® Taping methods, particularly with respect to their potential ability to affect the underlying physiological joint space and structures. To better understand the impact of these techniques, the underlying physiological processes must be investigated in addition to more subjective measures related to pain in unhealthy tissues. The purpose of this study was to determine whether the Kinesio® Taping Space Correction Method created a significant difference in patellofemoral joint space, as quantified by diagnostic ultrasound. Pre-test/post-test prospective cohort study. Thirty-two participants with bilaterally healthy knees and no past history of surgery took part in the study. For each participant, diagnostic ultrasound was utilized to collect three measurements: the patellofemoral joint space, the distance from the skin to the superficial patella, and the distance from the skin to the patellar tendon. The Kinesio® Taping Space Correction Method was then applied. After a ten-minute waiting period in a non-weight bearing position, all three measurements were repeated. Each participant served as his or her own control. Paired t tests showed a statistically significant difference (mean difference = 1.1 mm, t(31) = 2.823, p = 0.008, g = .465) between baseline and taped conditions in the space between the posterior surface of the patella and the medial femoral condyle. Neither the distance from the skin to the superficial patella nor the distance from the skin to the patellar tendon increased to a statistically significant degree. The application of the Kinesio® Taping Space Correction Method increases the patellofemoral joint space in healthy adults by increasing the distance between the patella and the medial femoral condyle, though it does not increase the distance from the skin to the superficial patella nor to the patellar tendon. Level of evidence: 3.
Adaptive density trajectory cluster based on time and space distance
NASA Astrophysics Data System (ADS)
Liu, Fagui; Zhang, Zhijie
2017-10-01
Several open problems remain in trajectory clustering for discovering mobility regularities, such as the computation of distance between sub-trajectories, the setting of parameter values in the clustering algorithm, and the uncertainty/boundary problem of the data set. Based on time and space, this paper defines a calculation method for the distance between sub-trajectories. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution; cluster centers and the number of clusters are selected automatically by a certain strategy, and the uncertainty/boundary problem of the data set is solved by a designed weighted rough c-means. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of the time and space distance, and adaptively obtains optimal cluster centers and rich cluster information for mining the features of mobile behavior in mobile and social networks.
Dictionary learning based noisy image super-resolution via distance penalty weight model
Han, Yulan; Zhao, Yongping; Wang, Qisong
2017-01-01
In this study, we address the problem of noisy image super-resolution. A noisy low resolution (LR) image is often what is obtained in applications, while most existing algorithms assume that the LR image is noise-free. To address this situation, we present an algorithm for noisy image super-resolution which achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and for different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed as a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of the learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which also performs a second selection on similar atoms. Moreover, LR example patches with the mean pixel value removed are used to learn the dictionary, rather than just their gradient features. Based on this, we reconstruct an initial estimated HR image and a denoised LR image; combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method exhibits better noise robustness. PMID:28759633
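An exponential distance-penalty weighting that also performs a second selection of the nearest atoms can be sketched as below. This is an illustrative form only; the paper's exact penalty model is not specified in the abstract.

```python
import math

def distance_penalty_weights(dists, h=1.0, keep=None):
    """Normalized weights over candidate atoms/patches from their distances
    to the input patch: closer candidates get larger weights. If `keep` is
    given, only the `keep` nearest candidates are retained (a second
    selection), and weights are renormalized over them."""
    idx = sorted(range(len(dists)), key=lambda i: dists[i])
    if keep is not None:
        idx = idx[:keep]
    w = {i: math.exp(-dists[i] / h) for i in idx}
    z = sum(w.values())
    return {i: wi / z for i, wi in w.items()}
```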
ERIC Educational Resources Information Center
Swan, Malcolm D.; Jones, Orville E.
It is essential in communicative situations for teachers and students to have comparable percepts. A paucity of information is available on the percepts held by children regarding quantities and intervals of distance, height, weight, time, temperature and volume or on improvement (if any) that occurs as children mature. Teachers cannot be…
Martarelli, Corinna S; Borter, Natalie; Bryjova, Jana; Mast, Fred W; Munsch, Simone
2015-11-30
Relatively little is known about the influence of psychosocial factors, such as familial role modeling and social networks, on the development and maintenance of childhood obesity. We investigated peer selection using an immersive virtual reality environment. In a virtual schoolyard, children were confronted with normal weight and overweight avatars either eating or playing. Fifty-seven children aged 7-13 participated. Interpersonal distance to the avatars, the child's BMI, self-perception, eating behavior and parental BMI were assessed. Parental BMI was the strongest predictor of the children's minimal distance to the avatars. Specifically, a higher mother's BMI was associated with greater interpersonal distance, and these children approached closer to overweight eating avatars. A higher father's BMI was associated with a lower interpersonal distance to the avatars; these children approached normal weight playing and overweight eating avatar peers closest. The importance of parental BMI for the child's social approach/avoidance behavior can be explained through social modeling mechanisms. Differential effects of paternal and maternal BMI might be due to gender-specific beauty ideals. Interventions promoting social interaction with peer groups could foster weight stabilization or weight loss in children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Jian, Junming; Xiong, Fei; Xia, Wei; Zhang, Rui; Gu, Jinhui; Wu, Xiaodong; Meng, Xiaochun; Gao, Xin
2018-06-01
Segmentation of colorectal tumors is the basis of preoperative prediction, staging, and therapeutic response evaluation. Due to the blurred boundary between lesions and normal colorectal tissue, accurate segmentation is hard to achieve. Routine manual or semi-manual segmentation methods are extremely tedious, time-consuming, and highly operator-dependent. A segmentation method for colorectal tumors is presented in the framework of fully convolutional networks (FCNs). Normalization was applied to reduce the differences among images. Borrowing from transfer learning, VGG-16 was employed to extract features from the normalized images. We attached five side-output blocks to the last convolutional layer of each block of VGG-16; these side-output blocks mine multiscale features and produce corresponding predictions. Finally, all the predictions from the side-output blocks are fused to determine the final boundaries of the tumors. A quantitative comparison of 2772 colorectal tumor manual segmentation results from T2-weighted magnetic resonance images shows that the average Dice similarity coefficient, positive predictive value, specificity, sensitivity, Hammoude distance, and Hausdorff distance were 83.56%, 82.67%, 96.75%, 87.85%, 0.2694, and 8.20, respectively. The proposed method is superior to U-net in colorectal tumor segmentation (P < 0.05), and there is no difference between cross-entropy loss and Dice-based loss in colorectal tumor segmentation (P > 0.05). The results indicate that the introduction of FCNs contributes to the accurate segmentation of colorectal tumors. This method has the potential to replace the present time-consuming and nonreproducible manual segmentation method.
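The Dice similarity coefficient used for evaluation is a standard overlap measure between the predicted and manual masks; for flat 0/1 sequences it can be computed as:

```python
def dice(pred, truth):
    """Dice similarity coefficient 2|A∩B| / (|A|+|B|) between two binary
    masks given as flat sequences of 0/1."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```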
Increased vertebral bone mineral in response to reduced exercise in amenorrheic runners.
Lindberg, J S; Powell, M R; Hunt, M M; Ducey, D E; Wade, C E
1987-01-01
Seven female runners found to have exercise-induced amenorrhea and decreased bone mineral were reevaluated after 15 months. During the 15-month period, four runners took supplemental calcium and reduced their weekly running distance by 43%, resulting in an average 5% increase in body weight, increased estradiol levels and eumenorrhea. Bone mineral content increased from 1.003±0.097 to 1.070±0.089 g/cm². Three runners continued to have amenorrhea, with no change in running distance or body weight. Estradiol levels remained abnormally low and there was no significant change in the bone mineral content, although all three took supplemental calcium. We found that early osteopenia associated with exercise-induced menstrual dysfunction improved when runners reduced their running distance, gained weight and became eumenorrheic.
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
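A distance-weighted exponential warping function of this kind can be sketched as follows. The exact functional form, normalization, and all numbers here are assumptions for illustration; the paper's function differs in detail and its weighting factors are tuned by an evolution strategy.

```python
import numpy as np

def warp_displacement(x, landmarks, displacements, weights):
    """Distance-weighted sum of landmark displacement vectors.
    Each landmark i contributes its displacement scaled by
    exp(-w_i * ||x - l_i||), normalized over all landmarks."""
    dists = np.linalg.norm(landmarks - x, axis=1)   # (n,)
    k = np.exp(-weights * dists)                    # landmark-specific falloff
    k = k / k.sum()                                 # normalize contributions
    return (k[:, None] * displacements).sum(axis=0)

landmarks = np.array([[0.0, 0.0], [1.0, 0.0]])
displacements = np.array([[0.5, 0.0], [0.0, 0.5]])
weights = np.array([1.0, 1.0])                      # per-landmark factors

# At the first landmark, its own displacement dominates the blend
d = warp_displacement(np.array([0.0, 0.0]), landmarks, displacements, weights)
```

Optimizing the per-landmark `weights` changes how far each landmark's influence reaches, which is what the evolution strategy in the paper adjusts.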
WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.
Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X
2011-03-30
We present a new registration method for whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired on separate scanners at different times, and the inherent differences in the imaging protocols produce significant nonrigid changes between the two acquisitions, in addition to heterogeneous image characteristics. In this situation, we utilized both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results with nine rat image sets using the M-Hausdorff distance similarity measure. We demonstrate improved performance compared to standard methods such as demons and normalized mutual information-based nonrigid FFD registration.
Chen, Hui; Fan, Li; Wu, Wei; Liu, Hong-Bin
2017-09-26
Soil moisture data can reflect valuable information on soil properties, terrain features, and drought condition. The current study compared and assessed the performance of different interpolation methods for estimating soil moisture in an area with complex topography in southwest China. The approaches were inverse distance weighting, multifarious forms of kriging, regularized spline with tension, and thin plate spline. The 5-day soil moisture observed at 167 stations and daily temperature recorded at 33 stations during the period 2010-2014 were used in the current work. Model performance was tested with the accuracy indicators of coefficient of determination (R²), mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and modeling efficiency (ME). The results indicated that inverse distance weighting had the best performance, with R², MAPE, RMSE, RRMSE, and ME of 0.32, 14.37, 13.02%, 0.16, and 0.30, respectively. Based on the best method, a spatial database of soil moisture was developed and used to investigate drought condition over the study area. The results showed that the distribution of drought was characterized by evident regional differences. Drought mainly occurred in August and September in the 5 years and was more prone to occur in the western and central parts than in the northeastern and southeastern areas.
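The inverse distance weighting estimator at the heart of this comparison can be sketched as below; the station coordinates, moisture values, and power parameter are invented for illustration.

```python
import numpy as np

def idw(query, pts, vals, power=2, eps=1e-12):
    """Inverse distance weighting: estimate at `query` as a
    weighted mean of `vals`, with weights 1 / distance**power."""
    d = np.linalg.norm(pts - query, axis=1)
    if np.any(d < eps):              # query coincides with a station
        return vals[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * vals) / np.sum(w)

# Four toy stations on a unit square with soil-moisture-like values
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 20.0, 20.0, 10.0])

est = idw(np.array([0.5, 0.5]), pts, vals)  # equidistant, so a plain mean
```

Larger `power` values localize the estimate more strongly around nearby stations, which is the main tuning knob in IDW comparisons like this one.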
Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods
NASA Astrophysics Data System (ADS)
Pervez, M.; Henebry, G. M.
2010-12-01
In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during a seven-year period (2000-2008). Two univariate methods (inverse distance weighting and spline, in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between the rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the spline. For the days with extreme or high rainfall, spatially and quantitatively, the correlation between observed and interpolated estimates appeared to be high (r² ≈ 0.6, RMSE ≈ 10 mm), although for low rainfall days the correlations were poor (r² ≈ 0.1, RMSE ≈ 3 mm). The performance quality of these methods was influenced by the density of the sample point measurements, the quantity of the observed rainfall along with its spatial extent, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties in the successive hydrometeorological analysis. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
Performance on the Balance Scale by Two-Year Old Children.
ERIC Educational Resources Information Center
Halford, Graeme S.; Dalton, Cherie
Twenty-two children ranging in age from 2 to 3 years were tested on their abilities to apply weight and distance rules to the balance scale. This study was performed to test the prediction that 2-year-olds would be able to understand either a weight rule or a distance rule, but not be able to integrate the two. The sample group was instructed in…
ERIC Educational Resources Information Center
Gutiérrez-Zornoza, Myriam; Sánchez-López, Mairena; García-Hermoso, Antonio; González-García, Alberto; Chillón, Palma; Martínez-Vizcaíno, Vicente
2015-01-01
Purpose: The aim of this study was to examine (a) whether distance from home to school is a determinant of active commuting to school (ACS), (b) the relationship between distance from home to heavily used facilities (school, green spaces, and sports facilities) and the weight status and cardiometabolic risk categories, and (c) whether ACS has a…
Spectral-clustering approach to Lagrangian vortex detection.
Hadjighasem, Alireza; Karrasch, Daniel; Teramoto, Hiroshi; Haller, George
2016-06-01
One of the ubiquitous features of real-life turbulent flows is the existence and persistence of coherent vortices. Here we show that such coherent vortices can be extracted as clusters of Lagrangian trajectories. We carry out the clustering on a weighted graph, with the weights measuring pairwise distances of fluid trajectories in the extended phase space of positions and time. We then extract coherent vortices from the graph using tools from spectral graph theory. Our method locates all coherent vortices in the flow simultaneously, thereby showing high potential for automated vortex tracking. We illustrate the performance of this technique by identifying coherent Lagrangian vortices in several two- and three-dimensional flows.
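A minimal sketch of the idea, under assumptions of my own: similarity weights are built from time-averaged pairwise trajectory distances, and the graph is bipartitioned by the sign of the Fiedler vector of the unnormalized Laplacian. The paper's method uses more elaborate spectral tools, and all data here are synthetic.

```python
import numpy as np

def spectral_bipartition(traj):
    """Split trajectories into two clusters via the sign of the Fiedler
    vector; edge weights decay with time-averaged pairwise distance."""
    n = traj.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # time-averaged distance between trajectories i and j
            D[i, j] = np.mean(np.linalg.norm(traj[i] - traj[j], axis=-1))
    W = np.exp(-D / (D.mean() + 1e-12))  # similarity weights
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                 # 2nd-smallest eigenvalue's vector
    return fiedler >= 0

# Two well-separated bundles of trajectories, shape (n_traj, n_time, dim)
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, (5, 10, 2))   # "vortex" near the origin
b = rng.normal(5.0, 0.1, (5, 10, 2))   # "vortex" far away
labels = spectral_bipartition(np.concatenate([a, b]))
```

For more than two coherent structures, one would cluster several leading eigenvectors (e.g. with k-means) instead of thresholding the Fiedler vector.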
Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.
Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos
2008-03-01
Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (ie, T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Bakhshandeh Amnieh, Hassan; Bahadori, Moein
2012-12-01
Ground vibration, air vibration, fly rock, undesirable displacement and fragmentation are some inevitable side effects of blasting operations that can cause serious damage to the surrounding environment. Peak Particle Velocity (PPV) is the main criterion in the assessment of the amount of damage caused by ground vibration. There are different standards for the determination of the safe level of the PPV. To calculate the permissible amount of explosive needed to control the damage to the underground structures of the Gotvand Olya dam, sixteen 3-component records (48 traces in total) generated from 4 blasts were used. These operations were recorded in 3 directions (radial, transverse and vertical) by four PG-2002 seismographs with GS-11D 3-component seismometers, and the records were analyzed with the help of the DADISP software. To predict the PPV, the scaled distance and the Simulated Annealing (SA) hybrid methods were used. Using the scaled distance resulted in a relation for the prediction of the PPV; the precision of the relation was then increased to 0.94 with the help of the SA hybrid method. Relying on the high correlation of this relation and considering a minimum distance of 56.2 m to the center of the blast site and a permissible PPV of 178 mm/s (for 2-day-old concrete), the maximum charge weight per delay came out to be 212 kg.
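A scaled-distance attenuation law of the usual form PPV = k (D/√Q)^(-b) can be inverted for the maximum charge weight per delay. The site constants k and b below are assumptions chosen only so the toy numbers land near the figures quoted in the abstract; the study's fitted relation is not reproduced here.

```python
import math

# Illustrative site constants (NOT the study's fitted values)
k, b = 1545.0, 1.6

def ppv(distance_m, charge_kg):
    """Scaled-distance attenuation law: PPV = k * (D / sqrt(Q))**-b."""
    return k * (distance_m / math.sqrt(charge_kg)) ** (-b)

def max_charge(distance_m, ppv_limit):
    """Invert the law for the maximum charge weight per delay Q."""
    sd = (ppv_limit / k) ** (-1.0 / b)   # permissible scaled distance
    return (distance_m / sd) ** 2

# 56.2 m to the blast center, 178 mm/s permissible PPV
q = max_charge(56.2, 178.0)
```

With these constants the permissible charge comes out near the reported 212 kg, and substituting it back reproduces the PPV limit.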
NASA Astrophysics Data System (ADS)
Abedi Gheshlaghi, Hassan; Feizizadeh, Bakhtiar
2017-09-01
Landslides in mountainous areas render major damage to residential areas, roads, and farmlands. Hence, one of the basic measures to reduce the possible damage is identifying landslide-prone areas through landslide mapping with different models and methods. The purpose of this study is to evaluate the efficacy of a combination of two models, the analytical network process (ANP) and fuzzy logic, in landslide risk mapping in the Azarshahr Chay basin in northwest Iran. After field investigations and a review of the research literature, factors affecting the occurrence of landslides, including slope, slope aspect, altitude, lithology, land use, vegetation density, rainfall, distance to faults, distance to roads, and distance to rivers, along with a map of the distribution of past landslides, were prepared in a GIS environment. Then, fuzzy logic was used for weighting the sub-criteria, and the ANP was applied to weight the criteria. Next, they were integrated based on GIS spatial analysis methods and the landslide risk map was produced. Evaluating the results of this study using receiver operating characteristic curves shows that the hybrid model, with an area under the curve of 0.815, has good accuracy. Also, according to the prepared map, a total of 23.22% of the area, amounting to 105.38 km², is in the high and very high risk classes. The results of this research are of great importance for regional planning tasks, and the landslide prediction map can be used for spatial planning and for the mitigation of future hazards in the study area.
Baek, Jonggyu; Sanchez-Vaznaugh, Emma V; Sánchez, Brisa N
2016-03-15
It is well known that associations between features of the built environment and health depend on the geographic scale used to construct environmental attributes. In the built environment literature, it has long been argued that geographic scales may vary across study locations. However, this hypothesized variation has not been systematically examined due to a lack of available statistical methods. We propose a hierarchical distributed-lag model (HDLM) for estimating the underlying overall shape of food environment-health associations as a function of distance from locations of interest. This method enables indirect assessment of relevant geographic scales and captures area-level heterogeneity in the magnitudes of associations, along with relevant distances within areas. The proposed model was used to systematically examine area-level variation in the association between availability of convenience stores around schools and children's weights. For this case study, body mass index (weight (kg)/height (m)²) z scores (BMIz) for 7th grade children collected via California's 2001-2009 FitnessGram testing program were linked to a commercial database that contained locations of food outlets statewide. Findings suggested that convenience store availability may influence BMIz only in some places and at varying distances from schools. Future research should examine localized environmental or policy differences that may explain the heterogeneity in convenience store-BMIz associations. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models
2016-01-01
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance are tested against several multisensory models, including a modified causal inference model. In this causal inference model predictions of estimate distributions are included. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance and mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes by more than 80% to the perception of visual distance. The visual stimulus also contributes by more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
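The "mandatory integration" baseline that causal inference is tested against is reliability-weighted cue fusion. A minimal sketch, with cue values and noise levels that are illustrative rather than from the study:

```python
def fuse(x_vis, sigma_vis, x_aud, sigma_aud):
    """Maximum-likelihood fusion: each cue is weighted by its
    reliability (inverse variance)."""
    r_vis, r_aud = 1.0 / sigma_vis**2, 1.0 / sigma_aud**2
    w_vis = r_vis / (r_vis + r_aud)
    return w_vis * x_vis + (1.0 - w_vis) * x_aud, w_vis

# A sharp visual cue at 2 m and a noisier auditory cue at 3 m
est, w_vis = fuse(2.0, 0.2, 3.0, 0.4)
```

Halving the visual noise relative to the auditory noise already gives vision an 80% weight, consistent in spirit with the dominance of the visual cue reported above; the full causal inference model additionally weighs the probability that the two cues share a common cause.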
Comparing Phylogenetic Trees by Matching Nodes Using the Transfer Distance Between Partitions
Bogdanowicz, Damian; Giaro, Krzysztof
2017-01-01
Abstract Ability to quantify dissimilarity of different phylogenetic trees describing the relationship between the same group of taxa is required in various types of phylogenetic studies. For example, such metrics are used to assess the quality of phylogeny construction methods, to define optimization criteria in supertree building algorithms, or to find horizontal gene transfer (HGT) events. Among the set of metrics described so far in the literature, the most commonly used seems to be the Robinson–Foulds distance. In this article, we define a new metric for rooted trees—the Matching Pair (MP) distance. The MP metric uses the concept of the minimum-weight perfect matching in a complete bipartite graph constructed from partitions of all pairs of leaves of the compared phylogenetic trees. We analyze the properties of the MP metric and present computational experiments showing its potential applicability in tasks related to finding the HGT events. PMID:28177699
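The core of the MP distance is a minimum-weight perfect matching in a complete bipartite graph, which the Hungarian algorithm solves. The cost matrix below is invented for illustration, and SciPy's `linear_sum_assignment` stands in for whatever solver the authors used.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i][j] = transfer distance between
# leaf partition i of tree T1 and partition j of tree T2 (made-up values)
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])
rows, cols = linear_sum_assignment(cost)  # minimum-weight perfect matching
mp_like_distance = cost[rows, cols].sum()
```

Summing the matched costs yields the tree-to-tree distance; identical trees give an all-zero diagonal and hence distance 0.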
Analyzing Impact Area of Osym Offices in Istanbul by Idw Method
NASA Astrophysics Data System (ADS)
Kalkan, Y.; Ozturk, O.; Gülnerman, A. G.; Bilgi, S.
2016-12-01
OSYM is the main institute organizing the national-level large-scale exams in Turkey. According to the Ministry of National Education of Turkey, there are 17,588,958 students in the country. Therefore, OSYM has a significant role for everyone at every level of education. More than 15% of the total students are studying in Istanbul. These students take various exams throughout a year, each of which brings procedures to be applied. OSYM Coordination Offices were founded to meet the demands and procedures of these exams and applicants. There are 9 Coordination Offices in Istanbul. Moreover, OSYM Application Centers were founded as support units for the OSYM Coordination Offices. These units are located in high schools. There are 67 OSYM Application Centers in Istanbul. In this study, the spatial distribution of OSYM Coordination Offices and OSYM Application Centers in Istanbul was studied in relation to the transportation network of each district of the city. The Origin Destination Cost Matrix (ODCM) and Inverse Distance Weighting (IDW) methods were used to visualize the distribution of OSYM Coordination Office and Application Center accessibility. The ODCM measures the nearest paths along the transportation network from origins to destinations. IDW is one of several interpolation methods that allocate values to unknown points. The ODCM method was used to calculate the distances over the transportation network, and its results were used in the IDW method to interpolate the weightings of the OSYM offices and centers. The accessibility of the OSYM Coordination Offices and Application Centers was assessed according to the surrounding transportation network, and the spatial distribution of existing offices and application centers was evaluated by district of Istanbul using the ODCM and IDW methods.
Büttner, Kathrin; Krieter, Joachim
2018-08-01
The analysis of trade networks, as well as of the spread of diseases within these systems, focuses mainly on pure animal movements between farms. However, additional data included as edge weights can complement the informational content of the network analysis, although the inclusion of edge weights can also alter its outcome. Thus, the aim of the study was to compare unweighted and weighted network analyses of a pork supply chain in Northern Germany and to evaluate the impact on the centrality parameters. Five different weighted network versions were constructed by adding the following edge weights: number of trade contacts, number of delivered livestock, average number of delivered livestock per trade contact, geographical distance and reciprocal geographical distance. Additionally, two different edge weight standardizations were used. The network observed from 2013 to 2014 contained 678 farms connected by 1,018 edges. General network characteristics, including the shortest path structure (e.g. identical shortest paths, shortest path lengths), as well as centrality parameters for each network version were calculated. Furthermore, the targeted and the random removal of farms were performed in order to evaluate the structural changes in the networks. All network versions and edge weight standardizations revealed the same number of shortest paths (1,935), of which between 94.4 and 98.9% were identical between the unweighted network and the weighted network versions. Furthermore, depending on the calculated centrality parameters and the edge weight standardization used, it could be shown that the weighted network versions differed from the unweighted network (e.g. for the centrality parameters based on ingoing trade contacts) or did not differ (e.g. for the centrality parameters based on the outgoing trade contacts) with regard to the Spearman rank correlation and the targeted removal of farms.
The choice of standardization method as well as the inclusion or exclusion of specific farm types (e.g. abattoirs) can alter the results significantly. These facts have to be considered when centrality parameters are to be used for the implementation of prevention and control strategies in the case of an epidemic. Copyright © 2018 Elsevier B.V. All rights reserved.
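The difference between unweighted and weighted analyses is already visible in the simplest centrality, degree. A self-contained sketch with a made-up four-farm trade network:

```python
# Toy directed trade network: (source, target, number of delivered livestock)
edges = [
    ("farm_a", "farm_b", 100),
    ("farm_a", "farm_c", 1),
    ("farm_b", "farm_d", 50),
    ("farm_c", "farm_d", 1),
]

def out_degree(edges, weighted=False):
    """Out-degree per farm: count of trade contacts, or, if weighted,
    the total number of delivered livestock."""
    deg = {}
    for src, _dst, w in edges:
        deg[src] = deg.get(src, 0) + (w if weighted else 1)
    return deg

unweighted = out_degree(edges)               # number of outgoing contacts
weighted = out_degree(edges, weighted=True)  # total livestock shipped
```

Both measures rank farm_a first here, but the weighted version separates a high-volume shipper from one with many trivial contacts; for shortest-path-based centralities, volume-like weights are typically inverted first (compare the reciprocal geographical distance version above), since path algorithms treat weights as costs.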
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, the solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warnings to be issued, and processing continues with the current data set.
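The formation and solution of the weighted normal equations follows the standard weighted least squares pattern x = (AᵀWA)⁻¹AᵀWb. The design matrix, observations, and weights below are a toy example, not survey data:

```python
import numpy as np

# Linearized observation equations A x = b with per-observation weights
# (directions, azimuths, and distances would each carry their own weight)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.2])
W = np.diag([1.0, 1.0, 4.0])      # the third observation is trusted most

# Normal equations of the weighted least squares adjustment
N = A.T @ W @ A                    # normal matrix (A^T W A)
t = A.T @ W @ b                    # right-hand side (A^T W b)
shifts = np.linalg.solve(N, t)     # coordinate shifts at adjustable stations
```

The heavily weighted third observation pulls the solution toward satisfying x₁ + x₂ = 3.2, illustrating how observation weights steer the adjustment.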
Measurement and modification of forces between lecithin bilayers.
LeNeveu, D M; Rand, R P
1977-01-01
We probe in two different ways the competing attractive and repulsive forces that create lamellar arrays of the phospholipid lecithin when in equilibrium with pure water. The first probe involves the addition of low molecular weight solutes, glucose and sucrose, to a system where the phospholipid is immersed in a large excess of water. Small solutes can enter the aqueous region between bilayers. Their effect is first to increase and then to decrease the separation between bilayers as sugar concentration increases. We interpret this waxing and waning of the lattice spacing in terms of the successive weakening and strengthening of the attractive van der Waals forces originally responsible for creation of a stable lattice. The second probe is an "osmotic stress method," in which very high molecular weight neutral polymer is added to the pure water phase but is unable to enter the multilayers. The polymer competes for water with the lamellar lattice, and thereby compresses it. From the resulting spacing (determined by X-ray diffraction) and the directly measured osmotic pressure, we find a force vs. distance curve for compressing the lattice (or, equivalently, the free energy of transferring the water between bilayers to bulk water). This method reveals a very strong, exponentially varying "hydration force" with a decay distance of about 2 Å. PMID:861359
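An exponentially varying force of this kind has the form P(d) = P₀ exp(-d/λ), and the decay distance λ falls straight out of two points on the measured force-distance curve. The pressure scale and sample spacings below are invented; only the ~2 Å decay distance comes from the abstract.

```python
import math

P0, lam = 1e9, 2.0   # pressure scale (arbitrary units), decay length in Å

def pressure(d):
    """Exponential hydration-force law: P(d) = P0 * exp(-d / lam)."""
    return P0 * math.exp(-d / lam)

# Recover the decay distance from two (spacing, pressure) pairs,
# as one would from the osmotic-stress force curve
d1, d2 = 10.0, 14.0
lam_fit = (d2 - d1) / math.log(pressure(d1) / pressure(d2))
```

On a semi-log plot of pressure against spacing, λ is simply the inverse of the slope, which is how such decay distances are usually read off.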
Instability risk assessment of construction waste pile slope based on fuzzy entropy
NASA Astrophysics Data System (ADS)
Ma, Yong; Xing, Huige; Yang, Mao; Nie, Tingting
2018-05-01
Considering the nature and characteristics of construction waste piles, this paper analyzes the factors affecting the stability of construction waste pile slopes and establishes a system of assessment indexes for their failure risks. Based on the basic principles and methods of fuzzy mathematics, the factor set and the remark set were established. The membership grade of continuous factor indexes was determined using the "ridge row distribution" function, while that of discrete factor indexes was determined by the Delphi method. For the factor weights, the subjective weight was determined by the Analytic Hierarchy Process (AHP) and the objective weight by the entropy weight method, and a distance function was introduced to determine the combination coefficient. This paper establishes a fuzzy comprehensive assessment model of the slope failure risks of construction waste piles and assesses pile slopes in the two dimensions of hazard and vulnerability, taking the root mean square of the hazard and vulnerability assessment results as the final assessment result. A construction waste pile slope was then used as a case study: the risks of the four stages of a landfill were assessed, the assessment model was verified, and the slope's failure risks and preventive measures against a slide were analyzed.
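The entropy weight method used for the objective weights can be sketched as follows; the sample-by-index matrix is invented for illustration, and the paper's index values are not reproduced.

```python
import numpy as np

def entropy_weights(X):
    """Objective index weights via the entropy weight method.
    Rows are samples, columns are assessment indexes (all positive)."""
    n = X.shape[0]
    P = X / X.sum(axis=0)                 # column-normalized proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)  # 0*ln(0) := 0
    e = -plogp.sum(axis=0) / np.log(n)    # entropy of each index
    d = 1.0 - e                           # degree of divergence
    return d / d.sum()                    # weights sum to 1

# Three samples scored on three indexes; the 2nd index is identical
# across samples and so carries no discriminating information
X = np.array([[0.9, 0.5, 0.2],
              [0.1, 0.5, 0.3],
              [0.5, 0.5, 0.4]])
w = entropy_weights(X)
```

An index that does not vary across samples gets zero weight, which is precisely why the entropy method is paired with the subjective AHP weights in the combined scheme described above.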
14 CFR 420.70 - Separation distance measurement requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Separation distance measurement requirements. 420.70 Section 420.70 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... energetic liquids or net explosive weight that requires the greater distance. [Docket No. FAA-2011-0105, 77...
14 CFR 420.70 - Separation distance measurement requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Separation distance measurement requirements. 420.70 Section 420.70 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... energetic liquids or net explosive weight that requires the greater distance. [Docket No. FAA-2011-0105, 77...
Lin, Tao; Sun, Huijun; Chen, Zhong; You, Rongyi; Zhong, Jianhui
2007-12-01
Diffusion weighting in MRI is commonly achieved with the pulsed-gradient spin-echo (PGSE) method. When combined with spin-warping image formation, this method often results in ghosts due to the sample's macroscopic motion. It has been shown experimentally (Kennedy and Zhong, MRM 2004;52:1-6) that these motion artifacts can be effectively eliminated by the distant dipolar field (DDF) method, which relies on the refocusing of spatially modulated transverse magnetization by the DDF within the sample itself. In this report, diffusion-weighted images (DWIs) using both DDF and PGSE methods in the presence of macroscopic sample motion were simulated. Numerical simulation results quantify the dependence of signals in DWI on several key motion parameters and demonstrate that the DDF DWIs are much less sensitive to macroscopic sample motion than the traditional PGSE DWIs. The results also show that the dipolar correlation distance (d_c) can alter contrast in DDF DWIs. The simulated results are in good agreement with the experimental results reported previously.
NASA Astrophysics Data System (ADS)
Khaidir Noor, Muhammad
2018-03-01
Reserve estimation is one of the important tasks in evaluating a mining project: the estimation of the quality and quantity of minerals that have economic value. The reserve calculation method plays an important role in determining the efficiency of commercial exploration of a deposit. This study was intended to calculate the ore reserves contained in the study area, especially Pit Block 3A. The nickel ore reserve was estimated from detailed exploration data, processed in Surpac 6.2 with the inverse distance weighting (squared power) estimation method. The ore estimate obtained from 30 drill holes was 76,453.5 tons of saprolite, with a density of 1.5 ton/m³ and a cut-off grade (COG) of Ni ≥ 1.6%, while the overburden was 112,570.8 tons, with a waste rock density of 1.2 ton/m³. The stripping ratio (SR) was 1.47:1, smaller than the permissible SR of 1.60:1.
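The quoted stripping ratio follows directly from the reported tonnages, as a quick arithmetic cross-check shows (variable names are mine):

```python
# Stripping ratio = waste tonnage / ore tonnage, from the figures above
ore_tons = 76453.5       # saprolite at COG Ni >= 1.6 %
waste_tons = 112570.8    # overburden
sr = waste_tons / ore_tons   # ≈ 1.47, matching the reported SR of 1.47:1
```

Since this is below the permissible 1.60:1, the pit passes the economic stripping criterion.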
Patent data mining method and apparatus
Boyack, Kevin W.; Grafe, V. Gerald; Johnson, David K.; Wylie, Brian N.
2002-01-01
A method of data mining represents related patents in a multidimensional space. Distance between patents in the multidimensional space corresponds to the extent of relationship between the patents. The relationship between pairings of patents can be expressed based on weighted combinations of several predicates. The user can select portions of the space to perceive. The user also can interact with and control the communication of the space, focusing attention on aspects of the space of most interest. The multidimensional spatial representation allows more ready comprehension of the structure of the relationships among the patents.
Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A
2013-02-01
Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. 
Optimal parameter values for the flexibility and bladder weight parameters were determined for the weighted S-TPS-RPM. The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.
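At the core of the S-TPS-RPM family is thin-plate-spline interpolation between matched point sets. A minimal sketch of a TPS warp using SciPy's RBFInterpolator; the landmark coordinates are invented for illustration, and this shows only the spline step, not the weighted S-TPS-RPM algorithm itself:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks (hypothetical): contour points on the planning CT
# and their matched positions on a repeat CT.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.05, 0.0], [0.0, 0.05], [0.02, 0.0],
                      [0.0, 0.02], [0.03, 0.03]])

# One TPS interpolant per output coordinate; smoothing=0 interpolates exactly.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline', smoothing=0.0)

moved = warp(src)                               # warped source points
resid = np.linalg.norm(moved - dst, axis=1)     # residual distance errors
```

Increasing `smoothing` relaxes the exact-interpolation constraint, which is one simple way to make a structure behave more rigidly under the warp.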
Parsimonious description for predicting high-dimensional dynamics
Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki
2015-01-01
When we observe a system, we often cannot observe all its variables and may have only a limited set of measurements. Under such circumstances, delay coordinates, vectors made of successive measurements, are useful for reconstructing the state of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, in practice there is a limitation because the calculation for higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster, because we can reuse the calculated distances, and more accurately, because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy. PMID:26510518
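The key trick, reusing exponentially weighted distances when the delay window slides forward one step, can be sketched as follows (the decay weight and recursion are the idea from the abstract; the data are toy values):

```python
import numpy as np

def weighted_delay_distance(x, i, j, lam=0.9):
    """Squared distance between the (virtually infinite) delay vectors ending
    at times i and j, with weight lam**k on the k-th past measurement."""
    d = 0.0
    for k in range(min(i, j) + 1):      # truncate at the start of the record
        d += lam**k * (x[i - k] - x[j - k])**2
    return d

rng = np.random.default_rng(0)
x = rng.standard_normal(50)

# Sliding the window forward needs only O(1) work, because
# D(i, j) = (x[i] - x[j])**2 + lam * D(i-1, j-1):
d_prev = weighted_delay_distance(x, 10, 20)
d_next = weighted_delay_distance(x, 11, 21)
recursive = (x[11] - x[21])**2 + 0.9 * d_prev   # equals d_next
```

This O(1) update is what makes prediction with virtually infinite-dimensional delay coordinates affordable.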
Geographically weighted regression based methods for merging satellite and gauge precipitation
NASA Astrophysics Data System (ADS)
Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo
2018-03-01
Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
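A bi-square weighting function of the kind used in step (2) can be sketched as follows (the bandwidth and distances are illustrative values, not the study's):

```python
import numpy as np

def bisquare_weights(d, bandwidth):
    """Bi-square kernel: weight falls smoothly to zero at the bandwidth,
    and observations beyond it get zero weight."""
    d = np.asarray(d, dtype=float)
    return np.where(d < bandwidth, (1.0 - (d / bandwidth)**2)**2, 0.0)

# Distances (km, illustrative) from the regression point to nearby gauges
d = np.array([0.0, 5.0, 10.0, 20.0])
w = bisquare_weights(d, bandwidth=10.0)
# w -> [1.0, 0.5625, 0.0, 0.0]
```

In (M)GWR these weights multiply each gauge's contribution to the local regression; the bandwidth is normally chosen by cross-validation.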
Apparent migration of implantable port devices: normal variations in consideration of BMI.
Wyschkon, Sebastian; Löschmann, Jan-Phillip; Scheurig-Münkler, Christian; Nagel, Sebastian; Hamm, Bernd; Elgeti, Thomas
2016-01-01
To evaluate the extent of normal variation in implantable port devices between supine fluoroscopy and upright chest x-ray in relation to body mass index (BMI) based on three different measurement methods. Retrospectively, 80 patients with implanted central venous access port systems from 2012-01-01 until 2013-12-31 were analyzed. Three parameters (two quantitative and one semi-quantitative) were determined to assess port positions: projection of port capsule to anterior ribs (PCP) and intercostal spaces, ratio of extra- and intravascular catheter portions (EX/IV), normalized distance of catheter tip to carina (nCTCD). Changes were analyzed for males and females and normal-weight and overweight patients using analysis of variance with Bonferroni-corrected pairwise comparison. PCP revealed significantly greater changes in chest x-rays in overweight women than in the other groups (p<0.001, F-test). EX/IV showed a significantly higher increase in overweight women than normal-weight women and men and overweight men (p<0.001). nCTCD showed a significantly greater increase in overweight women than overweight men (p = 0.0130). There were no significant differences between the other groups. Inter- and intra-observer reproducibility was high (Cronbach alpha of 0.923-1.0) and best for EX/IV. Central venous port systems show wide normal variations in the projection of catheter tip and port capsule. In overweight women apparent catheter migration is significantly greater compared with normal-weight women and with men. The measurement of EX/IV and PCP are straightforward methods, quick to perform, and show higher reproducibility than measurement of catheter tip-to-carina distance.
Li, Qingbo; Hao, Can; Kang, Xue; Zhang, Jialin; Sun, Xuejun; Wang, Wenbo; Zeng, Haishan
2017-11-27
Combining Fourier transform infrared spectroscopy (FTIR) with endoscopy, it is expected that noninvasive, rapid detection of colorectal cancer can be performed in vivo in the future. In this study, Fourier transform infrared spectra were collected from 88 endoscopic biopsy colorectal tissue samples (41 colitis and 47 cancers). A new method, entropy weight local-hyperplane k-nearest-neighbor (EWHK), an improved version of K-local hyperplane distance nearest-neighbor (HKNN), is proposed for tissue classification. To avoid the limitations of high dimensionality and small nearest-neighbor values, the new EWHK method calculates feature weights based on information entropy. The average results of random classification showed that the EWHK classifier for differentiating cancer from colitis samples produced a sensitivity of 81.38% and a specificity of 92.69%.
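One common way to derive feature weights from information entropy is the entropy-weight scheme sketched below; the exact EWHK weighting may differ, and the toy spectra are invented:

```python
import numpy as np

def entropy_weights(X, eps=1e-12):
    """Entropy-weight scheme: features whose values are spread unevenly
    across samples carry more information and receive larger weights."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    p = (X + eps) / (X + eps).sum(axis=0)           # column-wise proportions
    e = -(p * np.log(p)).sum(axis=0) / np.log(n)    # per-feature entropy in [0, 1]
    return (1.0 - e) / (1.0 - e).sum()              # normalised weights

# Toy spectra: feature 0 is constant (uninformative), feature 1 varies strongly.
X = np.array([[1.0, 0.1],
              [1.0, 0.9],
              [1.0, 0.5],
              [1.0, 2.0]])
w = entropy_weights(X)   # w[1] >> w[0]
```

These weights would then scale each feature's contribution to the local-hyperplane distance before nearest-neighbor classification.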
Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence is highly conflicting, it may yield a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
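Dempster's rule of combination, which underlies the fusion step, can be sketched for two sensor reports (the mass values are illustrative; the paper's reliability-based weighting of the evidence is not reproduced):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass falling on the empty set
    k = 1.0 - conflict                     # normalisation constant
    return {s: v / k for s, v in combined.items()}

A, B = frozenset({'A'}), frozenset({'B'})
m1 = {A: 0.9, B: 0.1}                      # report from sensor 1
m2 = {A: 0.8, B: 0.2}                      # report from sensor 2
m = dempster_combine(m1, m2)               # agreement on fault A is reinforced
```

The normalisation by 1 - conflict is exactly what produces counterintuitive results under high conflict, which is why the paper discounts unreliable reports before combining.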
Law of Strata Pressure Behavior in Shallow Coal Seam
NASA Astrophysics Data System (ADS)
Zhao, Jian; Liu, Leibin; Zheng, Zhiyang
2018-02-01
The law of strata pressure behavior in a shallow coal seam is analyzed according to the load data of the hydraulic supports at the Jinjie Coal Mine 31109 working face. The first weighting distance of the main roof is 80 m, and the periodic weighting distance of the main roof is about 20 m. According to the load data at the middle and both ends of the working face, the working resistance and setting load of the hydraulic supports are somewhat low and could not meet the roof-support requirements. The front abutment pressure of the working face is then analyzed by numerical simulation, which explains not only why the load is so large but also why the strata pressure behavior in shallow coal seams is so severe. The length of the undamaged main roof rock beam verifies the correctness of the periodic weighting distance.
Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan
2018-04-01
Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work on a single biological data set and, in most cases, apply a single minimum-support cutoff globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation, and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL), for each rule by integrating the co-expression, co-methylation, and protein-protein interactions in the multi-omics data set. We develop the proposed algorithm using these three novel multiple-threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and it is subsequently verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions hold for a rule, the rule is treated as a resultant rule. One of the major advantages of the proposed method over related state-of-the-art methods is that it considers both the quantitative and interactive significance of all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules than previous methods.
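The per-rule threshold test can be sketched as follows. The DVS/DVC/DVL formulas themselves depend on the weighted shortest distances between genes and are not reproduced here, so the cutoffs below are placeholders:

```python
import numpy as np

def rule_metrics(tx, antecedent, consequent):
    """Support, confidence and lift of antecedent -> consequent over
    binary transactions (rows = samples, columns = genes)."""
    a = tx[:, antecedent].all(axis=1)
    c = tx[:, consequent].all(axis=1)
    supp = (a & c).mean()
    conf = (a & c).sum() / max(a.sum(), 1)
    lift = conf / max(c.mean(), 1e-12)
    return supp, conf, lift

def passes_dynamic_thresholds(metrics, cutoffs):
    """Accept a rule only if each metric clears its own per-rule cutoff
    (here fixed placeholders standing in for DVS, DVC, DVL)."""
    return all(m >= c for m, c in zip(metrics, cutoffs))

# Toy binary gene-activity matrix (4 samples x 3 genes)
tx = np.array([[1, 1, 0],
               [1, 1, 1],
               [0, 1, 1],
               [1, 0, 1]], dtype=bool)
m = rule_metrics(tx, [0], [1])             # rule: gene0 -> gene1
ok = passes_dynamic_thresholds(m, (0.4, 0.5, 0.8))
```

In the proposed algorithm the cutoffs vary per rule, so a rule between tightly linked genes must clear a different bar than one between distant genes.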
Topological Distances Between Brain Networks
Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.
2018-01-01
Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers. A few extreme edge weights may severely affect the distance. Thus it is necessary to develop network distances that recognize topology. In this paper, we introduce Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. GH-distance is often used in persistent homology based brain network models. The superior performance of KS-distance is contrasted against matrix norms and GH-distance in random network simulations with the ground truths. The KS-distance is then applied in characterizing the multimodal MRI and DTI study of maltreated children.
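A KS-type statistic between two weighted networks can be sketched by comparing empirical CDFs. Note that the paper's KS-distance is defined over monotone topological features of a network filtration; the version below, which compares the edge-weight distributions directly, is a simplified stand-in:

```python
import numpy as np

def ks_network_distance(W1, W2):
    """Maximum gap between the empirical CDFs of two networks' edge weights
    (a simplified KS-type statistic over the upper-triangular entries)."""
    e1 = W1[np.triu_indices_from(W1, k=1)]
    e2 = W2[np.triu_indices_from(W2, k=1)]
    grid = np.union1d(e1, e2)
    cdf1 = np.searchsorted(np.sort(e1), grid, side='right') / e1.size
    cdf2 = np.searchsorted(np.sort(e2), grid, side='right') / e2.size
    return float(np.abs(cdf1 - cdf2).max())

rng = np.random.default_rng(1)
A = rng.random((10, 10)); A = (A + A.T) / 2          # toy connectivity matrix
d_same = ks_network_distance(A, A)                   # identical networks -> 0
d_diff = ks_network_distance(A, (A + 1.0) / 2.0)     # shifted weights -> > 0
```

Unlike a matrix norm, a distributional statistic like this is insensitive to a few extreme edge weights, which is the motivation the abstract gives.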
Sharkey, Joseph R; Horel, Scott; Han, Daikwon; Huber, John C
2009-01-01
Objective To determine the extent to which neighborhood needs (socioeconomic deprivation and vehicle availability) are associated with two criteria of food environment access: 1) distance to the nearest food store and fast food restaurant and 2) coverage (number) of food stores and fast food restaurants within a specified network distance of neighborhood areas of colonias, using ground-truthed methods. Methods Data included locational points for 315 food stores and 204 fast food restaurants, and neighborhood characteristics from the 2000 U.S. Census for the 197 census block group (CBG) study area. Neighborhood deprivation and vehicle availability were calculated for each CBG. Minimum distance was determined by calculating network distance from the population-weighted center of each CBG to the nearest supercenter, supermarket, grocery, convenience store, dollar store, mass merchandiser, and fast food restaurant. Coverage was determined by calculating the number of each type of food store and fast food restaurant within a network distance of 1, 3, and 5 miles of each population-weighted CBG center. Neighborhood need and access were examined using Spearman ranked correlations, spatial autocorrelation, and multivariate regression models that adjusted for population density. Results Overall, neighborhoods had best access to convenience stores, fast food restaurants, and dollar stores. After adjusting for population density, residents in neighborhoods with increased deprivation had to travel a significantly greater distance to the nearest supercenter or supermarket, grocery store, mass merchandiser, dollar store, and pharmacy for food items. The results were quite different for association of need with the number of stores within 1 mile. Deprivation was only associated with fast food restaurants; greater deprivation was associated with fewer fast food restaurants within 1 mile. 
CBGs with lower vehicle availability had slightly better access to supercenters or supermarkets, grocery stores, and fast food restaurants. Increasing deprivation was associated with decreasing numbers of grocery stores, mass merchandisers, dollar stores, and fast food restaurants within 3 miles. Conclusion It is important to understand not only the distance that people must travel to the nearest store to make a purchase, but also how many shopping opportunities they have in order to compare price, quality, and selection. Future research should examine how spatial access to the food environment influences the utilization of food stores and fast food restaurants, and the strategies used by low-income families to obtain food for the household. PMID:19220879
Socioeconomic Position and Low Birth Weight among Mothers Exposed to Traffic-Related Air Pollution
Habermann, Mateus; Gouveia, Nelson
2014-01-01
Background Atmospheric pollution is a major public health concern. It can affect placental function and restrict fetal growth. However, scientific knowledge remains too limited to make inferences regarding causal associations between maternal exposure to air pollution and adverse effects on pregnancy. This study evaluated the association between low birth weight (LBW) and maternal exposure during pregnancy to traffic-related air pollutants (TRAP) in São Paulo, Brazil. Methods and findings The analysis included 5,772 cases of term LBW (<2,500 g) and 5,814 controls matched by sex and month of birth, selected from the birth registration system. Mothers' addresses were geocoded to estimate exposure according to three indicators: distance from home to heavy-traffic roads, distance-weighted traffic density (DWTD), and levels of particulate matter with aerodynamic diameter ≤10 µm estimated through land use regression (LUR-PM10). Final models were evaluated using multiple logistic regression adjusting for birth, maternal, and pregnancy characteristics. We found decreased odds of LBW associated with DWTD and LUR-PM10 in the highest quartiles of exposure, with a significant linear trend of decreasing risk. The analysis with distance from heavy-traffic roads was less consistent. It was also observed that mothers with higher education and neighborhood-level income were potentially more exposed to TRAP. Conclusions This study found an unexpected decreased risk of LBW associated with traffic-related air pollution. Mothers with an advantaged socioeconomic position (SEP), although residing in areas of higher vehicular traffic, might not in fact be more exposed to air pollution. It may also be that the protection against LBW arising from a better SEP is stronger than the effect of exposure to air pollution, and this exposure may not be sufficient to increase the risk of LBW for these mothers. PMID:25426640
Blood pressure and arterial stiffness in obese children and adolescents.
Hvidt, Kristian Nebelin
2015-03-01
Obesity, elevated blood pressure (BP) and arterial stiffness are risk factors for cardiovascular disease. A strong relationship exists between obesity and elevated BP in both children and adults. Obesity and elevated BP in childhood track into adult life, increasing the risk of cardiovascular disease in adulthood. Ambulatory BP is the most precise measure of the BP burden, whereas carotid-femoral pulse wave velocity (cfPWV) is regarded as the gold standard for evaluating arterial (i.e. aortic) stiffness. These measures might contribute to a better understanding of obesity's adverse impact on the cardiovascular system, and ultimately to better prevention and treatment of childhood obesity. The overall aim of the present PhD thesis is to investigate arterial stiffness and 24-hour BP in obese children and adolescents, and to evaluate whether these measures are influenced by weight reduction. The present PhD thesis is based on four scientific papers. In a cross-sectional design, 104 severely obese children and adolescents aged 10-18 years were recruited when newly referred to the Children's Obesity Clinic, Holbæk University Hospital, and compared to 50 normal-weight, age- and gender-matched control individuals. Ambulatory BP was measured, and cfPWV was investigated in two ways with respect to the aortic distance measure: the previously recommended length, the so-called subtracted distance, and the currently recommended length, the direct distance. In a longitudinal design, the obese patients were re-investigated after one year of lifestyle intervention at the Children's Obesity Clinic aimed at reducing the degree of obesity. In the cross-sectional design, the obese group had higher measures of obesity, while matched for age, gender and height, when compared to the control group. In the longitudinal design, 74% of the 72 followed-up obese patients experienced a significant weight reduction.
CfPWV was dependent on the method used to measure the length of the aorta. The subtracted distance was not consistent in its relation to height in the obese and control groups. In contrast, the direct distance was consistent in its relation to height in the two groups. Therefore, cfPWV using the direct distance (cfPWV-direct) was regarded as the appropriate measure of arterial stiffness. CfPWV-direct was reduced in the obese group after adjustment for known confounders. In the longitudinal design, weight reduction across one year did not have an impact on cfPWV-direct in the obese patients. In fact, cfPWV-direct was higher at follow-up, which was explained by the increased age and partly by changes in BP and heart rate. The obese group had a relatively higher night-time than day-time BP when compared to the control group. The obesity-related elevated night-time BP was independent of arterial stiffness and insulin resistance. Although night-time systolic BP was related to arterial stiffness and tended to be related to insulin resistance, insulin resistance and arterial stiffness were not related to each other. In the longitudinal design, changes in anthropometric obesity measures across one year were associated with changes in 24-hour, day- and night-time BP, and this was consistent when evaluated in standardised values that accounted for growth. No association was found between changes in anthropometric obesity measures and changes in clinic BP. In conclusion, the results suggest that obesity in children is not "yet" associated with structural changes in the aorta when evaluated with the appropriate new method of cfPWV. In this respect, weight reduction did not have an impact on arterial stiffness. The ambulatory BP, namely the night-time BP, was elevated in the obese patients, whereas changes in anthropometric obesity measures were related to changes in ambulatory BP but not to changes in clinic BP.
In perspective, it is reassuring that weight changes are accompanied by a change in 24-hour BP, as ambulatory BP is the most precise measure of the BP burden, and this emphasises the use of 24-hour ambulatory BP measurements in children and adolescents. It is important to recognise that obese children who recover their normal weight before adulthood will have a cardiovascular risk similar to those who were never obese. Hence, early treatment and prevention of childhood obesity is important because it may prevent irreversible damage to the cardiovascular system.
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features by using distribution of coefficients obtained by building signatures from the distribution of wavelet transform. The research is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013
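The weighted distance between signatures can be sketched as a weighted L1 comparison; the subband weights and signatures below are invented for illustration, not the paper's learned values:

```python
import numpy as np

def weighted_signature_distance(s1, s2, w):
    """Weighted L1 distance between two image signatures; the weights let
    some wavelet subbands count more than others in retrieval."""
    return float(np.sum(w * np.abs(np.asarray(s1) - np.asarray(s2))))

# Toy 3-component signatures for a query image and a small database
query = np.array([0.2, 0.5, 0.1])
db = {"img_a": np.array([0.25, 0.45, 0.1]),
      "img_b": np.array([0.8, 0.1, 0.4])}
w = np.array([1.0, 2.0, 0.5])              # illustrative subband weights

# Rank database images by distance to the query (closest first)
ranked = sorted(db, key=lambda k: weighted_signature_distance(query, db[k], w))
```

Retrieval then returns the top-ranked images; tuning `w` (e.g. by an optimization process, as in the abstract) changes which subbands dominate the ranking.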
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-06-01
The spatial interpolation method is the basis of soil heavy metal pollution assessment and remediation, yet existing evaluation indices for interpolation accuracy are not tied to the actual situation; the selection of an interpolation method should be based on the specific research purpose and the characteristics of the research object. In this paper, As (arsenic) pollution in the soils of Beijing was taken as an example. The prediction accuracy of ordinary kriging (OK) and inverse distance weighting (IDW) was evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under a given degree of spatial correlation, the cross-validation results of OK and IDW at each soil sampling point, and their prediction accuracy for the overall spatial distribution trend, are similar. However, the prediction accuracy of OK for the maxima and minima is lower than that of IDW, and OK identifies fewer high-pollution areas than IDW; it is difficult for OK to fully identify high-pollution areas, which shows that its smoothing effect is pronounced. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the IDW result, identifying high-pollution areas more comprehensively. However, because the semivariogram used by OK is constructed more subjectively and requires a larger number of soil samples, IDW is more suitable for the spatial prediction of heavy metal pollution in soils.
Applying the Weighted Horizontal Magnetic Gradient Method to a Simulated Flaring Active Region
NASA Astrophysics Data System (ADS)
Korsós, M. B.; Chatterjee, P.; Erdélyi, R.
2018-04-01
Here, we test the weighted horizontal magnetic gradient (WG_M) as a flare precursor, introduced by Korsós et al., by applying it to a magnetohydrodynamic (MHD) simulation of solar-like flares. The preflare evolution of the WG_M and the behavior of the distance parameter between the area-weighted barycenters of opposite-polarity sunspots at various heights are investigated in the simulated δ-type sunspot. Four flares emanated from this sunspot. We found the optimum heights above the photosphere at which the flare precursors of the WG_M method are identifiable prior to each flare. These optimum heights agree reasonably well with the heights of flare occurrence identified from the analysis of their thermal and ohmic heating signatures in the simulation. We also estimated the expected flare onset times from the duration of the approaching-receding motion of the barycenters of opposite polarities before each flare. The estimated onset time and the actual time of occurrence of each flare are in good agreement at the corresponding optimum heights. This numerical experiment further supports the use of flare precursors based on the WG_M method.
Delay Times From Clustered Multi-Channel Cross Correlation and Simulated Annealing
NASA Astrophysics Data System (ADS)
Creager, K. C.; Sambridge, M. S.
2004-12-01
Several techniques exist to estimate relative delay times of seismic phases based on the assumption that the waveforms observed at several stations can be expressed as a common waveform that has been time shifted and distorted by random uncorrelated noise. We explore the more general problem of estimating relative delay times for regional or even global distributions of seismometers in cases where waveforms vary systematically across the array. The estimation of relative delay times is formulated as a global optimization of the weighted sum of squares of cross correlations of each seismogram pair, evaluated at the corresponding difference in their relative delay times. As there are many local minima in this penalty function, a simulated annealing algorithm is used to obtain a solution. The weights depend strongly on the separation distance of seismogram pairs as well as on a measure of the similarity of their waveforms. Thus, seismograph pairs that are physically close to each other and have similar waveforms are expected to be well aligned, while those with dissimilar waveforms or large separation distances are severely down-weighted and thus need not be well aligned. As a result, noisy seismograms, which are not similar to other seismograms, are down-weighted so they do not adversely affect the relative delay times of other seismograms. Finally, natural clusters of seismograms are determined from the weight matrix. Examples of aligning a few hundred P and PKP waveforms from a broadband global array and from a mixed broadband and short-period continental-scale array will be shown. While this method has applications in many situations, it may be especially useful for arrays such as the EarthScope Bigfoot Array.
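The objective described above can be sketched as follows. This is a simplified illustration, not the authors' code: it evaluates a plain weighted sum of pairwise cross-correlations (unsquared) at the relative delay differences, with the weight matrix supplied externally; a simulated annealing search would maximize `penalty` over the delay vector.

```python
def xcorr(a, b, lag):
    """Normalized cross-correlation of sequences a, b at an integer lag."""
    n = len(a)
    pairs = [(a[i], b[i - lag]) for i in range(n) if 0 <= i - lag < n]
    num = sum(x * y for x, y in pairs)
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return num / (na * nb)

def penalty(seismograms, delays, weights):
    """Weighted sum of pairwise cross-correlations at relative delays.

    weights[i][j] down-weights dissimilar or distant station pairs, so
    only close, similar pairs must align well to score highly.
    """
    total = 0.0
    n = len(seismograms)
    for i in range(n):
        for j in range(i + 1, n):
            total += weights[i][j] * xcorr(
                seismograms[i], seismograms[j], delays[i] - delays[j])
    return total

# s2 is s1 delayed by one sample, so delays [0, 1] align them perfectly
s1, s2 = [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]
score = penalty([s1, s2], delays=[0, 1], weights=[[0, 1.0], [1.0, 0]])  # -> 1.0
```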
Maintaining vigorous activity attenuates 7-yr weight gain in 8340 runners.
Williams, Paul T
2007-05-01
Body weight generally increases with aging in Western societies. Although training studies show that exercise produces acute weight loss, it is unclear whether the long-term maintenance of vigorous exercise attenuates the trajectory of age-related weight gain. Specifically, prior studies have not tested whether the maintenance of physical activity, in the absence of any change in activity, prevents weight gain. Prospective study of 6119 male and 2221 female runners whose running distances changed <5 km·wk⁻¹ between baseline and follow-up surveys 7 yr later. On average, men who maintained modest (0-23 km·wk⁻¹), intermediate (24-47 km·wk⁻¹), or prolonged running distances (≥48 km·wk⁻¹) all gained weight through age 64; however, those who maintained ≥48 km·wk⁻¹ had one half the average annual weight gain of those who maintained <24 km·wk⁻¹. For example, between the ages of 35 and 44 yr in men and 30 and 39 yr in women, those who maintained <24 km·wk⁻¹ gained, on average, 2.1 and 2.9 kg more per decade than those averaging >48 km·wk⁻¹. Age-related weight gain, and its attenuation by maintained exercise, were both greater in younger than in older men. Men's gains in waist circumference with age, and their attenuation by maintained running, were the same in older and younger men. Regardless of age, women increased their body weight, waist circumference, and hip circumference over time, and these increases were attenuated in proportion to their maintained running distance. In both sexes, running disproportionately prevented more extreme increases in weight. As they aged, men and women gained less weight in proportion to their levels of sustained vigorous activity. This long-term beneficial effect is in addition to the acute weight loss that occurs with increased activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buta, R.; de Vaucouleurs, G.
The diameters d_r of inner ring structures in disk galaxies are used as geometric distance indicators to derive the distances of 453 spiral and lenticular galaxies, mainly in the distance interval 4 < Δ < 63 Mpc. The diameters are weighted means from the catalogs of Kormendy, of Pedreros and Madore, and of the authors. The distances are calculated by means of the two- and three-parameter formulae of Paper II; the adopted mean distance moduli μ_0(r) have mean errors from all sources of 0.6-0.7 mag for the well-observed galaxies.
Genetic divergence in the common bean (Phaseolus vulgaris L.) in the Cerrado-Pantanal ecotone.
da Silva, F A; Corrêa, A M; Teodoro, P E; Lopes, K V; Corrêa, C C G
2017-03-30
Evaluating genetic diversity among genotypes is important for providing parameters for the identification of superior genotypes, because the choice of parents that form segregating populations is crucial. Our objectives were to i) evaluate agronomic performance; ii) compare clustering methods; iii) ascertain the relative contributions of the variables evaluated; and iv) identify the most promising hybrids to produce superior segregating populations. The trial was conducted in 2015 at the State University of Mato Grosso do Sul, Brazil. We used a randomized block design with three replications, and recorded the days to emergence, days to flowering, days to maturity, plant height, number of branches, number of pods, number of seeds per pod, weight of 100 grains, and productivity. The genetic diversity of the genotypes was determined by cluster analysis with the Ward hierarchical method using two dissimilarity measures: the Euclidean distance and the standardized mean Mahalanobis distance. The genotypes 'CNFC 10762', 'IAC Alvorada', and 'BRS Style' had the highest grain yields, and clusters based on the Euclidean distance differed from those based on the Mahalanobis distance, the latter being more precise. Grain yield made the greatest relative contribution to the genetic divergence. Hybrids with a high heterotic effect can be obtained by crossing 'IAC Alvorada' with 'CNFC 10762', 'IAC Alvorada' with 'CNFC 10764', and 'BRS Style' with 'IAC Alvorada'.
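The two dissimilarity measures compared above differ in that the Mahalanobis distance accounts for the covariance among traits before measuring separation. A minimal two-trait sketch (not the study's code; the covariance matrix and trait values are illustrative):

```python
def mahalanobis_2d(u, v, cov):
    """Mahalanobis distance between 2-D vectors u and v for covariance cov.

    cov is a 2x2 pooled (residual) covariance matrix; the distance
    standardizes and de-correlates traits before measuring separation.
    With the identity covariance it reduces to the Euclidean distance.
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [u[0] - v[0], u[1] - v[1]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return q ** 0.5

# Hypothetical genotype means for two traits, e.g. (100-grain weight, yield)
identity = [[1.0, 0.0], [0.0, 1.0]]
d = mahalanobis_2d([1.0, 2.0], [4.0, 6.0], identity)  # Euclidean special case
```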
Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.
2014-06-17
A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
A multi-criteria spatial deprivation index to support health inequality analyses.
Cabrera-Barona, Pablo; Murphy, Thomas; Kienberger, Stefan; Blaschke, Thomas
2015-03-20
Deprivation indices are useful measures for analyzing health inequalities. There are several methods to construct these indices; however, few studies have used Geographic Information Systems (GIS) and multi-criteria methods to construct a deprivation index. Therefore, this study applies Multi-Criteria Evaluation to calculate weights for the indicators that make up the deprivation index, and a GIS-based fuzzy approach is implemented to create different scenarios of this index. The Analytical Hierarchy Process (AHP) is used to obtain the weights for the indicators of the index. The Ordered Weighted Averaging (OWA) method using linguistic quantifiers is applied in order to create different deprivation scenarios. Geographically Weighted Regression (GWR) and a Moran's I analysis are employed to explore spatial relationships between the different deprivation measures and two health factors: the distance to health services and the percentage of people who have never had a live birth. This last indicator was considered the dependent variable in the GWR. The case study is Quito City, Ecuador. The AHP-based deprivation index shows medium and high levels of deprivation (0.511 to 1.000) in specific zones of the study area, even though most of the study area has low values of deprivation. OWA results show deprivation scenarios that can be evaluated considering the different attitudes of decision makers. GWR results indicate that the deprivation index and its OWA scenarios can be considered local estimators for health-related phenomena. Moran's I calculations demonstrate that several deprivation scenarios, in combination with the 'distance to health services' factor, could be explanatory variables to predict the percentage of people who have never had a live birth.
The AHP-based deprivation index and the OWA deprivation scenarios developed in this study are Multi-Criteria instruments that can support the identification of highly deprived zones and can support health inequalities analysis in combination with different health factors. The methodology described in this study can be applied in other regions of the world to develop spatial deprivation indices based on Multi-Criteria analysis.
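The OWA operator at the core of the scenario generation can be sketched in a few lines. This is a generic illustration, not the study's implementation; the indicator values are made up, and the linguistic quantifiers are represented directly by weight vectors.

```python
def owa(values, weights):
    """Ordered weighted average: sort values descending, then apply
    the weight vector positionally (weights sum to 1).

    Different weight vectors encode decision attitudes: [1, 0, ..., 0]
    is optimistic (max), [0, ..., 0, 1] pessimistic (min), and a
    uniform vector recovers the arithmetic mean.
    """
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ranked))

# Hypothetical normalized deprivation indicators for one zone
indicators = [0.9, 0.4, 0.2]
pessimistic = owa(indicators, [0.0, 0.0, 1.0])  # "all"-type quantifier -> min
```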
Unifying cost and information in information-theoretic competitive learning.
Kamimura, Ryotaro
2005-01-01
In this paper, we introduce costs into the framework of information maximization and try to maximize the ratio of information to its associated cost. We have shown that competitive learning is realized by maximizing mutual information between input patterns and competitive units. One shortcoming of the method is that maximizing information does not necessarily produce representations faithful to input patterns: information maximization primarily focuses on those parts of input patterns that serve to distinguish between patterns. Therefore, we introduce a cost, the average distance between input patterns and connection weights. By minimizing the cost, the final connection weights reflect the input patterns well. We applied the method to a political data analysis, a voting attitude problem, and a Wisconsin cancer problem. Experimental results confirmed that, when the cost was introduced, representations faithful to input patterns were obtained. In addition, improved generalization performance was obtained within a relatively short learning time.
Evidence conflict measure based on OWA operator in open world
Wang, Shiyu; Liu, Xiang; Zheng, Hanqing; Wei, Boya
2017-01-01
Dempster-Shafer evidence theory has been extensively used in many information fusion systems since it was proposed by Dempster and extended by Shafer. Much research has been conducted on conflict management in Dempster-Shafer evidence theory over the past decades. However, how to determine a potent parameter to measure evidence conflict when the given environment is an open world, namely when the frame of discernment is incomplete, is still an open issue. In this paper, a new method is presented that combines the generalized conflict coefficient, the generalized evidence distance, and the generalized interval correlation coefficient based on the ordered weighted averaging (OWA) operator to measure the conflict of evidence. Through the ordered weighted average of these three parameters, the combined coefficient can still measure the conflict effectively when one or two parameters are not valid. Several numerical examples demonstrate the effectiveness of the proposed method. PMID:28542271
de Assis Pereira Cacau, Lucas; Carvalho, Vitor Oliveira; Dos Santos Pin, Alessandro; Araujo Daniel, Carlos Raphael; Ykeda, Daisy Satomi; de Carvalho, Eliane Maria; Francica, Juliana Valente; Faria, Luíza Martins; Gomes-Neto, Mansueto; Fernandes, Marcelo; Velloso, Marcelo; Karsten, Marlus; de Sá Barros, Patrícia; de Santana-Filho, Valter Joviniano
2018-03-01
Brazil is a country with great climatic, socioeconomic, and cultural differences that does not yet have a reference value for the 6-min walk test (6MWT) in healthy children. To avoid misinterpretation, the use of equations to predict the maximum walk distance should be established in each country. We sought to establish reference values and to develop an equation to predict the 6-min walk distance for healthy children in Brazil. This is a cross-sectional multi-center study that included 1,496 healthy children, aged 7 to 12 y, assessed across 11 research sites in all regions of Brazil, and recruited from public and private schools in their respective regions. Each child was assessed for weight and height. Walk distance was our main outcome. An open-source software environment for statistical computing was used for statistical analysis. We observed a higher average distance walked by boys (531.1 m) than by girls (506.2 m), with a difference of 24.9 m ( P < .001). We established 6MWT reference values for boys with the following equation: Distance = (16.86 × age) + (1.89 × Δ heart rate) - (0.80 × weight) + (336.91 × R1) + (360.91 × R2). For girls the equation is as follows: Distance = (13.54 × age) + (1.62 × Δ heart rate) - (1.28 × weight) + (352.33 × R1) + (394.81 × R2). Reference values were established for the 6MWT in healthy children aged 7-12 y in Brazil. Copyright © 2018 by Daedalus Enterprises.
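The published prediction equations can be applied directly. A minimal sketch; note that the abstract does not state how Brazil's regions map onto the dummy variables R1 and R2, so they are passed in as-is here rather than derived from a region name.

```python
def predicted_6mwt_distance(sex, age, delta_hr, weight, r1=0, r2=0):
    """Predicted 6-min walk distance (m) from the equations in the abstract.

    age in years, delta_hr in beats/min (change in heart rate during the
    test), weight in kg; r1 and r2 are the study's region indicators.
    """
    if sex == "boy":
        return (16.86 * age + 1.89 * delta_hr - 0.80 * weight
                + 336.91 * r1 + 360.91 * r2)
    return (13.54 * age + 1.62 * delta_hr - 1.28 * weight
            + 352.33 * r1 + 394.81 * r2)

# Hypothetical 10-year-old boy, heart-rate increase 40 beats/min, 35 kg, R1 = 1
d_boy = predicted_6mwt_distance("boy", age=10, delta_hr=40, weight=35, r1=1)
```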
Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan
2017-08-07
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. 
This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
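The combined population and areal weighting method favored above can be sketched for a single county. This is a simplified illustration under the usual assumption that each tract's population is spread uniformly over its area; the counts and tract figures are hypothetical.

```python
def combined_weight_allocation(county_count, tracts):
    """Allocate a county mortality count to a target zone using combined
    population and areal weighting.

    tracts: list of (population, frac_area_in_zone) pairs for each census
    tract in the county. Each tract's population is assumed uniform over
    its area, so the zone receives population * area-fraction per tract,
    and the county count is split in proportion to zone population.
    """
    pop_in_zone = sum(p * f for p, f in tracts)
    pop_total = sum(p for p, _ in tracts)
    return county_count * pop_in_zone / pop_total

# Hypothetical county: 10 deaths; one tract fully inside the zone,
# one tract with a quarter of its area inside
est = combined_weight_allocation(10, [(8000, 1.0), (2000, 0.25)])  # -> 8.5
```

Simple areal weighting would instead split the count by area alone, ignoring where people actually live; combining the two is what improved agreement with observed counts in the study.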
27 CFR 555.224 - Table of distances for the storage of display fireworks (except bulk salutes).
Code of Federal Regulations, 2010 CFR
2010-04-01
Net weight of firework(1) (pounds) | Distance between magazine and inhabited building, passenger railway, or public highway(3,4) (feet) | Distance between magazines(2,3) (feet)
0-1,000 | 150 | 100
1,001-5,000 | 230 | 150
...
(2) "Magazine" also includes fireworks shipping buildings for display fireworks. (3) For fireworks storage...
Yeung, S; Genaidy, A; Deddens, J; Shoaf, C; Leung, P
2003-01-01
Aims: To investigate the use of a worker-based methodology to assess the physical stresses of lifting tasks on effort expended, and to associate this loading with musculoskeletal outcomes (MO). Methods: A cross-sectional study was conducted on 217 male manual handling workers from the Hong Kong area. The effects of four lifting variables (weight of load, horizontal distance, twisting angle, and vertical travel distance) on effort were examined using a linguistic approach (that is, characterising variables in descriptors such as "heavy" for weight of load). The numerical interpretations of the linguistic descriptors were established. In addition, the associations between on-the-job effort and MO were investigated for 10 body regions including the spine and both upper and lower extremities. Results: MO were prevalent in multiple body regions (range 12–58%); effort was significantly associated with MO in 8 of 10 body regions (age-adjusted odds ratios ranged from 1.31 for the low back to 1.71 for the elbows and forearms). The lifting task variables had significant effects on effort, with the weight of load having twice the effect of the other variables; each linguistic descriptor was better described by a range of numerical values than by a single numerical value. Conclusions: The participatory worker-based approach to musculoskeletal outcomes is a promising methodology. Further testing of this approach is recommended. PMID:14504360
Quantitative Structure Retention Relationships of Polychlorinated Dibenzodioxins and Dibenzofurans
1991-08-01
be a projection onto the X-Y plane. The algorithm for this calculation can be found in Stouch and Jurs (22), but was further refined by Rohrbaugh and... through-space distances. WPSA2 (c): weighted positive charged surface area. MOMH2 (c): second major moment of inertia with hydrogens attached. CSTR 3 (d): sum... of the models. The robust regression analysis method calculates a regression model using a least median squares algorithm, which is not as susceptible
Kandhasamy, Chandrasekaran; Ghosh, Kaushik
2017-02-01
Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, percentage of infected attendees in antenatal clinics, and percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices and incorporates all available covariate information. We use R and WinBugs software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model and the states are then classified into the risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on the female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.
Method for star identification using neural networks
NASA Astrophysics Data System (ADS)
Lindsey, Clark S.; Lindblad, Thomas; Eide, Age J.
1997-04-01
Identification of star constellations with an onboard star tracker provides the highest precision of all attitude determination techniques for spacecraft. A method for identification of star constellations inspired by neural network (NNW) techniques is presented. It compares feature vectors derived from histograms of distances to multiple stars around the unknown star. The NNW method appears most robust with respect to position noise and would require a smaller database than conventional methods, especially for small fields of view. The neural network method is quite slow when performed on a sequential (serial) processor, but would provide very high speed if implemented in special hardware. Such hardware solutions could also yield low weight and low power consumption, both important features for small satellites.
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
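The weighted harmonic average underlying the GEN can be sketched as follows. This shows only the averaging ingredient, with illustrative edge weights, not the full self-consistent GEN computation over a whole graph.

```python
def weighted_harmonic_average(values, weights):
    """Weighted harmonic mean: the reciprocal of the weighted mean of
    reciprocals. Strong (high-weight, short-"distance") connections
    dominate the result, which is the asymmetry-generating ingredient
    of the GEN closeness measure.
    """
    wsum = sum(weights)
    return wsum / sum(w / v for w, v in zip(values and weights, values) if False) if False else \
        wsum / sum(w / v for v, w in zip(values, weights))

# A node connected to neighbors at effective distances 1 and 4,
# with edge weights 3 and 1 (e.g., numbers of joint papers)
closeness = weighted_harmonic_average([1.0, 4.0], [3.0, 1.0])
```

The heavily weighted short connection dominates: the result is pulled toward 1 rather than toward the arithmetic mean of 2.5.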
A technique for integrating engine cycle and aircraft configuration optimization
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.
1994-01-01
A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well-established codes. The ability of the optimization techniques to locate an optimum is demonstrated, and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45, with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom, fixed takeoff gross weight concept that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly Mach 2.4 overwater and subsonically overland.
The results indicate that for the design mission, the low boom concept has a 5 percent total range penalty relative to the baseline. Additional cycles were optimized for various design overland distances and the effect of flying off-design overland distances is illustrated.
Combining coordination of motion actuators with driver steering interaction.
Tagesson, Kristoffer; Laine, Leo; Jacobson, Bengt
2015-01-01
A new method is suggested for coordination of vehicle motion actuators in which driver feedback and capabilities become natural elements in the prioritization. The method uses a weighted least squares control allocation formulation, where driver characteristics can be added as virtual force constraints. The approach is particularly suitable for heavy commercial vehicles, which are in general over-actuated. The method is applied, in a specific use case, by simulating a truck applying automatic braking on a split-friction surface. Here the required driver steering angle, needed to maintain the intended direction, is limited by a constant threshold; this constant is automatically accounted for when balancing actuator usage. Simulation results show that the actual required driver steering angle can be expected to match the set constant well. Furthermore, the stopping distance is, as expected, strongly affected by this assumed capability of the driver to handle the lateral disturbance. In general, the capability of the driver to handle disturbances should be estimated in real time, considering the driver's mental state; using the method, it is then possible to estimate, e.g., the stopping distance implied by it. The setup even has the potential of shortening the stopping distance when the driver is estimated as active, compared to currently available systems. The approach is feasible for real-time applications and requires only measurable vehicle quantities for parameterization. Other suitable applications within the scope of the method would be electronic stability control, lateral stability control at launch, and optimal cornering arbitration.
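The weighted least squares allocation idea can be sketched in its simplest form: a weighted minimum-norm solution for one virtual force and redundant actuators. This is a toy illustration, not the authors' formulation, which also handles constraints such as the driver steering-angle threshold.

```python
def allocate(b, w, v):
    """Weighted minimum-norm control allocation for one virtual force.

    Solves min sum_i w_i * u_i^2 subject to sum_i b_i * u_i = v, the
    unconstrained core of a weighted least-squares allocator. A larger
    w_i discourages using actuator i (e.g., to respect a limit on the
    steering effort demanded of the driver).
    """
    s = sum(bi * bi / wi for bi, wi in zip(b, w))
    return [(bi / wi) * v / s for bi, wi in zip(b, w)]

# Two equally effective brake actuators; penalize the second one 4x
u = allocate(b=[1.0, 1.0], w=[1.0, 4.0], v=1.0)  # -> [0.8, 0.2]
```

The commanded virtual force is met exactly (0.8 + 0.2 = 1.0), but usage shifts toward the cheaper actuator; raising a weight toward infinity effectively removes that actuator from the solution.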
DiLillo, Vicki; Ingle, Krista; Harvey, Jean Ruth; West, Delia Smith
2016-01-01
Background While Internet-based weight management programs can facilitate access to and engagement in evidence-based lifestyle weight loss programs, the results have generally not been as effective as in-person programs. Furthermore, motivational interviewing (MI) has shown promise as a technique for enhancing weight loss outcomes within face-to-face programs. Objective This paper describes the design, intervention development, and analysis of a therapist-delivered online MI intervention for weight loss in the context of an online weight loss program. Methods The MI intervention is delivered within the context of a randomized controlled trial examining the efficacy of an 18-month, group-based, online behavioral weight control program plus individually administered, synchronous online MI sessions relative to the group-based program alone. Six individual 30-minute MI sessions are conducted in private chat rooms over 18 months by doctoral-level psychologists. Sessions use a semistructured interview format for content and session flow and incorporate core MI components (eg, collaborative agenda setting, open-ended questions, reflective listening and summary statements, objective data, and a focus on evoking and amplifying change talk). Results The project was funded in 2010 and enrollment was completed in 2012. Data analysis is currently under way and the first results are expected in 2016. Conclusions This is the first trial to test the efficacy of a synchronous online, one-on-one MI intervention designed to augment an online group behavioral weight loss program. If the addition of MI sessions proves to be successful, this intervention could be disseminated to enhance other distance-based weight loss interventions. Trial Registration Clinicaltrials.gov NCT01232699; https://clinicaltrials.gov/ct2/show/NCT01232699 PMID:27095604
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting the components of the data by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. The method is also capable of multiple source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function of the trial source position of a deep tectonic tremor. The optimization proceeds in two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as an initial value, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes several source locations are determined in a single 5-minute time window. We estimate the resolution, defined as the minimum distance at which sources can be detected separately by the location method, to be about 100 km; the validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms in western Japan spanning over 10 years, the new method detected 27% more tremors than the previous method, owing to the multiple detection and to the improvement in accuracy from the appropriate weighting scheme.
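The ACC grid search can be sketched in one dimension. This toy version is not the authors' code: each pair's cross-correlation function is modeled as a Gaussian peaked at an observed differential travel time, pairs are weighted by inverse error variance, and the medium velocity is set to 1 for brevity.

```python
import math

def acc(x, stations, obs_dt, var):
    """Toy 1-D averaged weighted cross-correlation (ACC) at trial position x.

    stations: 1-D station coordinates; obs_dt[(i, j)]: observed
    differential time for pair (i, j); var[(i, j)]: its error variance,
    whose inverse serves as the pair weight.
    """
    num = den = 0.0
    n = len(stations)
    for i in range(n):
        for j in range(i + 1, n):
            pred = abs(x - stations[i]) - abs(x - stations[j])
            w = 1.0 / var[(i, j)]
            num += w * math.exp(-0.5 * (pred - obs_dt[(i, j)]) ** 2 / var[(i, j)])
            den += w
    return num / den

def locate(stations, obs_dt, var, grid):
    """Grid search for the trial position maximizing ACC."""
    return max(grid, key=lambda x: acc(x, stations, obs_dt, var))

# Two stations at 0 and 10; a source at x = 3 gives dt = 3 - 7 = -4
best = locate([0.0, 10.0], {(0, 1): -4.0}, {(0, 1): 1.0},
              [i * 0.5 for i in range(21)])  # -> 3.0
```

In the full method, each grid maximum seeds a gradient refinement, and several well-separated maxima in one window yield multiple detections.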
Effect of weight reduction on quality of life and eating behaviors in obese women.
Lemoine, Sophie; Rossell, Nadia; Drapeau, Vicky; Poulain, Magali; Garnier, Sophie; Sanguignol, Frédéric; Mauriège, Pascale
2007-01-01
To examine the impact of a 3-week weight-reducing program on body composition, physical condition, health-related quality of life, and eating behaviors of sedentary, obese (body mass index, 29-35 kg/m²) women, according to menopausal status and menopause duration (<5, ≥5, and ≥10 y). Thirteen premenopausal and 27 postmenopausal women received a dietary plan of 1,400 ± 200 kcal/day and completed 110-minute endurance exercise 6 days per week. Body mass index, fat mass, lean mass, distance walked in the Six-Minute Walk Test, health-related quality of life estimated by the 36-item Short Form Health Survey (SF-36), and eating behaviors (restriction, disinhibition, and susceptibility to hunger) assessed by the Three-Factor Eating Questionnaire were determined before and after weight reduction. Body mass index and fat mass decreased (P < 0.0001), whereas distance walked increased in both groups after weight reduction (P < 0.001). Although the SF-36 mental component score increased after weight loss in both groups (P < 0.0001), the SF-36 physical component score increased in postmenopausal women only (P < 0.001). Restriction increased (P < 0.0001), whereas disinhibition and susceptibility to hunger decreased after weight reduction (P < 0.001 and P < 0.01, respectively) in both groups. Distance walked and SF-36 physical component score after weight loss were higher in women whose menopause ranged between 5 and 9 years and exceeded 10 years, respectively (P < 0.01). Our study shows that a short-term weight-reducing program combining caloric restriction and physical activity has a favorable impact on women's body composition, physical condition, health-related quality of life, and eating behaviors irrespective of their menopausal status.
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2017-01-01
Earthquake ground‐motion recordings are scarce in the central and eastern United States (CEUS) for large‐magnitude events and at close distances. We use two different simulation approaches, a deterministic physics‐based method and a site‐based stochastic method, to simulate ground motions over a wide range of magnitudes. Drawing on previous results for the modeling of recordings from the 2011 Mw 5.8 Mineral, Virginia, earthquake and using the 2001 Mw 7.6 Bhuj, India, earthquake as a tectonic analog for a large magnitude CEUS event, we are able to calibrate the two simulation methods over this magnitude range. Both models show a good fit to the Mineral and Bhuj observations from 0.1 to 10 Hz. Model parameters are then adjusted to obtain simulations for Mw 6.5, 7.0, and 7.6 events in the CEUS. Our simulations are compared with the 2014 U.S. Geological Survey weighted combination of existing ground‐motion prediction equations in the CEUS. The physics‐based simulations show comparable response spectral amplitudes and a fairly similar attenuation with distance. The site‐based stochastic simulations suggest a slightly faster attenuation of the response spectral amplitudes with distance for larger magnitude events and, as a result, slightly lower amplitudes at distances greater than 200 km. Both models are plausible alternatives and, given the few available data points in the CEUS, can be used to represent the epistemic uncertainty in modeling of postulated CEUS large‐magnitude events.
Multi-instance multi-label distance metric learning for genome-wide protein function prediction.
Xu, Yonghui; Min, Huaqing; Song, Hengjie; Wu, Qingyao
2016-08-01
Multi-instance multi-label (MIML) learning has been proven effective for genome-wide protein function prediction problems, where each training example is associated with not only multiple instances but also multiple class labels. To find an appropriate MIML learning method for genome-wide protein function prediction, many studies in the literature have attempted to optimize objective functions in which dissimilarity between instances is measured using the Euclidean distance. However, in many real applications, the Euclidean distance may fail to capture the intrinsic similarity/dissimilarity in feature space and label space. Unlike these previous approaches, in this paper we propose a multi-instance multi-label distance metric learning framework (MIMLDML) for genome-wide protein function prediction. Specifically, we learn a Mahalanobis distance to preserve and exploit the intrinsic geometric information of both the feature space and the label space for MIML learning. In addition, we deal with sparsely labeled data by weighting the labeled data. Extensive experiments on seven real-world organisms covering the biological three-domain system (i.e., archaea, bacteria, and eukaryote; Woese et al., 1990) show that the MIMLDML algorithm is superior to most state-of-the-art MIML learning algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
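Since the paper's core move is replacing the Euclidean distance with a learned Mahalanobis distance, a minimal sketch of that distance may help; the matrix M below is a hypothetical learned metric for illustration, not one produced by MIMLDML.

```python
# Sketch of a Mahalanobis distance parameterized by a positive semi-definite
# matrix M; it reduces to the ordinary Euclidean distance when M = I.
import numpy as np

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

x = np.array([1.0, 2.0])
y = np.array([2.0, 0.0])
M_euclid = np.eye(2)                              # identity -> Euclidean
M_learned = np.array([[2.0, 0.0], [0.0, 0.5]])    # hypothetical learned metric

d_euc = mahalanobis(x, y, M_euclid)   # sqrt(1 + 4) = sqrt(5)
d_mah = mahalanobis(x, y, M_learned)  # sqrt(2*1 + 0.5*4) = 2.0
```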
Space Use of Bumblebees (Bombus spp.) Revealed by Radio-Tracking
Hagen, Melanie; Wikelski, Martin; Kissling, W. Daniel
2011-01-01
Background Accurate estimates of movement behavior and distances travelled by animals are difficult to obtain, especially for small-bodied insects where transmitter weights have prevented the use of radio-tracking. Methodology/Principal Findings Here, we report the first successful use of micro radio telemetry to track flight distances and space use of bumblebees. Using ground surveys and Cessna overflights in a Central European rural landscape mosaic we obtained maximum flight distances of 2.5 km, 1.9 km and 1.3 km for Bombus terrestris (workers), Bombus ruderatus (worker), and Bombus hortorum (young queens), respectively. Bumblebee individuals used large areas (0.25–43.53 ha) within one or a few days. Habitat analyses of one B. hortorum queen at the landscape scale indicated that gardens within villages were used more often than expected from habitat availability. Detailed movement trajectories of this individual revealed that prominent landscape structures (e.g. trees) and flower patches were repeatedly visited. However, we also observed long (i.e. >45 min) resting periods between flights (B. hortorum) and differences in flower-handling between bumblebees with and without transmitters (B. terrestris) suggesting that the current weight of transmitters (200 mg) may still impose significant energetic costs on the insects. Conclusions/Significance Spatio-temporal movements of bumblebees can now be tracked with telemetry methods. Our measured flight distances exceed many previous estimates of bumblebee foraging ranges and suggest that travelling long distances to food resources may be common. However, even the smallest currently available transmitters still appear to compromise flower handling performance and cause an increase in resting behavior of bees. 
Future reductions of transmitter mass and size could open up new avenues for quantifying landscape-scale space use of insect pollinators and could provide novel insights into the behavior and requirements of bumblebees during critical life stages, e.g. when searching for mates, nest locations or hibernation sites. PMID:21603569
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed for automatically or semi-automatically filling gaps and reconstructing broken contour lines in binary images. After introducing the reconstruction procedure, the key part, the automatic matching and reconnection of end points, is discussed in depth, and several key algorithms and mechanisms are presented and implemented, including multiple incremental backtracing to obtain the weighted average direction angle of end points, a maximum-constraint-angle control mechanism based on multiple gradient ranks, the combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, and bidirectional parabola control. Finally, experimental comparisons on typical samples between the proposed method and another representative method indicate that the former achieves higher accuracy and completeness as well as better stability and applicability.
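The matching criterion, combining a weighted Euclidean distance with a deviation angle, can be sketched roughly as follows; the cost form and the weights w_dist and w_angle are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of the end-point matching idea only (not the paper's code):
# score candidate end-point pairs by a weighted sum of Euclidean distance and
# the deviation between their line directions, then pick the lowest cost.
import math

def match_cost(p, q, theta_p, theta_q, w_dist=1.0, w_angle=0.5):
    """Lower cost = better match between two broken-line end points.
    theta_p / theta_q: direction angles (radians) at each end point."""
    dist = math.hypot(p[0] - q[0], p[1] - q[1])
    # Deviation angle between the two end directions, folded into [0, pi].
    dev = abs((theta_p - theta_q + math.pi) % (2 * math.pi) - math.pi)
    return w_dist * dist + w_angle * dev

# Pick the optimum matching end point for p among candidate (point, angle) pairs.
p, theta_p = (0.0, 0.0), 0.0
candidates = [((1.0, 0.1), 0.05), ((0.5, 2.0), 1.2), ((1.2, 0.0), 3.0)]
best = min(candidates, key=lambda c: match_cost(p, c[0], theta_p, c[1]))
```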
Miklós, István
2003-10-01
As more and more genomes are sequenced, genomic data are rapidly accumulating. Genome-wide mutations are believed to be more nearly neutral than local mutations such as substitutions, insertions, and deletions; therefore, phylogenetic investigations based on inversions, transpositions, and inverted transpositions are less biased by the hypothesis of neutral evolution. Although efficient algorithms exist for obtaining the inversion distance of two signed permutations, there is no reliable algorithm when both inversions and transpositions are considered. Moreover, different types of mutations occur at different rates, and it is not clear how to weight them in a distance-based approach. We introduce a Markov chain Monte Carlo method for genome rearrangement based on a stochastic model of evolution, which can estimate the number of different evolutionary events needed to sort a signed permutation. The performance of the method was tested on simulated data, and the estimated numbers of the different types of mutations were reliable. Human and Drosophila mitochondrial data were also analysed with the new method. The mixing time of the Markov chain is short, both in terms of CPU time and the number of proposals. The source code in C is available on request from the author.
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Williford, Joshua P.; Cota, Martin R.; MacLaren, Judy M.; Dardzinski, Bernard J.; Latour, Lawrence L.; Pham, Dzung L.; Butman, John A.
2016-03-01
Traumatic meningeal injury is a novel imaging marker of traumatic brain injury, which appears as enhancement of the dura on post-contrast T2-weighted FLAIR images, and is likely associated with inflammation of the meninges. Dynamic Contrast Enhanced MRI provides a better discrimination of abnormally perfused regions. A method to properly identify those regions is presented. Images of seventeen patients scanned within 96 hours of head injury with positive traumatic meningeal injury were normalized and aligned. The difference between the pre- and last post-contrast acquisitions was segmented and voxels in the higher class were spatially clustered. Spatial and morphological descriptors were used to identify the regions of enhancement: a) centroid; b) distance to the brain mask from external voxels; c) distance from internal voxels; d) size; e) shape. The method properly identified thirteen regions among all patients. The method failed in one case due to the presence of a large brain lesion that altered the mask boundaries. Most false detections were correctly rejected resulting in a sensitivity and specificity of 92.9% and 93.6%, respectively.
3D active shape models of human brain structures: application to patient-specific mesh generation
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Taylor, Zeike A.
2015-03-01
The use of biomechanics-based numerical simulations has attracted growing interest in recent years for computer-aided diagnosis and treatment planning. With this in mind, a method for automatic mesh generation of brain structures of interest, using statistical models of shape (SSM) and appearance (SAM), for personalised computational modelling is presented. SSMs are constructed as point distribution models (PDMs), while SAMs are trained using intensity profiles sampled from a training set of T1-weighted magnetic resonance images. The brain structures of interest are the cortical surface (cerebrum, cerebellum & brainstem), lateral ventricles, and falx cerebri membrane. Two methods for establishing correspondences across the training set of shapes are investigated and compared (based on SSM quality): the Coherent Point Drift (CPD) point-set registration method and a B-spline mesh-to-mesh registration method. The MNI-305 (Montreal Neurological Institute) average brain atlas is used to generate the template mesh, which is deformed and registered to each training case to establish correspondence over the training set of shapes. The T1-weighted MR images of 18 healthy patients form the training set used to generate the SSM and SAM. Both model training and model fitting are performed over multiple brain structures simultaneously. Compactness and generalisation errors of the B-spline-SSM and CPD-SSM are evaluated and used to quantitatively compare the SSMs; leave-one-out cross validation is used to evaluate SSM quality in terms of these measures. The mesh-based SSM is found to generalise better and to be more compact than the CPD-based SSM. The quality of the best-fit model instance from the trained SSMs on test cases is evaluated using the Hausdorff distance (HD) and mean absolute surface distance (MASD) metrics.
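The two evaluation metrics named at the end, Hausdorff distance and mean absolute surface distance, can be computed on point-sampled surfaces as sketched below; SciPy's directed_hausdorff supplies the one-sided term, and the symmetrisation shown is one common convention.

```python
# Sketch of the HD and MASD metrics on point sets sampled from two surfaces.
import numpy as np
from scipy.spatial.distance import directed_hausdorff, cdist

def hausdorff(A, B):
    """Symmetric Hausdorff distance: the worst nearest-point distance."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

def masd(A, B):
    """Mean absolute surface distance: symmetrised mean nearest-point distance."""
    D = cdist(A, B)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
hd = hausdorff(A, B)   # 1.0: (0,0) is 1 away from its nearest neighbour in B
md = masd(A, B)        # 0.5: averages the 1.0 and 0.0 nearest distances
```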
NASA Technical Reports Server (NTRS)
Levin, Alan D.; Hopkins, Edward J.
1961-01-01
An analysis was made to determine the reduction in payload for a 300 nautical mile orbit resulting from the addition of inert weight, representing recovery gear, to the first-stage booster of a three-stage rocket vehicle. The values of added inert weight investigated ranged from 0 to 18 percent of gross weight at lift-off. The study also included the effects on the payload in orbit and the distance from the launch site at burnout and at impact caused by variation in the vertical rise time before the programmed tilt. The vertical rise times investigated ranged from 16.7 to 100 percent of booster burning time. For a vertical rise of 16.7 percent of booster burning time, it was found that a 50-percent increase in the weight of the empty booster resulted in only a 10-percent reduction of the payload in orbit. For no added booster weight, increasing vertical rise time from 16.7 to 100 percent of booster burning time (so that the spent booster would impact in the launch area) reduced the payload by 37 percent. Increasing the vertical rise time from 16.7 to 50 percent of booster burning time resulted in about a 15-percent reduction in the impact distance, and for vertical rise times greater than 50 percent, the impact distance decreased rapidly.
Total Bregman Divergence and its Applications to Shape Retrieval.
Liu, Meizhu; Vemuri, Baba C; Amari, Shun-Ichi; Nielsen, Frank
2010-01-01
Shape database search is ubiquitous in the world of biometric systems, CAD systems, etc. Shape data in these domains is experiencing explosive growth, and a variety of tasks usually require searching whole shape databases to retrieve the best matches with accuracy and efficiency. In this paper, we present a novel divergence measure between any two given points in [Formula: see text] or two distribution functions. This divergence measures the orthogonal distance between the tangent to the convex function (used in the definition of the divergence) at one of its input arguments and its second argument. This is in contrast to the ordinate distance taken in the usual definition of the Bregman class of divergences [4]. We use this orthogonal distance to redefine the Bregman class of divergences and develop a new theory for estimating the center of a set of vectors as well as of probability distribution functions. The new class of divergences is dubbed the total Bregman divergence (TBD). We present the l1-norm based TBD center, dubbed the t-center, which is then used as the cluster center of a class of shapes. The t-center is a weighted mean whose weights are small for noise and outliers. We present a shape retrieval scheme using TBD and the t-center for representing the classes of shapes from the MPEG-7 database and compare the results with other state-of-the-art methods in the literature.
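As a concrete instance of the idea, the sketch below evaluates the total Bregman divergence for the generator f(x) = ||x||^2, where the ordinary Bregman divergence is ||x - y||^2 and the TBD divides by sqrt(1 + ||grad f(y)||^2); this specific generator is an illustrative choice, not the paper's full framework.

```python
# One concrete instance of the total Bregman divergence (TBD), for the convex
# generator f(x) = ||x||^2 (a sketch; the paper develops the general class).
# Ordinary Bregman: d_f(x, y) = f(x) - f(y) - <x - y, grad f(y)> = ||x - y||^2.
# TBD normalizes by sqrt(1 + ||grad f(y)||^2) with grad f(y) = 2y, which is
# the factor turning the ordinate distance into an orthogonal distance.
import numpy as np

def tbd_sqnorm(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum((x - y) ** 2) / np.sqrt(1.0 + 4.0 * np.sum(y ** 2)))

d1 = tbd_sqnorm([1.0, 0.0], [0.0, 0.0])  # = 1 / sqrt(1) = 1.0
d2 = tbd_sqnorm([0.0, 0.0], [1.0, 0.0])  # = 1 / sqrt(5): note the asymmetry
```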
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, which achieved accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% percent sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than that using the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
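Of the interpolation methods compared above, inverse distance weighting is the simplest to sketch; the power parameter p = 2 below is a common default and an assumption here, not necessarily the study's setting.

```python
# Minimal inverse distance weighted (IDW) interpolation sketch: a query point's
# value is the average of sampled values, weighted by 1 / distance**p.
import numpy as np

def idw(sample_xy, sample_vals, query_xy, p=2.0):
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    if np.any(d == 0):                      # query coincides with a sample
        return sample_vals[np.argmin(d)]
    w = 1.0 / d ** p
    return float(np.sum(w * sample_vals) / np.sum(w))

# Toy depth samples at two stream locations; the midpoint gets the plain mean.
pts = np.array([[0.0, 0.0], [2.0, 0.0]])
depths = np.array([1.0, 3.0])
mid = idw(pts, depths, np.array([1.0, 0.0]))   # equidistant -> 2.0
```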
Testing local anisotropy using the method of smoothed residuals I — methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appleby, Stephen; Shafieloo, Arman, E-mail: stephen.appleby@apctp.org, E-mail: arman@apctp.org
2014-03-01
We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (supernovae). In this short note we focus on the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.
Vegetated land cover near residence is associated with ...
Abstract Background: Greater exposure to urban green spaces has been linked to reduced risks of depression, cardiovascular disease, diabetes and premature death. Alleviation of chronic stress is a hypothesized pathway to improved health. Previous studies linked chronic stress with biomarker-based measures of physiological dysregulation known as allostatic load. This study aimed to assess the relationship between vegetated land cover near residences and allostatic load. Methods: This cross-sectional population-based study involved 204 adult residents of the Durham-Chapel Hill, North Carolina metropolitan area. Exposure was quantified using high-resolution metrics of trees and herbaceous vegetation within 500 m of each residence derived from the U.S. Environmental Protection Agency’s EnviroAtlas land cover dataset. Eighteen biomarkers of immune, neuroendocrine, and metabolic functions were measured in serum or saliva samples. Allostatic load was defined as a sum of biomarker values dichotomized at specific percentiles of sample distribution. Regression analysis was conducted using generalized additive models with two-dimensional spline smoothing function of geographic coordinates, weighted measures of vegetated land cover allowing decay of effects with distance, and geographic and demographic covariates. Results: An inter-quartile range increase in distance-weighted vegetated land cover was associated with 37% (46%; 27%) reduced allostatic load; significantly
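The distance-weighted vegetated land cover measure, with effects decaying away from the residence, can be sketched as below; the exponential decay form and the 250 m decay constant are illustrative assumptions, not the study's fitted model.

```python
# Hedged sketch of a distance-weighted exposure metric: vegetated land cover
# near a residence counts more than cover farther away, up to a 500 m radius.
import math

def weighted_cover(cells, decay_m=250.0, radius_m=500.0):
    """cells: list of (distance_m, vegetated_fraction) for land-cover cells."""
    num = den = 0.0
    for dist, frac in cells:
        if dist > radius_m:
            continue                      # outside the exposure buffer
        w = math.exp(-dist / decay_m)     # assumed exponential distance decay
        num += w * frac
        den += w
    return num / den if den else 0.0

# Fully vegetated nearby, bare farther out -> exposure above the plain mean.
cells = [(50.0, 1.0), (450.0, 0.0)]
exposure = weighted_cover(cells)
```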
NASA Astrophysics Data System (ADS)
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
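The weighting step at the heart of the method, similarity computed from the Euclidean distance between features and used as an averaging weight, can be sketched as follows; the two-dimensional features stand in for the paper's NMI descriptors, and the smoothing parameter h is a hypothetical choice.

```python
# Sketch of non-local means restoration: weights decay with the squared
# Euclidean distance between patch features, and the restored pixel is the
# weighted average of pixel values over the search window.
import numpy as np

def nlm_restore(center_feat, window_feats, window_vals, h=0.5):
    d2 = np.sum((window_feats - center_feat) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)
    return float(np.sum(w * window_vals) / np.sum(w))

center = np.array([0.2, 0.8])
feats = np.array([[0.2, 0.8], [0.21, 0.79], [0.9, 0.1]])  # last is dissimilar
vals = np.array([10.0, 12.0, 100.0])
restored = nlm_restore(center, feats, vals)  # dominated by the similar pixels
```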
Associative memory - An optimum binary neuron representation
NASA Technical Reports Server (NTRS)
Awwal, A. A.; Karim, M. A.; Liu, H. K.
1989-01-01
The convergence mechanism of vectors in Hopfield's neural network is studied in terms of both weights (i.e., inner products) and Hamming distance. It is shown that Hamming distance should not always be used in determining the convergence of vectors. Instead, weights (which in turn depend on the neuron representation) are found to play a more dominant role in the convergence mechanism. Consequently, a new binary neuron representation for associative memory is proposed. With the new neuron representation, the associative memory responds unambiguously to partial input when retrieving the stored information.
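The claim that inner products with stored patterns, rather than Hamming distance alone, drive convergence can be illustrated with a minimal Hopfield recall; the bipolar coding and Hebbian weights below are standard textbook choices, not the paper's proposed representation.

```python
# Minimal Hopfield recall sketch: the update is driven by the weight matrix
# (sums of inner products with stored patterns), here with bipolar {-1,+1}
# coding and Hebbian weights, zeroed self-connections.
import numpy as np

patterns = np.array([[1, 1, 1, -1, -1],
                     [-1, -1, 1, 1, 1]])
W = patterns.T @ patterns          # Hebbian outer-product weights
np.fill_diagonal(W, 0)             # no self-connections

def recall(x, steps=5):
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)   # synchronous sign update
    return x

probe = np.array([1, 1, -1, -1, -1])      # pattern 0 with one bit flipped
out = recall(probe)                        # converges back to pattern 0
```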
Francis, Alexander L.; Kaganovich, Natalya; Driscoll-Huber, Courtney
2008-01-01
In English, voiced and voiceless syllable-initial stop consonants differ in both fundamental frequency at the onset of voicing (onset F0) and voice onset time (VOT). Although both correlates, alone, can cue the voicing contrast, listeners weight VOT more heavily when both are available. Such differential weighting may arise from differences in the perceptual distance between voicing categories along the VOT versus onset F0 dimensions, or it may arise from a bias to pay more attention to VOT than to onset F0. The present experiment examines listeners’ use of these two cues when classifying stimuli in which perceptual distance was artificially equated along the two dimensions. Listeners were also trained to categorize stimuli based on one cue at the expense of another. Equating perceptual distance eliminated the expected bias toward VOT before training, but successfully learning to base decisions more on VOT and less on onset F0 was easier than vice versa. Perceptual distance along both dimensions increased for both groups after training, but only VOT-trained listeners showed a decrease in Garner interference. Results lend qualified support to an attentional model of phonetic learning in which learning involves strategic redeployment of selective attention across integral acoustic cues. PMID:18681610
NASA Astrophysics Data System (ADS)
Hong, H.; Zhu, A. X.
2017-12-01
Climate change is a widespread and increasingly serious phenomenon. The intensification of rainfall extremes with climate change is of key importance to society, as it can have a large impact through landslides. This paper presents new GIS-based ensemble data mining techniques, combining weight-of-evidence, logistic model tree, and linear and quadratic discriminant models, for landslide spatial modelling. The research was applied in Anfu County, a landslide-prone area in Jiangxi Province, China. Based on a literature review and study of the area, we selected the landslide influencing factors, and their maps were digitized in a GIS environment. These factors are altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), stream power index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index, and land use. Based on historical records of individual landslide events, interpretation of aerial photographs, and field surveys supported by the Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, training and validation (70/30), by random selection. Pearson's correlation was used to evaluate the relationships between the landslides and the influencing factors. In the next step, the three data mining techniques, weight-of-evidence, logistic model tree, and linear and quadratic discriminant, were used for landslide spatial modelling and zonation. Finally, the landslide susceptibility maps produced by these models were evaluated with ROC curves. The results showed that the area under the curve (AUC) of all of the models was > 0.80.
The highest AUC value was obtained for the linear and quadratic discriminant model (0.864), followed by the logistic model tree (0.832) and weight-of-evidence (0.819). The landslide susceptibility maps can be applied to land use planning and management in the Anfu area.
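The AUC used for evaluation has a simple rank interpretation that can be sketched with toy scores (the values below are illustrative, not the study's data): it is the probability that a randomly chosen landslide cell is scored higher than a randomly chosen stable cell.

```python
# Rank-based AUC sketch: fraction of (landslide, stable) pairs in which the
# landslide cell receives the higher susceptibility score (ties count half).
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

landslide = [0.9, 0.8, 0.7, 0.4]   # toy susceptibility scores at slide cells
stable = [0.6, 0.3, 0.2, 0.1]      # toy scores at non-slide cells
score = auc(landslide, stable)     # 15 of 16 pairs ranked correctly
```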
A similarity retrieval approach for weighted track and ambient field of tropical cyclones
NASA Astrophysics Data System (ADS)
Li, Ying; Xu, Luan; Hu, Bo; Li, Yuejun
2018-03-01
Retrieving historical tropical cyclones (TCs) whose positions and hazard intensities are similar to those of a target TC is an important tool in TC track forecasting and TC disaster assessment. A new similarity retrieval scheme is proposed based on historical TC track data and ambient field data, including ERA-Interim reanalysis and GFS and EC-fine forecasts. It accounts for both TC track similarity and ambient field similarity, and the optimal weight combination is then explored. Results show that both the distance and direction errors of 24-hour TC track forecasts follow an approximately U-shaped distribution: they tend to be large when the weight assigned to track similarity is close to 0 or 1.0, and relatively small when the track similarity weight is between 0.2 and 0.7 for distance error and between 0.3 and 0.6 for direction error.
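The weight combination being searched can be sketched as a single blending parameter w between track similarity and ambient-field similarity; the blend form and the toy U-shaped error function below are illustrative assumptions echoing the reported behaviour, not the paper's data.

```python
# Sketch: a single weight w blends the two similarity terms, and the best w is
# found by scanning candidates against a forecast-error criterion.
def combined_similarity(track_sim, field_sim, w):
    """w = weight on track similarity; 1 - w goes to the ambient field."""
    return w * track_sim + (1.0 - w) * field_sim

def toy_error(w):
    """Toy U-shaped forecast error: large near w = 0 or 1, smallest between."""
    return (w - 0.4) ** 2 + 0.1

ws = [i / 10.0 for i in range(11)]       # candidate weights 0.0 .. 1.0
best_w = min(ws, key=toy_error)          # lands in the reported sweet spot
```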
NASA Astrophysics Data System (ADS)
Saslow, Wayne M.
2014-04-01
Three common approaches to F = ma (with F and a understood as vectors) are: (1) as an exactly true definition of force F in terms of measured inertial mass m and measured acceleration a; (2) as an exactly true axiom relating measured values of a, F, and m; and (3) as an imperfect but accurately true physical law relating measured a to measured F, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a and F, where a is normally specified using distance and time as standard units, and F from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate), namely that balance-scale weight W is proportional to m, and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force, the newton, a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time, the second, a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
Lu, Jian-Yu; Cheng, Jiqi; Wang, Jing
2006-10-01
A general-purpose high frame rate (HFR) medical imaging system has been developed. This system has 128 independent linear transmitters, each of which is capable of producing an arbitrary broadband (about 0.05-10 MHz) waveform of up to +/- 144 V peak voltage on a 75-ohm resistive load using a 12-bit/40-MHz digital-to-analog converter. The system also has 128 independent, broadband (about 0.25-10 MHz), and time-variable-gain receiver channels, each of which has a 12-bit/40-MHz analog-to-digital converter and up to 512 MB of memory. The system is controlled by a personal computer (PC), and radio frequency echo data of each channel are transferred to the same PC via a standard USB 2.0 port for image reconstructions. Using the HFR imaging system, we have developed a new limited-diffraction array beam imaging method with square-wave aperture voltage weightings. With this method, in principle, only one or two transmitters are required to excite a fully populated two-dimensional (2-D) array transducer to achieve an equivalent dynamic focusing in both transmission and reception to reconstruct a high-quality three-dimensional image without the need of the time delays of traditional beam focusing and steering, potentially simplifying the transmitter subsystem of an imager. To validate the method, for simplicity, 2-D imaging experiments were performed using the system. In the in vitro experiment, a custom-made, 128-element, 0.32-mm pitch, 3.5-MHz center frequency linear array transducer with about 50% fractional bandwidth was used to reconstruct images of an ATS 539 tissue-mimicking phantom at an axial distance of 130 mm with a field of view of more than 90 degrees. In the in vivo experiment of a human heart, images with a field of view of more than 90 degrees at 120-mm axial distance were obtained with a 128-element, 2.5-MHz center frequency, 0.15-mm pitch Acuson V2 phased array. To ensure that the system was operated under the limits set by the U.S. 
Food and Drug Administration, the mechanical index, thermal index, and acoustic output were measured. Results show that higher-quality images can be reconstructed with the square-wave aperture weighting method due to an increased penetration depth as compared to the exact weighting method developed previously, and a frame rate of 486 per second was achieved at a pulse repetition frequency of about 5348 Hz for the human heart.
PATL: A RFID Tag Localization based on Phased Array Antenna.
Qiu, Lanxin; Liang, Xiaoxuan; Huang, Zhangqin
2017-03-15
In RFID systems, precisely detecting tag positions is an important and challenging research topic. In this paper, we propose a range-free 2D tag localization method based on a phased array antenna, called PATL. The method takes advantage of the adjustable radiation angle of the phased array antenna to scan the surveillance region in turn. Using statistics of the number of tags read in different antenna beam directions, a weighting algorithm calculates the position of the tag. The method can locate multiple targets in real time without any reference tags or additional readers. Additionally, we present an optimized weighting method based on RSSI to increase the localization accuracy. We evaluated our method with a Commercial Off-the-Shelf (COTS) UHF RFID reader integrated with a phased array antenna. Experimental results from an indoor office environment demonstrate that the average distance error of PATL is about 21 cm and that the optimized approach achieves an accuracy of 13 cm. This novel 2D localization scheme is a simple yet promising solution, especially applicable to smart shelf management in storage or retail areas.
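The beam-direction weighting idea can be sketched as an RSSI-weighted centroid over the beam centers in which the tag was read; the beam coordinates and the dBm-to-linear-power weighting below are illustrative assumptions, not the PATL implementation.

```python
# Sketch of the weighting idea only: estimate a tag's 2D position as the
# RSSI-weighted centroid of the beam directions that detected it.
def weighted_centroid(beams):
    """beams: list of ((x, y) beam-center, rssi_dBm) observations."""
    weights = [10 ** (rssi / 10.0) for _, rssi in beams]  # dBm -> linear power
    total = sum(weights)
    x = sum(w * b[0] for (b, _), w in zip(beams, weights)) / total
    y = sum(w * b[1] for (b, _), w in zip(beams, weights)) / total
    return x, y

# Two strong reads pull the estimate between them; a weak read barely matters.
beams = [((0.0, 0.0), -40.0), ((1.0, 0.0), -40.0), ((0.0, 1.0), -60.0)]
est = weighted_centroid(beams)
```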
Sharkey, Joseph R; Horel, Scott; Han, Daikwon; Huber, John C
2009-02-16
To determine the extent to which neighborhood needs (socioeconomic deprivation and vehicle availability) are associated with two criteria of food environment access: 1) distance to the nearest food store and fast food restaurant and 2) coverage (number) of food stores and fast food restaurants within a specified network distance of neighborhood areas of colonias, using ground-truthed methods. Data included locational points for 315 food stores and 204 fast food restaurants, and neighborhood characteristics from the 2000 U.S. Census for the 197 census block groups (CBGs) in the study area. Neighborhood deprivation and vehicle availability were calculated for each CBG. Minimum distance was determined by calculating network distance from the population-weighted center of each CBG to the nearest supercenter, supermarket, grocery, convenience store, dollar store, mass merchandiser, and fast food restaurant. Coverage was determined by calculating the number of each type of food store and fast food restaurant within a network distance of 1, 3, and 5 miles of each population-weighted CBG center. Neighborhood need and access were examined using Spearman rank correlations, spatial autocorrelation, and multivariate regression models that adjusted for population density. Overall, neighborhoods had the best access to convenience stores, fast food restaurants, and dollar stores. After adjusting for population density, residents in neighborhoods with greater deprivation had to travel a significantly greater distance to the nearest supercenter or supermarket, grocery store, mass merchandiser, dollar store, and pharmacy for food items. The results were quite different for the association of need with the number of stores within 1 mile: deprivation was only associated with fast food restaurants, and greater deprivation was associated with fewer fast food restaurants within 1 mile.
CBGs with less vehicle availability had slightly better access to more supercenters or supermarkets, grocery stores, or fast food restaurants. Increasing deprivation was associated with decreasing numbers of grocery stores, mass merchandisers, dollar stores, and fast food restaurants within 3 miles. It is important to understand not only the distance people must travel to the nearest store to make a purchase, but also how many shopping opportunities they have to compare price, quality, and selection. Future research should examine how spatial access to the food environment influences the utilization of food stores and fast food restaurants, and the strategies used by low-income families to obtain food for the household.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tardiff, Mark F.; Runkle, Robert C.; Anderson, K. K.
2006-01-23
The goal of primary radiation monitoring in support of routine screening and emergency response is to detect characteristics in vehicle radiation signatures that indicate the presence of potential threats. Two conceptual approaches to analyzing gamma-ray spectra for threat detection are isotope identification and anomaly detection. While isotope identification is the time-honored method, an emerging technique is anomaly detection, which uses benign vehicle gamma-ray signatures to define an expectation of the radiation signature for vehicles that do not pose a threat. Newly acquired spectra are then compared to this expectation using statistical criteria that reflect acceptable false alarm rates and probabilities of detection. The gamma-ray spectra analyzed here were collected at a U.S. land Port of Entry (POE) using a NaI-based radiation portal monitor (RPM). The raw data were analyzed to develop a benign vehicle expectation by decimating the original pulse-height channels to 35 energy bins, extracting composite variables via principal components analysis (PCA), and estimating statistically weighted distances from the mean vehicle spectrum with the Mahalanobis distance (MD) metric. This paper reviews the methods used to establish the anomaly identification criteria and presents a systematic analysis of the response of the combined PCA and MD algorithm to modeled mono-energetic gamma-ray sources.
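The PCA-plus-MD pipeline summarized above can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' implementation: the SVD-based PCA, the number of retained components, and the function names are assumptions.

```python
import numpy as np

def mahalanobis_scores(benign, new, n_components=3):
    """Score spectra by Mahalanobis distance in a PCA subspace.

    benign: (n_samples, n_bins) matrix of benign vehicle spectra
    new:    (m, n_bins) newly acquired spectra to score
    Returns the Mahalanobis distance of each new spectrum from the
    benign mean, computed in the leading principal-component space.
    """
    mean = benign.mean(axis=0)
    centered = benign - mean
    # Principal components via SVD of the centered benign data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = vt[:n_components]                 # (k, n_bins)
    scores = centered @ pcs.T               # benign PC scores (n, k)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    d = (new - mean) @ pcs.T                # new PC scores (m, k)
    # d_i^T C^{-1} d_i for each new spectrum i.
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
```

Spectra close to the benign mean score near zero; spectra far from the benign population along the retained components score high and would be flagged as anomalous against a chosen threshold.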
LaManna, Joseph A; Mangan, Scott A; Alonso, Alfonso; Bourg, Norman A; Brockelman, Warren Y; Bunyavejchewin, Sarayudh; Chang, Li-Wan; Chiang, Jyh-Min; Chuyong, George B; Clay, Keith; Cordell, Susan; Davies, Stuart J; Furniss, Tucker J; Giardina, Christian P; Gunatilleke, I A U Nimal; Gunatilleke, C V Savitri; He, Fangliang; Howe, Robert W; Hubbell, Stephen P; Hsieh, Chang-Fu; Inman-Narahari, Faith M; Janík, David; Johnson, Daniel J; Kenfack, David; Korte, Lisa; Král, Kamil; Larson, Andrew J; Lutz, James A; McMahon, Sean M; McShea, William J; Memiaghe, Hervé R; Nathalang, Anuttara; Novotny, Vojtech; Ong, Perry S; Orwig, David A; Ostertag, Rebecca; Parker, Geoffrey G; Phillips, Richard P; Sack, Lawren; Sun, I-Fang; Tello, J Sebastián; Thomas, Duncan W; Turner, Benjamin L; Vela Díaz, Dilys M; Vrška, Tomáš; Weiblen, George D; Wolf, Amy; Yap, Sandra; Myers, Jonathan A
2018-05-25
Chisholm and Fung claim that our method of estimating conspecific negative density dependence (CNDD) in recruitment is systematically biased, and present an alternative method that shows no latitudinal pattern in CNDD. We demonstrate that their approach produces strongly biased estimates of CNDD, explaining why they do not detect a latitudinal pattern. We also address their methodological concerns using an alternative distance-weighted approach, which supports our original findings of a latitudinal gradient in CNDD and a latitudinal shift in the relationship between CNDD and species abundance. Copyright © 2018, American Association for the Advancement of Science.
Estimating monthly temperature using point based interpolation techniques
NASA Astrophysics Data System (ADS)
Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi
2013-04-01
This paper discusses the use of point-based interpolation to estimate the temperature at locations without meteorological stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for estimating the temperature for the rest of the months.
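IDW itself is a standard estimator: the value at an unsampled point is the distance-weighted average of the station values. A minimal pure-Python sketch (the power parameter of 2 and planar coordinates are the usual assumptions, not details from this paper):

```python
def idw(known, values, x, y, power=2):
    """Inverse Distance Weighted estimate at (x, y).

    known:  list of (xi, yi) station coordinates
    values: observed values (e.g. temperatures) at those stations
    power:  distance exponent (2 is the common choice)
    """
    weights, num = [], 0.0
    for (xi, yi), v in zip(known, values):
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v            # query point coincides with a station
        w = d2 ** (-power / 2)  # 1 / distance**power
        weights.append(w)
        num += w * v
    return num / sum(weights)
```

A point midway between two stations receives the mean of their values; points nearer a station are pulled toward its value.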
A comparison of different interpolation methods for wind data in Central Asia
NASA Astrophysics Data System (ADS)
Reinhardt, Katja; Samimi, Cyrus
2017-04-01
For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high-resolution data are not available. This is particularly true for many regions in Central Asia, where the density of climatological stations is often sparse. Given this insufficient data basis, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data basis for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here compares different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method that can equally be applied to all pressure levels, or whether different interpolation methods have to be applied at each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa, and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand pure interpolation methods, such as inverse distance weighting and ordinary kriging; on the other hand machine learning algorithms, generalized additive models, and regression kriging, considering additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machines, neural networks, and ordinary kriging.
Inverse distance weighting showed the worst results.
Kinematic Distances: A Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.
2018-03-01
Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 {kpc}) and 17% (0.42 {kpc}), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 {kpc}. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
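The core idea of a Monte Carlo kinematic distance can be sketched by resampling the measured LSR velocity and inverting a rotation curve for each draw. The sketch below is a simplification, not the published Method C prescription: it assumes a flat rotation curve, fixed example values for R0 and Θ0, Gaussian velocity errors only, and always takes the near kinematic distance.

```python
import math, random

def mc_kinematic_distance(glong_deg, vlsr, vlsr_err,
                          r0=8.34, theta0=240.0, n=10000, seed=42):
    """Monte Carlo near kinematic distance for a flat rotation curve.

    glong_deg: Galactic longitude (deg); vlsr: LSR velocity (km/s).
    Returns (median, 16th percentile, 84th percentile) distance in kpc.
    """
    rng = random.Random(seed)
    l = math.radians(glong_deg)
    dists = []
    for _ in range(n):
        v = rng.gauss(vlsr, vlsr_err)
        # Flat rotation curve: v = theta0 * (r0/R - 1) * sin(l)
        #   =>  R = r0 * theta0 / (v / sin(l) + theta0)
        r = r0 * theta0 / (v / math.sin(l) + theta0)
        # Line-of-sight distance: R^2 = r0^2 + d^2 - 2 r0 d cos(l)
        disc = r * r - (r0 * math.sin(l)) ** 2
        if disc < 0:
            continue
        d_near = r0 * math.cos(l) - math.sqrt(disc)
        if d_near > 0:
            dists.append(d_near)
    dists.sort()
    m = len(dists)
    return dists[m // 2], dists[int(0.16 * m)], dists[int(0.84 * m)]
```

The spread of the resulting distance distribution directly yields an asymmetric uncertainty, which is the practical advantage of the Monte Carlo approach over propagating a single velocity error analytically.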
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T.; Thompson, Paul D.
2006-01-06
Objective: To determine if exercise reduces body weight and to examine the dose-response relationships between changes in exercise and changes in total and regional adiposity. Methods and Results: Questionnaires on weekly running distance and adiposity from a large prospective study of 3,973 men and 1,444 women who quit running (detraining), 270 men and 146 women who started running (training), and 420 men and 153 women who remained sedentary during 7.4 years of follow-up. There were significant inverse relationships between change in the amount of vigorous exercise (km/wk run) and changes in weight and BMI in men (slope ± SE: -0.039 ± 0.005 kg and -0.012 ± 0.002 kg/m² per km/wk, respectively) and older women (-0.060 ± 0.018 kg and -0.022 ± 0.007 kg/m² per km/wk) who quit running, and in initially sedentary men (-0.098 ± 0.017 kg and -0.032 ± 0.005 kg/m² per km/wk) and women (-0.062 ± 0.023 kg and -0.021 ± 0.008 kg/m² per km/wk) who started running. Changes in waist circumference were also inversely related to changes in running distance in men who quit (-0.026 ± 0.005 cm per km/wk) or started running (-0.078 ± 0.017 cm per km/wk). Conclusions: The initiation and cessation of vigorous exercise decrease and increase body weight and intra-abdominal fat, respectively, and these changes are proportional to the change in exercise dose.
The softest sound levels of the human voice in normal subjects.
Šrámková, Hana; Granqvist, Svante; Herbst, Christian T; Švec, Jan G
2015-01-01
Accurate measurement of the softest sound levels of phonation presents technical and methodological challenges. This study aimed at (1) reliably obtaining normative data on sustained softest sound levels for the vowel [a:] at comfortable pitch; (2) comparing the results for different frequency and time weighting methods; and (3) refining the Union of European Phoniatricians' recommendation on allowed background noise levels for scientific and equipment manufacturers' purposes. Eighty healthy untrained participants (40 females, 40 males) were investigated in quiet rooms using a head-mounted microphone and a sound level meter at 30 cm distance. The one-second-equivalent sound levels were more stable and more representative for evaluating the softest sustained phonations than the fast-time-weighted levels. At 30 cm, these levels were in the range of 48-61 dB(C)/41-53 dB(A) for females and 49-64 dB(C)/35-53 dB(A) for males (5% to 95% quantile range). These ranges may serve as reference data in evaluating vocal normality. To reach a signal-to-noise ratio of at least 10 dB for more than 95% of the normal population, the background noise should be below 25 dB(A) and 38 dB(C), respectively, for softest-phonation measurements at 30 cm distance. For the A-weighting, this is 15 dB lower than the previously recommended value.
Willis, Erik A; Szabo-Reed, Amanda N; Ptomey, Lauren T; Steger, Felicia L; Honas, Jeffery J; Al-Hihi, Eyad M; Lee, Robert; Vansaghi, Lisa; Washburn, Richard A; Donnelly, Joseph E
2016-03-01
Management of obesity in the context of the primary care physician visit is of limited efficacy in part because of limited ability to engage participants in sustained behavior change between physician visits. Therefore, healthcare systems must find methods to address obesity that reach beyond the walls of clinics and hospitals and address the issues of lifestyle modification in a cost-conscious way. The dramatic increase in technology and online social networks may present healthcare providers with innovative ways to deliver weight management programs that could have an impact on health care at the population level. A randomized study will be conducted on 70 obese adults (BMI 30.0-45.0 kg/m²) to determine if 6-month weight loss is equivalent between weight management interventions utilizing behavioral strategies by either a conference call or social media approach. The primary outcome, body weight, will be assessed at baseline and 6 months. Secondary outcomes including waist circumference, energy and macronutrient intake, and physical activity will be assessed on the same schedule. In addition, a cost analysis and process evaluation will be completed. Copyright © 2016 Elsevier Inc. All rights reserved.
Hanney, William J.; Kolber, Morey J.; Davies, George J.; Riemann, Bryan
2011-01-01
Introduction: Understanding the relationships between performance tests and sport activity is important to the rehabilitation specialist. The purpose of this study was twofold: 1) to identify whether relationships exist between tests of upper body strength and power (Single Arm Seated Shot Put, Timed Push-Up, Timed Modified Pull-Up, and the Davies Closed Kinetic Chain Upper Extremity Stability Test) and the softball throw for distance, and 2) to determine which variable or group of variables best predicts performance of a sport-specific task (the softball throw for distance). Methods: One hundred eighty subjects (111 females and 69 males, aged 18-45 years) performed the 5 upper extremity tests. The Pearson product moment correlation and a stepwise regression were used to determine whether relationships existed between performance on the tests and which upper extremity test result best explained performance on the softball throw for distance. Results: There were significant correlations (r = .33 to r = .70, p = 0.001) between performance on all of the tests. The modified pull-up test was the best predictor of performance on the softball throw for distance (R² = 0.487), explaining 48.7% of the variation in performance. When weight, height, and age were added to the regression equation, the R² values increased to 0.645, 0.662, and 0.675, respectively. Conclusion: The results of this study indicate that several upper extremity tests demonstrate significant relationships with one another and with the softball throw for distance. The modified pull-up test was the best predictor of performance on the softball throw for distance. PMID:21712942
Geographies of an Online Social Network.
Lengyel, Balázs; Varga, Attila; Ságvári, Bence; Jakobi, Ákos; Kertész, János
2015-01-01
How is online social media activity structured in geographical space? Recent studies have shown that, in spite of earlier visions about the "death of distance", physical proximity is still a major factor in social tie formation and maintenance in virtual social networks. Yet it is unclear what the characteristics of this distance dependence are in online social networks. To explore this issue, the complete network of the former major Hungarian online social network is analyzed. We find that the distance dependence is weaker for online social network ties than what was found earlier for phone communication networks. For further analysis we introduced a coarser granularity: we identified settlements with the nodes of a network and assigned two kinds of weights to the links between them. When the weights are proportional to the number of contacts, we observed weakly formed but spatially based modules that resemble the borders of macro-regions, the highest level of regional administration in the country. If the weights are defined relative to an uncorrelated null model, the next level of administrative regions, the counties, is reflected.
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar (SAR) is used increasingly widely in remote sensing because of its day-and-night, all-weather operation, and feature extraction from high-resolution SAR images has become a topic of great interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical and structural texture features are extracted by the classical gray-level co-occurrence matrix method and the variogram-function method, respectively, with direction information taken into account. Next, feature weights are calculated from the Bhattacharyya distance. All features are then fused by weighting. Finally, the fused image is classified with the K-means method and the built-up areas are extracted after a post-classification process. The proposed method was tested on domestic airborne P-band polarimetric SAR images; for comparison, two groups of experiments based on statistical texture alone and structural texture alone were also carried out. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area the detection rate exceeds 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. The results for the study area show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
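The Bhattacharyya-distance weighting step can be illustrated with a small sketch. Assuming each texture feature is summarized by Gaussian statistics for the two classes (an assumption made here for illustration; the paper does not specify the distributional form), the per-feature distances can be normalized into fusion weights:

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2))))

def feature_weights(class_stats):
    """Normalize per-feature Bhattacharyya distances into fusion weights.

    class_stats: list of (mu1, var1, mu2, var2) tuples, one per feature,
    where class 1 / class 2 stand for built-up / non-built-up samples
    (the labeling is an assumption for illustration).
    """
    d = [bhattacharyya_gauss(*s) for s in class_stats]
    total = sum(d)
    return [x / total for x in d]
```

A feature whose class distributions overlap completely gets zero weight, while a feature that separates the classes well dominates the fusion.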
Relationships of pediatric anthropometrics for CT protocol selection.
Phillips, Grace S; Stanescu, Arta-Luana; Alessio, Adam M
2014-07-01
Determining the optimal CT technique to minimize patient radiation exposure while maintaining diagnostic utility requires patient-specific protocols that are based on patient characteristics. This work develops relationships between different anthropometrics and CT image noise to determine appropriate protocol classification schemes. We measured the image noise in 387 CT examinations of pediatric patients (222 boys, 165 girls) of the chest, abdomen, and pelvis and generated mathematical relationships between image noise and patient lateral and anteroposterior dimensions, age, and weight. At the chest level, lateral distance (ld) across the body is strongly correlated with weight (ld = 0.23 × weight + 16.77; R² = 0.93) and is less well correlated with age (ld = 1.10 × age + 17.13; R² = 0.84). Similar trends were found for anteroposterior dimensions and at the abdomen level. Across all studies, when acquisition-specific parameters are factored out of the noise, the log of image noise was highly correlated with lateral distance (R² = 0.72) and weight (R² = 0.72) and was less correlated with age (R² = 0.62). Following first-order relationships of image noise and scanner technique, plots were formed to show techniques that could achieve matched noise across the pediatric population. Patient lateral distance and weight are essentially equally effective metrics on which to base maximum technique settings for pediatric patient-specific protocols. These metrics can also be used to help categorize appropriate reference levels for CT technique and size-specific dose estimates across the pediatric population.
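As a worked use of the chest-level regressions reported above, the fits can be applied directly; the coefficients come from the abstract, while the function name and argument conventions are only illustrative:

```python
def lateral_distance_cm(weight_kg=None, age_yr=None):
    """Predict chest-level lateral body dimension (cm) from the reported
    first-order fits:
        ld = 0.23 * weight + 16.77   (R^2 = 0.93)
        ld = 1.10 * age    + 17.13   (R^2 = 0.84)
    Weight is preferred when available, since its fit is stronger.
    """
    if weight_kg is not None:
        return 0.23 * weight_kg + 16.77
    if age_yr is not None:
        return 1.10 * age_yr + 17.13
    raise ValueError("need weight_kg or age_yr")
```

For a 30 kg child this predicts a lateral distance of about 23.7 cm, which could then index into a noise-matched technique chart.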
14 CFR 23.75 - Landing distance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Landing distance. 23.75 Section 23.75... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Flight Performance § 23.75 Landing... the landing surface must be determined, for standard temperatures at each weight and altitude within...
Skidder load capacity and fuel consumption HP-41C program
Ross A. Phillips
1983-01-01
This program gives the log weight that the skidder can move and gives fuel consumption either in liters or gallons per turn. Slope of the skid trail, skidder weight, and skid distance must be entered into the program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandersall, K S; Tarver, C M; Garcia, F
Shock initiation experiments on the HMX-based explosives LX-10 (95% HMX, 5% Viton by weight) and LX-07 (90% HMX, 10% Viton by weight) were performed to obtain in-situ pressure gauge data, run-distance-to-detonation thresholds, and Ignition and Growth modeling parameters. A 101 mm diameter propellant-driven gas gun was utilized to initiate the explosive samples, with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop-plot for these experiments and prior experiments on another HMX-based explosive, LX-04 (85% HMX, 15% Viton by weight), will be shown, discussed, and compared as a function of the binder content. This parameter set will provide additional information to ensure accurate code predictions for safety scenarios involving HMX explosives with different binder contents.
Direction of Translation and Size of Bacteriophage φX174 Cistrons
Benbow, Robert M.; Mayol, Robert F.; Picchi, Joanna C.; Sinsheimer, Robert L.
1972-01-01
Translation of the bacteriophage φX174 genome follows cistron order D-E-F-G-H-A-B-C. To establish this, the position of a nonsense mutation on the genetic map was compared with the physical size (molecular weight) of the appropriate protein fragment generated in nonpermissive cells. Distances on the φX174 genetic map and distances on a physical map constructed from the molecular weights of φX174 proteins and protein fragments are proportional over most of the genome, with the exception of the high recombination region within cistron A. PMID:16789133
Muscle changes detected with diffusion-tensor imaging after long-distance running.
Froeling, Martijn; Oudeman, Jos; Strijkers, Gustav J; Maas, Mario; Drost, Maarten R; Nicolay, Klaas; Nederveen, Aart J
2015-02-01
To develop a protocol for diffusion-tensor imaging (DTI) of the complete upper legs and to demonstrate feasibility of detecting subclinical sports-related muscle changes in athletes after strenuous exercise, which remain undetected by conventional T2-weighted magnetic resonance (MR) imaging with fat suppression. The research was approved by the institutional ethics committee review board, and the volunteers provided written consent before the study. Five male amateur long-distance runners underwent an MR examination (DTI, T1-weighted MR imaging, and T2-weighted MR imaging with fat suppression) of both upper legs 1 week before, 2 days after, and 3 weeks after they participated in a marathon. The tensor eigenvalues (λ1, λ2, and λ3), the mean diffusivity, and the fractional anisotropy (FA) were derived from the DTI data. Data per muscle from the three time points were compared by using a two-way mixed-design analysis of variance with a Bonferroni post hoc test. The DTI protocol allowed imaging of both complete upper legs with adequate signal-to-noise ratio and within a 20-minute imaging time. After the marathon, T2-weighted MR imaging revealed grade 1 muscle strains in nine of the 180 investigated muscles. The three eigenvalues, mean diffusivity, and FA were significantly increased (P < .05) in the biceps femoris muscle 2 days after running. Mean diffusivity and eigenvalues λ1 and λ2 were significantly (P < .05) increased in the semitendinosus and gracilis muscles 2 days after the marathon. A feasible method for DTI measurements of the upper legs was developed that fully included frequently injured muscles, such as the hamstrings, in one single imaging session. This study also revealed changes in DTI parameters over time that were not revealed by qualitative T2-weighted MR imaging with fat suppression. © RSNA, 2014.
Method of manufacturing fibrous hemostatic bandages
Larsen, Gustavo; Spretz, Ruben; Velarde-Ortiz, Raffet
2012-09-04
A method of manufacturing a sturdy and pliable fibrous hemostatic dressing by making fibers that maximally expose surface area per unit weight of active ingredients as a means for aiding in the clot forming process and as a means of minimizing waste of active ingredients. The method uses a rotating object to spin off a liquid biocompatible fiber precursor, which is added at its center. Fibers formed then deposit on a collector located at a distance from the rotating object creating a fiber layer on the collector. An electrical potential difference is maintained between the rotating disk and the collector. Then, a liquid procoagulation species is introduced at the center of the rotating disk such that it spins off the rotating disk and coats the fibers.
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Hemminger, Jessica; Kurt, Habibe; Lozanski, Gerard; Gurcan, Metin
2014-03-01
Vascularity represents an important element of the tissue/tumor microenvironment and is implicated in tumor growth, metastatic potential, and resistance to therapy. Small blood vessels can be visualized using immunohistochemical stains specific to vascular cells. However, currently used manual methods to assess vascular density are poorly reproducible and are at best semi-quantitative. Computer-based quantitative and objective methods to measure microvessel density are urgently needed to better understand and clinically utilize microvascular density information. We propose a new method to quantify vascularity from images of bone marrow biopsies stained for the CD34 vascular lining cell protein as a model. The method starts by automatically segmenting the blood vessels by maxlink thresholding and minimum graph cuts. The segmentation is followed by morphological post-processing to remove blasts and small spurious objects from the bone marrow images. To classify the images into one of four grades, we extracted 20 features from the segmented blood vessel images. These features include the first four moments of the distribution of the area of blood vessels, and the first four moments of the distributions of 1) the edge weights in the minimum spanning tree of the blood vessels, 2) the shortest distance between blood vessels, 3) the homogeneity of the shortest distance (absolute difference in distance between consecutive blood vessels along the shortest path) between blood vessels, and 4) blood vessel orientation. The method was tested on 26 bone marrow biopsy images stained with the CD34 IHC stain, which were evaluated by three pathologists. The pathologists took part in this study by quantifying blood vessel density using gestalt assessment in the hematopoietic portions of bone marrow core biopsy images. To determine the intra-reader variability, each image was graded twice by each pathologist with a two-week interval between readings.
For each image, the ground truth (grade) was acquired through consensus among the three pathologists at the end of the study. A ranking of the features reveals that the fourth moment of the distribution of the area of blood vessels, together with the first moment of the distribution of the shortest distance between blood vessels, can correctly grade 68.2% of the bone marrow biopsies, while the intra- and inter-reader variability among the pathologists is 66.9% and 40.0%, respectively.
Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van
2017-05-04
Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance, and location. Manual tumor segmentation is a time-consuming task, and clinical practice would benefit from (semi-)automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted, and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice scores of 65%, 74%, and 80% are found for active tumor, the tumor core, and the whole tumor region. Mean Hausdorff distances of 6.1 mm, 7.4 mm, and 8.2 mm are found for active tumor, the tumor core, and the whole tumor region. Lower Dice scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice scores and Hausdorff distances, segmentation results are competitive with the state of the art in the literature.
Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
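The core factorization behind the framework above can be sketched with generic multiplicative updates; this is a textbook NMF with an L1 penalty on the abundance matrix, not the authors' algorithm, and the paper's spatial-consistency terms, initialization scheme, and post-processing are deliberately omitted:

```python
import numpy as np

def nmf_l1(X, k, lam=0.1, iters=200, seed=0):
    """NMF with an L1 penalty on the abundance matrix H,
    solved by multiplicative updates.

    X: (m, n) non-negative data matrix (e.g. voxels x MRI channels,
       transposed as needed); k: number of sources (tissue types).
    Returns W (m, k) source signatures and H (k, n) abundance maps.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        # Multiplicative update for H; lam in the denominator
        # shrinks H toward zero (the L1 sparsity term).
        H *= (W.T @ X) / (W.T @ W @ H + lam + 1e-12)
        # Standard multiplicative update for W.
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

The multiplicative form keeps both factors non-negative throughout, which is what lets the rows of H be read as tissue abundance maps.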
Using self-organizing maps to classify humpback whale song units and quantify their similarity.
Allen, Jenny A; Murray, Anita; Noad, Michael J; Dunlop, Rebecca A; Garland, Ellen C
2017-10-01
Classification of vocal signals can be undertaken using a wide variety of qualitative and quantitative techniques. Using east Australian humpback whale song from 2002 to 2014, a subset of vocal signals was acoustically measured and then classified using a Self-Organizing Map (SOM). The SOM created (1) an acoustic dictionary of units representing the song's repertoire, and (2) Cartesian distance measurements among all unit types (SOM nodes). Utilizing the SOM dictionary as a guide, additional song recordings from east Australia were rapidly (manually) transcribed. To assess the similarity in song sequences, the Cartesian distance output from the SOM was applied in Levenshtein distance similarity analyses as a weighting factor to better incorporate unit similarity in the calculation (previously a qualitative process). SOMs provide a more robust and repeatable means of categorizing acoustic signals along with a clear quantitative measurement of sound type similarity based on acoustic features. This method can be utilized for a wide variety of acoustic databases especially those containing very large datasets and can be applied across the vocalization research community to help address concerns surrounding inconsistency in manual classification.
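The weighted Levenshtein idea above, with substitution costs drawn from inter-unit (SOM node) distances, can be sketched as follows; the specific cost function, its normalization, and the indel cost are assumptions for illustration rather than the paper's exact weighting:

```python
def weighted_levenshtein(a, b, sub_cost, indel=1.0):
    """Levenshtein distance where substitution cost reflects unit similarity.

    a, b: sequences of song-unit labels
    sub_cost: function (u, v) -> substitution cost, e.g. a normalized
              Cartesian distance between the units' SOM nodes
    """
    m, n = len(a), len(b)
    prev = [j * indel for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [i * indel]
        for j in range(1, n + 1):
            cur.append(min(prev[j] + indel,              # deletion
                           cur[j - 1] + indel,           # insertion
                           prev[j - 1] + (0.0 if a[i - 1] == b[j - 1]
                                          else sub_cost(a[i - 1], b[j - 1]))))
        prev = cur
    return prev[n]
```

With a constant substitution cost of 1 this reduces to the classic edit distance; similar units (low `sub_cost`) make sequences that swap them count as nearly identical, which is the point of the SOM-based weighting.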
Effects of Proximity to Supermarkets on a Randomized Trial Studying Interventions for Obesity
Kleinman, Ken; Melly, Steven J.; Sharifi, Mona; Marshall, Richard; Block, Jason; Cheng, Erika R.; Taveras, Elsie M.
2016-01-01
Objectives. To determine whether proximity to a supermarket modified the effects of an obesity intervention. Methods. We examined 498 children aged 6 to 12 years with a body mass index (BMI) at or above the 95th percentile participating in an obesity trial in Massachusetts from 2011 to 2013. The practice-based interventions included computerized clinician decision support plus family self-guided behavior change or health coaching. Outcomes were 1-year change in BMI z-score, sugar-sweetened beverage intake, and fruit and vegetable intake. We examined distance to the closest supermarket as an effect modifier. Results. Distance to supermarkets was an effect modifier of 1-year change in BMI z-score and fruit and vegetable intake but not sugar-sweetened beverage intake. With each 1-mile shorter distance to a supermarket, intervention participants increased their fruit and vegetable intake by 0.29 servings per day and reduced their BMI z-score by 0.04 units relative to controls. Conclusions. Living closer to a supermarket is associated with greater improvements in fruit and vegetable intake and weight status in an obesity intervention. PMID:26794159
Truck Platooning Evaluations | Transportation Research | NREL
Evaluations measured the fuel consumption of Class 8 tractors over a range of speeds, following distances, and gross vehicle weights. Platooning improved fuel economy at all speeds tested, with results varying by fleet operational characteristics. Refer to the report for details. Publications: "Fuel Consumption of Class 8 Vehicles over a Range of Speeds, Following Distances, and Mass" (report and posters).
20 CFR 655.103 - Overview of this subpart and definition of terms.
Code of Federal Regulations, 2014 CFR
2014-04-01
... effect wage rate (AEWR). The annual weighted average hourly wage for field and livestock workers.... 1188. Area of intended employment. The geographic area within normal commuting distance of the place of... constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual...
20 CFR 655.103 - Overview of this subpart and definition of terms.
Code of Federal Regulations, 2013 CFR
2013-04-01
... effect wage rate (AEWR). The annual weighted average hourly wage for field and livestock workers.... 1188. Area of intended employment. The geographic area within normal commuting distance of the place of... constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual...
20 CFR 655.103 - Overview of this subpart and definition of terms.
Code of Federal Regulations, 2012 CFR
2012-04-01
... effect wage rate (AEWR). The annual weighted average hourly wage for field and livestock workers.... 1188. Area of intended employment. The geographic area within normal commuting distance of the place of... constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual...
Code of Federal Regulations, 2013 CFR
2013-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Code of Federal Regulations, 2011 CFR
2011-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Code of Federal Regulations, 2014 CFR
2014-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Code of Federal Regulations, 2012 CFR
2012-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Panda, Rashmi; Puhan, N B; Panda, Ganapati
2018-02-01
Accurate optic disc (OD) segmentation is an important step in cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates mean curvature and Gabor texture energy features into a new composite weight function used to compute the edge weights. Unlike deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and the local energy minima problem. The effectiveness of the proposed method is verified on DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show the robustness and superiority of the proposed algorithm in handling the complex challenges of OD segmentation.
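The idea of a composite edge weight for the random walker can be illustrated with a hypothetical Gaussian weight over several feature channels (the channels, values, and sensitivity parameters below are made up; the Letter's actual composite function may differ):

```python
# Hypothetical sketch of the idea (not the Letter's exact weight function):
# a random-walk edge weight combining an intensity term with extra feature
# terms (stand-ins for mean curvature and Gabor texture energy) through a
# Gaussian kernel, so walkers cross similar pixels easily but rarely cross
# strong feature boundaries.

import math

def edge_weight(f_i, f_j, betas):
    """f_i, f_j: feature vectors at neighboring pixels;
    betas: one sensitivity parameter per feature channel."""
    s = sum(b * (a - c) ** 2 for b, (a, c) in zip(betas, zip(f_i, f_j)))
    return math.exp(-s)

# Three channels: intensity, curvature, texture energy (made-up values).
betas = (90.0, 30.0, 30.0)
w_similar  = edge_weight((0.5, 0.1, 0.3), (0.52, 0.1, 0.31), betas)
w_boundary = edge_weight((0.5, 0.1, 0.3), (0.9, 0.6, 0.8), betas)
print(w_similar > w_boundary)   # True
```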
Miarka, Bianca; Brito, Ciro José; Bello, Fábio Dal; Amtmann, John
2017-10-01
This study compared motor actions and spatiotemporal changes between weight divisions of the Ultimate Fighting Championship (UFC™), with practical applications for mixed martial arts (MMA) training. We analyzed 2814 rounds across all weight divisions, coding motor actions and time spent in the keeping-distance, clinch, and groundwork combat phases. We observed differences between weight divisions in keeping distance during stand-up combat (p≤0.001; lowest times in featherweight, 131.4 s, and bantamweight, 127.9 s), clinch without attack (p≤0.001; highest times in flyweight, 11.4 s, and half-middleweight, 12.6 s) and groundwork without attack (p≤0.001; highest time in half-middleweight, 0.9 s). During keeping distance, half-middleweights showed a higher frequency of head strikes landed (p=0.026; 7±8 times) and attempted (p=0.003; 24±22 times). In clinch actions, heavyweights showed a higher frequency (p≤0.023) of head strikes landed (3±7 times) and attempted (4±9 times), and half-middleweights of body strikes (p≤0.023) landed (2±5 times) and attempted (3±5 times). Finally, during groundwork, bantamweights showed a higher frequency (p≤0.036) of head strikes landed (8±10 times) and attempted (10±13 times), and of body strikes landed (p≤0.044; 3±5 times) and attempted (3±6 times). These results provide a practical reference for conditioning and training plans: given the differences between weight divisions, coaches of lighter and middle divisions should account for the higher frequency of distance actions, while bantamweights can focus more than other divisions on striking and grappling actions on the ground. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, Zach S.; Suyu, Sherry H.; Treu, Tommaso
2013-05-01
In order to use strong gravitational lens time delays to measure precise and accurate cosmological parameters, the effects of mass along the line of sight must be taken into account. We present a method to achieve this by constraining the probability distribution function of the effective line-of-sight convergence κ_ext. The method is based on matching the observed overdensity in the weighted number of galaxies to that found in mock catalogs with κ_ext obtained by ray-tracing through structure formation simulations. We explore weighting schemes based on projected distance, mass, luminosity, and redshift. This additional information reduces the uncertainty of κ_ext from σ_κ ≈ 0.06 to ≈0.04 for very overdense lines of sight like that of the system B1608+656. For more common lines of sight, σ_κ is reduced to ≲0.03, corresponding to an uncertainty of ≲3% on distance. This uncertainty has comparable effects on cosmological parameters to that arising from the mass model of the deflector and its immediate environment. Photometric redshifts based on g, r, i and K photometry are sufficient to constrain κ_ext almost as well as spectroscopic redshifts. As an illustration, we apply our method to the system B1608+656. Our most reliable κ_ext estimator gives σ_κ = 0.047, down from 0.065 using only galaxy counts. Although deeper multiband observations of the field of B1608+656 are necessary to obtain a more precise estimate, we conclude that griK photometry, in addition to spectroscopy to characterize the immediate environment, is an effective way to increase the precision of time-delay cosmography.
Gill, Simone V.; Hicks, Gregory E.; Zhang, Yuqing; Niu, Jingbo; Apovian, Caroline M.; White, Daniel K.
2016-01-01
Objective Excess weight is a known risk factor for functional limitation and common in adults with knee osteoarthritis (OA). We asked to what extent high waist circumference was linked with developing difficulty with walking speed and distance over 4 years in adults with or at risk of knee OA. Method Using data from the Osteoarthritis Initiative, we employed WHO categories for Body Mass Index (BMI) and waist circumference (small/medium and large). Difficulty with speed was defined by slow gait: < 1.2 m/s during a 20-meter walk, and difficulty with distance was defined by an inability to walk 400 meters. We calculated risk ratios (RR) to examine the likelihood of developing difficulty with distance and speed using obesity and waist circumference as predictors, with RRs adjusted for potential confounders (i.e., age, sex, race, education, physical activity, and OA status). Results Participants with obesity and large waists were 2.2 times more likely to have difficulty with speed at 4 years compared with healthy-weight and small/medium-waisted participants (Adjusted RR 2.2 [95% Confidence interval (CI) 1.6, 3.1], P < .0001). Participants with obesity and a large waist circumference had 2.4 times the risk of developing the inability to walk 400 meters compared with those with a healthy BMI and small/medium waist circumference (Adjusted RR 2.4 [95% CI 1.6, 3.7], P < .0001). Conclusions Waist circumference may be a main risk factor for developing difficulty with speed in adults with or at risk of knee OA. PMID:27492464
Concordance of Commercial Data Sources for Neighborhood-Effects Studies
Schootman, Mario
2010-01-01
Growing evidence supports a relationship between neighborhood-level characteristics and important health outcomes. One source of neighborhood data includes commercial databases integrated with geographic information systems to measure availability of certain types of businesses or destinations that may have either favorable or adverse effects on health outcomes; however, the quality of these data sources is generally unknown. This study assessed the concordance of two commercial databases for ascertaining the presence, locations, and characteristics of businesses. Businesses in the St. Louis, Missouri area were selected based on their four-digit Standard Industrial Classification (SIC) codes and classified into 14 business categories. Business listings in the two commercial databases were matched by standardized business name within specified distances. Concordance and coverage measures were calculated using capture–recapture methods for all businesses and by business type, with further stratification by census-tract-level population density, percent below poverty, and racial composition. For matched listings, distance between listings and agreement in four-digit SIC code, sales volume, and employee size were calculated. Overall, the percent agreement was 32% between the databases. Concordance and coverage estimates were lowest for health-care facilities and leisure/entertainment businesses; highest for popular walking destinations, eating places, and alcohol/tobacco establishments; and varied somewhat by population density. The mean distance (SD) between matched listings was 108.2 (179.0) m with varying levels of agreement in four-digit SIC (percent agreement = 84.6%), employee size (weighted kappa = 0.63), and sales volume (weighted kappa = 0.04). Researchers should cautiously interpret findings when using these commercial databases to yield measures of the neighborhood environment. PMID:20480397
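The capture-recapture coverage estimates mentioned above follow the classic Lincoln-Petersen logic, which can be sketched as follows (illustrative counts, not the study's data):

```python
# Illustrative sketch (not the study's code): a Lincoln-Petersen
# capture-recapture estimate of the true number of businesses, and each
# database's coverage, from the counts in two lists and their overlap.

def capture_recapture(n1, n2, m):
    """n1, n2: listings in each database; m: matched listings (in both)."""
    if m == 0:
        raise ValueError("no matches: estimator undefined")
    n_total = n1 * n2 / m                          # estimated true count
    return n_total, n1 / n_total, n2 / n_total     # (estimate, coverage1, coverage2)

# Made-up example: 400 and 500 listings, 250 matched between the databases.
total, cov1, cov2 = capture_recapture(n1=400, n2=500, m=250)
print(total)        # 800.0
print(cov1, cov2)   # 0.5 0.625
```

The estimator assumes the two databases list businesses independently; violations of that assumption are one reason the paper urges caution when interpreting such coverage figures.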
A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps
Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun
2014-01-01
In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. For feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from the 2D image and the depth map, obtaining two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and that CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
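The decision-level fusion step can be sketched as follows (hypothetical weights and distances; the paper's actual coefficients and per-modality distances are not reproduced here):

```python
# Hedged sketch (hypothetical values, not the paper's): decision-level fusion
# of two per-modality classification distances into one total distance, with
# the probe assigned the identity having the smallest fused distance.

def fused_distance(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
    """Weighted combination of the CLDP-Gabor and CLDP-Depth distances."""
    return w_gabor * d_gabor + w_depth * d_depth

def identify(probe_dists):
    """probe_dists: {identity: (gabor_distance, depth_distance)}."""
    return min(probe_dists, key=lambda k: fused_distance(*probe_dists[k]))

# Made-up gallery distances for one probe face.
gallery = {"alice": (0.2, 0.5), "bob": (0.4, 0.1)}
print(identify(gallery))   # bob (0.28 < 0.32)
```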
A New Distance Metric for Unsupervised Learning of Categorical Data.
Jia, Hong; Cheung, Yiu-Ming; Liu, Jiming
2016-05-01
Distance metric is the basis of many learning algorithms, and its effectiveness usually has a significant influence on the learning results. In general, measuring distance for numerical data is a tractable task, but it could be a nontrivial problem for categorical data sets. This paper, therefore, presents a new distance metric for categorical data based on the characteristics of categorical values. In particular, the distance between two values from one attribute measured by this metric is determined by both the frequency probabilities of these two values and the values of other attributes that have high interdependence with the calculated one. Dynamic attribute weight is further designed to adjust the contribution of each attribute-distance to the distance between the whole data objects. Promising experimental results on different real data sets have shown the effectiveness of the proposed distance metric.
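A simplified, hypothetical illustration of the idea (not the paper's exact metric): the distance between two values of one attribute can be judged by how differently they co-occur with the values of an interdependent attribute, here via total-variation distance between conditional frequency distributions.

```python
# Simplified sketch of a frequency-based categorical distance (illustrative
# only): compare the conditional frequency distributions of two values of
# one attribute over another, interdependent attribute.

from collections import Counter

def value_distance(rows, attr, other, a, b):
    """rows: list of dicts; distance between values a, b of `attr`,
    judged by how they co-occur with the values of `other`."""
    def cond_dist(v):
        counts = Counter(r[other] for r in rows if r[attr] == v)
        total = sum(counts.values())
        return {k: c / total for k, c in counts.items()}
    pa, pb = cond_dist(a), cond_dist(b)
    keys = set(pa) | set(pb)
    # total-variation distance between the two conditional distributions
    return 0.5 * sum(abs(pa.get(k, 0.0) - pb.get(k, 0.0)) for k in keys)

data = [
    {"color": "red",  "shape": "circle"},
    {"color": "red",  "shape": "circle"},
    {"color": "blue", "shape": "square"},
    {"color": "blue", "shape": "circle"},
]
print(value_distance(data, "color", "shape", "red", "blue"))   # 0.5
```

The attribute-weighting described in the abstract would then scale such per-attribute distances before summing them into an object-level distance.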
Grey situation group decision-making method based on prospect theory.
Zhang, Na; Fang, Zhigeng; Liu, Xiaqing
2014-01-01
This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision better matches the experts' psychological behavior. Based on the TOPSIS method, the paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. A specific example verifies the effectiveness and feasibility of the method.
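The TOPSIS-style distance-to-ideal computation underlying the method can be sketched as follows (a minimal, benefit-criteria-only sketch with made-up scores and weights, not the paper's full procedure):

```python
# Minimal TOPSIS-style sketch (assumed scores and weights): weighted distances
# of each alternative to the positive and negative ideal points, and the
# resulting closeness coefficient used for ranking.

def topsis_closeness(matrix, weights):
    """matrix[i][j]: normalized score of alternative i on criterion j
    (all criteria treated as benefit criteria here)."""
    ncrit = len(weights)
    pos = [max(row[j] for row in matrix) for j in range(ncrit)]   # positive ideal
    neg = [min(row[j] for row in matrix) for j in range(ncrit)]   # negative ideal
    def dist(row, ref):
        return sum(weights[j] * (row[j] - ref[j]) ** 2 for j in range(ncrit)) ** 0.5
    return [dist(r, neg) / (dist(r, pos) + dist(r, neg)) for r in matrix]

# Three alternatives scored on two criteria (made-up values).
scores = [[0.9, 0.8], [0.5, 0.6], [0.1, 0.2]]
cc = topsis_closeness(scores, weights=[0.6, 0.4])
print([round(c, 3) for c in cc])   # [1.0, 0.544, 0.0]
```

The best alternative is closest to the positive ideal and farthest from the negative ideal, which is exactly the role the positive and negative ideal situation distances play as reference points above.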
Buteau, Stephane; Hatzopoulou, Marianne; Crouse, Dan L; Smargiassi, Audrey; Burnett, Richard T; Logan, Travis; Cavellin, Laure Deville; Goldberg, Mark S
2017-07-01
In previous studies investigating the short-term health effects of ambient air pollution, the exposure metric that is often used is the daily average across monitors, thus assuming that all individuals have the same daily exposure. Studies that incorporate space-time exposures of individuals are essential to further our understanding of the short-term health effects of ambient air pollution. As part of a longitudinal cohort study of the acute effects of air pollution that incorporated subject-specific information and medical histories of subjects throughout the follow-up, the purpose of this study was to develop and compare different prediction models using data from fixed-site monitors and other monitoring campaigns to estimate daily, spatially-resolved concentrations of ozone (O3) and nitrogen dioxide (NO2) at participants' residences in Montreal, 1991-2002. We used the following methods to predict spatially-resolved daily concentrations of O3 and NO2 for each geographic region in Montreal (defined by three-character postal code areas): (1) assigning concentrations from the nearest monitor; (2) spatial interpolation using inverse-distance weighting; (3) back-extrapolation from a land-use regression model from a dense monitoring survey; and (4) a combination of a land-use and Bayesian maximum entropy model. We used a variety of indices of agreement to compare estimates of exposure assigned from the different methods, notably scatterplots of pairwise predictions, distribution of differences, and computation of the absolute agreement intraclass correlation (ICC). For each pairwise prediction, we also produced maps of the ICCs by these regions indicating the spatial variability in the degree of agreement. We found some substantial differences in agreement across pairs of methods in daily mean predicted concentrations of O3 and NO2.
On a given day and postal code area the difference in the concentration assigned could be as high as 131 ppb for O3 and 108 ppb for NO2. For both pollutants, better agreement was found between predictions from the nearest monitor and the inverse-distance weighting interpolation methods, with ICCs of 0.89 (95% confidence interval (CI): 0.89, 0.89) for O3 and 0.81 (95% CI: 0.80, 0.81) for NO2, respectively. For this pair of methods the maximum difference on a given day and postal code area was 36 ppb for O3 and 74 ppb for NO2. The back-extrapolation method showed a higher degree of disagreement with the nearest monitor approach, inverse-distance weighting interpolation, and the Bayesian maximum entropy model, which were strongly constrained by the sparse monitoring network. The maps showed that the patterns of agreement differed across the postal code areas and the variability depended on the pair of methods compared and the pollutants. For O3, but not NO2, postal areas showing greater disagreement were mostly located near the city centre and along highways, especially in maps involving the back-extrapolation method. In view of the substantial differences in daily concentrations of O3 and NO2 predicted by the different methods, we suggest that analyses of the health effects from air pollution should make use of multiple exposure assessment methods. Although we cannot make any recommendations as to which is the most valid method, models that make use of higher spatially resolved data, such as from dense exposure surveys or from high spatial resolution satellite data, likely provide the most valid estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
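Method (2), inverse-distance weighting, is the most self-contained of the four and can be sketched as follows (illustrative monitor locations and values, not the Montreal data):

```python
# Minimal inverse-distance-weighting sketch (illustrative, not the study's
# implementation): the concentration at a location is the distance-weighted
# average of the fixed-site monitor readings.

def idw(monitors, x, y, power=2):
    """monitors: list of (mx, my, value). Returns interpolated value at (x, y)."""
    num = den = 0.0
    for mx, my, value in monitors:
        d2 = (x - mx) ** 2 + (y - my) ** 2
        if d2 == 0:
            return value                    # exactly on a monitor
        w = 1.0 / d2 ** (power / 2)         # weight = 1 / distance^power
        num += w * value
        den += w
    return num / den

# Two made-up monitors reading 30 and 50 ppb.
sites = [(0.0, 0.0, 30.0), (2.0, 0.0, 50.0)]
print(idw(sites, 1.0, 0.0))   # 40.0 (equidistant: plain average)
print(idw(sites, 0.5, 0.0))   # 32.0 (pulled toward the nearer monitor)
```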
Fast Voronoi Diagrams and Offsets on Triangulated Surfaces
2000-01-01
[Garbled OCR fragments; recoverable content follows.] The work builds on the Osher-Sethian level set method [16,20] and cites Malladi, R., and J. A. Sethian, "An O(N log N) algorithm for shape modeling," Proceedings of the National Academy of Sciences. In the weighted-distance formulation, F(x, y): R^2 -> R^+ is a function that defines a weight for each point in the domain, and the distance map T(x, y) from a given point P0 assigns ...
Macro-level safety analysis of pedestrian crashes in Shanghai, China.
Wang, Xuesong; Yang, Junguang; Lee, Chris; Ji, Zhuoran; You, Shikai
2016-11-01
Pedestrian safety has become one of the most important issues in the field of traffic safety. This study aims at investigating the association between pedestrian crash frequency and various predictor variables including roadway, socio-economic, and land-use features. The relationships were modeled using the data from 263 Traffic Analysis Zones (TAZs) within the urban area of Shanghai - the largest city in China. Since spatial correlation exists among the zonal-level data, Bayesian Conditional Autoregressive (CAR) models with seven different spatial weight features (i.e. (a) 0-1 first order, adjacency-based, (b) common boundary-length-based, (c) geometric centroid-distance-based, (d) crash-weighted centroid-distance-based, (e) land use type, adjacency-based, (f) land use intensity, adjacency-based, and (g) geometric centroid-distance-order) were developed to characterize the spatial correlations among TAZs. Model results indicated that the geometric centroid-distance-order spatial weight feature, which was introduced in macro-level safety analysis for the first time, outperformed all the other spatial weight features. Population was used as the surrogate for pedestrian exposure, and had a positive effect on pedestrian crashes. Other significant factors included length of major arterials, length of minor arterials, road density, average intersection spacing, percentage of 3-legged intersections, and area of TAZ. Pedestrian crashes were higher in TAZs with medium land use intensity than in TAZs with low and high land use intensity. Thus, higher priority should be given to TAZs with medium land use intensity to improve pedestrian safety. Overall, these findings can help transportation planners and managers understand the characteristics of pedestrian crashes and improve pedestrian safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
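Two of the seven spatial weight features compared above can be sketched for a toy set of zones (assumed centroids and adjacencies, not the Shanghai TAZ data):

```python
# Illustrative sketch (assumed data, not the study's code): two of the spatial
# weight schemes compared in the paper for a handful of zone centroids --
# (a) 0-1 first-order adjacency and (c) inverse centroid-distance weights.

def adjacency_weights(neighbors, n):
    """neighbors: set of (i, j) pairs of zones that share a border."""
    w = [[0.0] * n for _ in range(n)]
    for i, j in neighbors:
        w[i][j] = w[j][i] = 1.0
    return w

def distance_weights(centroids):
    """Inverse Euclidean distance between zone centroids; 0 on the diagonal."""
    n = len(centroids)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = centroids[i], centroids[j]
            d = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            w[i][j] = w[j][i] = 1.0 / d
    return w

cents = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]     # made-up zone centroids
wd = distance_weights(cents)
print(wd[0][1])                                   # 0.2 (centroid distance 5)
wa = adjacency_weights({(0, 1), (1, 2)}, 3)
print(wa[0][2])                                   # 0.0 (not first-order adjacent)
```

In a CAR model these matrices determine which zones' crash counts borrow strength from each other, which is why the choice of weight feature affects model fit.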
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
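The pairwise (clustering) idea behind the global quality score can be sketched with a toy similarity function (the real methods compare structures with measures such as GDT-TS; everything below is illustrative):

```python
# Hedged sketch of the clustering idea (not MULTICOM's code): a model's global
# quality score is its average pairwise similarity to all other models in the
# pool, here with a toy similarity standing in for a structural comparison.

def consensus_scores(models, similarity):
    scores = []
    for i, mi in enumerate(models):
        others = [similarity(mi, mj) for j, mj in enumerate(models) if j != i]
        scores.append(sum(others) / len(others))
    return scores

# Toy "structures": 1-D coordinate lists; similarity decays with RMS deviation.
def toy_similarity(a, b):
    rms = (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return 1.0 / (1.0 + rms)

pool = [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]]   # two similar models, one outlier
scores = consensus_scores(pool, toy_similarity)
print(scores.index(max(scores)))   # a model from the consensus cluster wins
```

This also makes the paper's caveat concrete: when most of the pool is poor, the consensus cluster itself is poor, which is where single-model assessment helps.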
Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad R.; Pompili, Dario; Soltanian-Zadeh, Hamid
2015-01-01
Hippocampus segmentation is a key step in the evaluation of mesial Temporal Lobe Epilepsy (mTLE) by MR images. Several automated methods have been introduced for medical image segmentation. Because of multiple edges, missing boundaries, and shape changes along its longitudinal axis, manual outlining still remains the benchmark for hippocampus segmentation, which, however, is impractical for large datasets due to time constraints. In this study, four automatic methods, namely FreeSurfer, Hammer, Automatic Brain Structure Segmentation (ABSS), and LocalInfo segmentation, are evaluated to find the most accurate and applicable method that best resembles the manual benchmark for the hippocampus. Results from these four methods are compared against those obtained using manual segmentation for T1-weighted images of 157 symptomatic mTLE patients. For performance evaluation of automatic segmentation, the Dice coefficient, Hausdorff distance, precision, and Root Mean Square (RMS) distance are extracted and compared. Among these four automated methods, ABSS generates the most accurate results, and its reproducibility is closest to expert manual outlining by statistical validation. At p < 0.05, the performance measurements for ABSS reveal that Dice is 4%, 13%, and 17% higher, Hausdorff is 23%, 87%, and 70% lower, precision is 5%, -5%, and 12% higher, and RMS is 19%, 62%, and 65% lower compared with LocalInfo, FreeSurfer, and Hammer, respectively. PMID:25571043
Real-time stereographic display of volumetric datasets in radiology
NASA Astrophysics Data System (ADS)
Wang, Xiao Hui; Maitz, Glenn S.; Leader, J. K.; Good, Walter F.
2006-02-01
A workstation for testing the efficacy of stereographic displays for applications in radiology has been developed, and is currently being tested on lung CT exams acquired for lung cancer screening. The system exploits pre-staged rendering to achieve real-time dynamic display of slabs, where slab thickness, axial position, rendering method, brightness and contrast are interactively controlled by viewers. Stereo presentation is achieved by use of either frame-swapping images or cross-polarizing images. The system enables viewers to toggle between alternative renderings such as one using distance-weighted ray casting by maximum-intensity-projection, which is optimal for detection of small features in many cases, and ray casting by distance-weighted averaging, for characterizing features once detected. A reporting mechanism is provided which allows viewers to use a stereo cursor to measure and mark the 3D locations of specific features of interest, after which a pop-up dialog box appears for entering findings. The system's impact on performance is being tested on chest CT exams for lung cancer screening. Radiologists' subjective assessments have been solicited for other kinds of 3D exams (e.g., breast MRI) and their responses have been positive. Objective estimates of changes in performance and efficiency, however, must await the conclusion of our study.
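The two rendering choices viewers toggle between can be contrasted on a single ray of samples (an illustrative sketch, with a simple geometric decay standing in for the distance weighting):

```python
# Illustrative sketch (not the workstation's code): for samples along one ray,
# maximum-intensity projection favors small bright features, while
# distance-weighted averaging summarizes the tissue a feature sits in.

def mip(samples):
    """Maximum-intensity projection along the ray."""
    return max(samples)

def distance_weighted_average(samples, decay=0.9):
    """Samples ordered front to back; nearer samples get larger weights."""
    weights = [decay ** i for i in range(len(samples))]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

ray = [10.0, 12.0, 90.0, 11.0]   # one small bright voxel along the ray
print(mip(ray))                                   # 90.0
print(round(distance_weighted_average(ray), 1))   # 29.6
```

The contrast between the two outputs mirrors the text: MIP keeps the small bright feature visible for detection, while the averaged value characterizes its surroundings.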
Goldberg, Kenneth A; Yashchuk, Valeriy V
2016-05-01
For glancing-incidence optical systems, such as short-wavelength optics used for nano-focusing, incorporating physical factors in the calculations used for shape optimization can improve performance. Wavefront metrology, including the measurement of a mirror's shape or slope, is routinely used as input for mirror figure optimization on mirrors that can be bent, actuated, positioned, or aligned. Modeling shows that when the incident power distribution, distance from focus, angle of incidence, and the spatially varying reflectivity are included in the optimization, higher Strehl ratios can be achieved. Following the works of Maréchal and Mahajan, optimization of the Strehl ratio (for peak intensity with a coherently illuminated system) occurs when the expectation value of the phase error's variance is minimized. We describe an optimization procedure based on regression analysis that incorporates these physical parameters. This approach is suitable for coherently illuminated systems of nearly diffraction-limited quality. Mathematically, this work is an enhancement of the methods commonly applied for ex situ alignment based on uniform weighting of all points on the surface (or a sub-region of the surface). It follows a similar approach to the optimization of apodized and non-uniformly illuminated optical systems. Significantly, it reaches a different conclusion than a more recent approach based on minimization of focal plane ray errors.
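The Maréchal/Mahajan relation referenced above can be made concrete with a small numeric sketch (assumed phase errors and weights; the extended approximation S ≈ exp(-σ_φ²) is used):

```python
# Small numeric sketch of the Marechal/Mahajan relation (illustrative values):
# the Strehl ratio is approximated from the variance of the residual wavefront
# phase error, so minimizing the (weighted) variance maximizes the ratio.

import math

def strehl(phase_errors, weights):
    """Weighted phase-error variance -> extended Marechal approximation."""
    wsum = sum(weights)
    mean = sum(w * p for w, p in zip(weights, phase_errors)) / wsum
    var = sum(w * (p - mean) ** 2 for w, p in zip(weights, phase_errors)) / wsum
    return math.exp(-var)

# Phase errors (radians) at surface points; weights ~ local incident power.
errors = [0.0, 0.3, -0.3, 0.05]
uniform = strehl(errors, [1, 1, 1, 1])      # uniform weighting of the surface
powered = strehl(errors, [4, 1, 1, 4])      # low-power regions weighted down
print(uniform < powered)   # True: weighting by incident power helps here
```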
Cochlea segmentation using iterated random walks with shape prior
NASA Astrophysics Data System (ADS)
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Vera, Sergio; Ceresa, Mario; González Ballester, Miguel Ángel
2016-03-01
Cochlear implants can restore hearing to deaf or partially deaf patients. To plan the intervention, a model must be built from accurate cochlea segmentations of high-resolution µCT images and then adapted to a patient-specific model; a precise segmentation is therefore required. We propose a new framework for segmenting µCT cochlear images using random walks, in which a region term is combined with a distance-based shape prior weighted by a confidence map that adjusts its influence according to the strength of the image contour. The region term can then exploit the high contrast between background and foreground, while the distance prior guides the segmentation at the exterior of the cochlea as well as in less contrasted regions inside it. Finally, a refinement that preserves topology is performed using a topological method and an error control map to prevent boundary leakage. We tested the proposed approach on 10 datasets and compared it with the latest random-walks-with-prior techniques. The experiments suggest that this method gives promising results for cochlea segmentation.
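The confidence-map idea can be sketched as a per-pixel blend; the energy terms and the linear confidence rule below are illustrative stand-ins for the actual random-walks formulation:

```python
def blended_energy(region_term, shape_prior, edge_strength):
    """Blend a region (contrast) term with a shape prior: a weak image contour
    (low edge_strength) raises confidence in the prior. All terms illustrative."""
    confidence = 1.0 - edge_strength  # weak contour -> trust the shape prior
    return region_term + confidence * shape_prior

strong_edge = blended_energy(region_term=0.2, shape_prior=0.9, edge_strength=0.9)
weak_edge   = blended_energy(region_term=0.2, shape_prior=0.9, edge_strength=0.1)
print(strong_edge < weak_edge)  # prior contributes more where contours are weak
```

This captures the adaptive behaviour described above: the prior dominates only where the image evidence is unreliable.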
Camera Calibration with Radial Variance Component Estimation
NASA Astrophysics Data System (ADS)
Mélykuti, B.; Kruck, E. J.
2014-11-01
Camera calibration plays an increasingly important role today. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other lightweight flying platforms. In-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise at the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements was analyzed as a function of the distance of points from the image center. This test provides a curve of measurement precision as a function of photo radius. A large number of camera types have been tested with well-distributed point measurements in image space. The tests establish a functional connection between accuracy and radial distance and yield a method for checking and enhancing the geometric capability of cameras with respect to these results.
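The radial analysis can be sketched by binning measurement residuals by image radius and computing a per-bin RMS; the residual values below are synthetic:

```python
import math

def precision_vs_radius(radii, residuals, bin_width):
    """Bin residuals by distance from the principal point and return the
    per-bin RMS residual as a simple precision estimate."""
    bins = {}
    for r, v in zip(radii, residuals):
        bins.setdefault(int(r // bin_width), []).append(v)
    return {b: math.sqrt(sum(v * v for v in vals) / len(vals))
            for b, vals in sorted(bins.items())}

radii     = [0.5, 0.8, 3.2, 3.7, 6.1, 6.4]     # distance from image centre
residuals = [0.1, -0.1, 0.3, -0.2, 0.6, -0.5]  # photo-measurement residuals
curve = precision_vs_radius(radii, residuals, bin_width=3.0)
print(curve[0] < curve[2])  # precision degrades toward the image corners
```

The resulting dictionary is the "precision curve" over radius that the abstract describes.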
The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.
Han, Gaining; Fu, Weiping; Wang, Wen
2016-01-01
In the behavior dynamics model, behavior competition causes oscillation of the intelligent vehicle's navigation path when the time-variant target-seeking and obstacle-avoidance behaviors occur simultaneously. Considering the safety and real-time requirements of the intelligent vehicle, a particle swarm optimization (PSO) algorithm is proposed to solve this problem by optimizing the weight coefficients of the heading angle and the path velocity. First, based on the behavior dynamics model, a fitness function is defined that accounts for the vehicle's driving characteristics, the distance between the vehicle and obstacles, and the distance between the vehicle and the target. Second, the behavior coordination parameters that minimize the fitness function are obtained by the PSO algorithm. Finally, simulation results show that the optimization method and its fitness function reduce perturbations of the planned path and improve real-time performance and reliability.
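A toy PSO run conveys the weight-coefficient search; the stand-in fitness function (a paraboloid with an assumed optimum at (0.6, 0.4)) replaces the paper's behavior-dynamics fitness:

```python
import random

def fitness(w):
    """Stand-in for the behavior-dynamics fitness; optimum assumed at (0.6, 0.4)."""
    return (w[0] - 0.6) ** 2 + (w[1] - 0.4) ** 2

def pso(n_particles=20, iters=100, seed=1):
    """Minimal global-best PSO over two weight coefficients in [0, 1]^2."""
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return gbest

w = pso()
print(fitness(w) < 1e-3)  # swarm converges near the assumed optimum
```

The inertia and acceleration constants (0.7, 1.5, 1.5) are common textbook choices, not values from the paper.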
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the embedding process, image content has a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with large within-class scatter and small between-class scatter; as a result, the steganalysis features become inseparable owing to differences in image statistical characteristics. In this paper, a new steganalysis framework is proposed that reduces the differences in image statistical characteristics caused by varied content and processing methods. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier, and the final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.
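The fusion stage can be sketched as a weighted average of per-subset classifier scores; the scores, weights, and 0.5 threshold are illustrative, not the paper's:

```python
def weighted_fusion(scores, weights):
    """Combine per-subset stego scores into one decision value."""
    return sum(w * s for w, s in zip(scores, weights)) / sum(weights)

# one classifier per texture-complexity subset (smooth, medium, textured);
# scores and weights are hypothetical
subset_scores  = [0.9, 0.6, 0.4]
subset_weights = [0.5, 0.3, 0.2]   # e.g., proportional to subset reliability

decision = weighted_fusion(subset_scores, subset_weights)
print(decision > 0.5)  # above threshold -> classified as stego here
```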
Stand-up time in tunnels based on rock mass rating, Bieniawski 1989
NASA Astrophysics Data System (ADS)
Nata, Refky Adi; M. S., Murad
2017-11-01
RMR (Rock Mass Rating), also known as the geomechanics classification, has been modified over the years and adopted as an international standard for rock-mass weighting. The classification was developed by Bieniawski (1973, 1976, 1989). The goals of this research are to determine the rock class according to the Bieniawski (1989) rock mass rating, to estimate the stand-up time of the rock mass, and to determine the unsupported span of the tunnel opening in an underground mine. The following parameters were measured: strength of the intact rock material, RQD (Rock Quality Designation), spacing of discontinuities, condition of discontinuities, groundwater, and the adjustment for discontinuity orientation. Laboratory testing of the samples gave a uniaxial compressive strength (UCS) of 30.583 MPa for coal, which rates 4 in the Bieniawski classification; siltstone gave a UCS of 35.749 MPa, also rating 4. Field measurements gave an average RQD of 97.38% for coal (rating 20) and 90.10% for siltstone (also rating 20). The average discontinuity spacing measured in the field is 22.6 cm for coal (rating 10) and 148 cm for siltstone (rating 15). Persistence in the field varies: 57.28 cm for coal (rating 6) and 47 cm for siltstone (rating 6). According to the Bieniawski (1989) rating table, the aperture in coal is 0.41 mm, which lies in the 0.1-1 mm range (rating 4), while the aperture in siltstone is 21.43 mm, which lies in the > 5 mm range (rating 0). The roughness condition of both coal and siltstone is classified as rough (rating 5), and the infilling condition of both is classified as none (rating 6).
The weathering condition of both coal and siltstone is classified as highly weathered (rating 1). The groundwater condition is classified as dripping in coal (rating 4) and completely dry in siltstone (rating 15). In coal, the discontinuity orientation is parallel to the tunnel axis, in the range 251°-290°, which is unfavourable (adjustment -10). In siltstone, the discontinuities are likewise parallel to the tunnel axis, in the range 241°-300°. Based on the Bieniawski (1989) weighting of these parameters, the rock mass falls in class II with a rating of 62, giving a stand-up time of up to 6 months and allowing an unsupported tunnel opening span of 8 meters.
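The final rating is simply the sum of the parameter ratings plus the orientation adjustment. A minimal tally is sketched below using the siltstone ratings quoted above; the -10 orientation adjustment for siltstone is an assumption (the text states it explicitly only for coal), chosen because it reproduces the reported class II value of 62:

```python
# Bieniawski (1989) RMR tally: sum of parameter ratings plus the (negative)
# adjustment for discontinuity orientation. Siltstone ratings from the text.
siltstone_ratings = {
    "UCS": 4, "RQD": 20, "spacing": 15, "persistence": 6, "aperture": 0,
    "roughness": 5, "infilling": 6, "weathering": 1, "groundwater": 15,
}
orientation_adjustment = -10  # assumed same unfavourable adjustment as coal

rmr = sum(siltstone_ratings.values()) + orientation_adjustment
print(rmr)  # 62 -> class II (61-80), matching the stand-up time conclusion
```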
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method by extending the locally weighted partial least squares (LW-PLS) approach to cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA) is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm identifies the training objects most similar to the one to be predicted and builds a PLS-DA model using only these calibration samples. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme that gives the farthest calibration objects less impact than the closest ones. The performance of the proposed locally weighted partial least squares discriminant analysis (LW-PLS-DA) algorithm was tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. The preliminary results presented in this paper show that the performance of the proposed LW-PLS-DA approach is comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
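A greatly simplified sketch of the local-modelling idea follows: select the training samples closest to the query and down-weight the farther ones. A distance-weighted vote stands in for the local PLS-DA fit, and the data are synthetic:

```python
import math

def lw_classify(query, X, y, k=3):
    """Pick the k nearest training samples and classify from distance-weighted
    class evidence (a stand-in for fitting a local PLS-DA model)."""
    nearest = sorted(((math.dist(query, x), label)
                      for x, label in zip(X, y)))[:k]
    votes = {}
    for dist, label in nearest:
        votes[label] = votes.get(label, 0.0) + 1.0 / (dist + 1e-9)
    return max(votes, key=votes.get)

X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
y = ["A", "A", "B", "B", "B"]
print(lw_classify((0.05, 0.1), X, y))  # "A": the two nearest samples dominate
```

The 1/(d + ε) weights mirror the abstract's non-uniform distance-based weighting: close calibration objects dominate the local decision.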
Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas
2015-11-10
Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
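A minimal sketch of the weighting idea: features are weighted inversely by an assumed effective rate before computing distances, so slow coordinates dominate. The rates are invented and the 1/r weight is one simple choice, not necessarily the authors':

```python
import math

def rate_weighted_distance(a, b, rates):
    """Weighted Euclidean distance with per-feature weights 1/rate, so slow
    (small-rate) degrees of freedom dominate the metric."""
    weights = [1.0 / r for r in rates]
    return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5

a, b = [0.0, 0.0], [1.0, 1.0]
slow_fast_rates = [0.1, 10.0]       # feature 0 is slow, feature 1 is fast
d = rate_weighted_distance(a, b, slow_fast_rates)
print(d > math.dist(a, b))  # the slow coordinate now dominates the metric
```

Feeding such a metric to a clustering or dimensionality-reduction algorithm is what lets the slow, metastability-reporting coordinates shape the result.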
Peurala, Sinikka H; Tarkka, Ina M; Pitkänen, Kauko; Sivenius, Juhani
2005-08-01
To compare body weight-supported exercise on a gait trainer with walking exercise overground. Randomized controlled trial. Rehabilitation hospital. Forty-five ambulatory patients with chronic stroke. Patients were randomized to 3 groups: (1) gait trainer exercise with functional electric stimulation (GTstim), (2) gait trainer exercise without stimulation (GT), and (3) walking overground (WALK). All patients practiced gait for 15 sessions during 3 weeks (each session, 20 min), and they received additional physiotherapy 55 minutes daily. Ten-meter walk test (10MWT), six-minute walk test (6MWT), lower-limb spasticity and muscle force, postural sway tests, Modified Motor Assessment Scale (MMAS), and FIM instrument scores were recorded before, during, and after the rehabilitation and at 6 months follow-up. The mean walking distance using the gait trainer was 6900+/-1200 m in the GTstim group and 6500+/-1700 m in GT group. In the WALK group, the distance was 4800+/-2800 m, which was less than the walking distance obtained in the GTstim group (P=.027). The body-weight support was individually reduced from 30% to 9% of the body weight over the course of the program. In the pooled 45 patients, the 10MWT (P<.001), 6MWT (P<.001), MMAS (P<.001), dynamic balance test time (P<.001), and test trip (P=.005) scores improved; however, no differences were found between the groups. Both the body weight-supported training and walking exercise training programs resulted in faster gait after the intensive rehabilitation program. Patients' motor performance remained improved at the follow-up.
Determining H {sub 0} with Bayesian hyper-parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardona, Wilmar; Kunz, Martin; Pettorino, Valeria, E-mail: wilmar.cardona@unige.ch, E-mail: Martin.Kunz@unige.ch, E-mail: valeria.pettorino@thphys.uni-heidelberg.de
We re-analyse recent Cepheid data to estimate the Hubble parameter H0 by using Bayesian hyper-parameters (HPs). We consider the two data sets from Riess et al. 2011 and 2016 (labelled R11 and R16, with R11 containing less than half the data of R16), include the available anchor distances (the megamaser system NGC4258, detached eclipsing binary distances to the LMC and M31, and MW Cepheids with parallaxes), use a weak metallicity prior, and apply no period cut for Cepheids. We find that part of the R11 data is down-weighted by the HPs but that R16 is mostly consistent with expectations for a Gaussian distribution, meaning that there is no need to down-weight the R16 data set. For R16, we find a value of H0 = 73.75 ± 2.11 km s⁻¹ Mpc⁻¹ if we use HPs for all data points (including Cepheid stars, type Ia supernovae, and the available anchor distances), which is about 2.6σ larger than the Planck 2015 value of H0 = 67.81 ± 0.92 km s⁻¹ Mpc⁻¹ and about 3.1σ larger than the updated Planck 2016 value of 66.93 ± 0.62 km s⁻¹ Mpc⁻¹. If we perform a standard χ² analysis as in R16, we find H0 = 73.46 ± 1.40 (stat) km s⁻¹ Mpc⁻¹. We test the effect of different assumptions and find that the choice of anchor distances affects the final value significantly. If we exclude the Milky Way from the anchors, the value of H0 decreases; we find, however, no evident reason to exclude the MW data. The HP method used here avoids subjective rejection criteria for outliers and offers a way to test datasets for unknown systematics.
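To convey what the HPs accomplish (without reproducing the paper's Bayesian marginalisation), here is a toy down-weighting rule applied to a weighted mean; the alpha rule and all numbers are illustrative only:

```python
def hp_weighted_mean(values, sigmas, model):
    """Toy hyper-parameter-style estimator: each point's weight shrinks when
    it is strongly discrepant with the model, instead of being rejected by a
    hard outlier cut. The rule alpha = min(1, 1/chi2) is illustrative only."""
    chi2 = [((v - model) / s) ** 2 for v, s in zip(values, sigmas)]
    alpha = [min(1.0, 1.0 / c) if c > 0 else 1.0 for c in chi2]
    w = [a / s ** 2 for a, s in zip(alpha, sigmas)]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

values = [73.0, 73.5, 74.0, 90.0]   # last point is a gross outlier
sigmas = [1.0, 1.0, 1.0, 1.0]
naive = sum(values) / len(values)
robust = hp_weighted_mean(values, sigmas, model=73.5)
print(abs(robust - 73.5) < abs(naive - 73.5))  # outlier is down-weighted
```

The point is qualitative: discrepant data lose influence smoothly and objectively, which is the behaviour the abstract attributes to the HP method.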
SU-F-I-40: Impact of Scan Length On Patient Dose in Abdomen/pelvis CT Diagnosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, I; Song, J; Kim, K
Purpose: To analyze the impact of scan length on patient dose in abdomen/pelvis CT diagnosis at each hospital. Methods: The scan lengths used for abdomen/pelvis CT diagnosis at 7 hospitals in Korea were surveyed. The surveyed scan lengths comprised the additional distance above the diaphragm and the distance below the pubic symphysis, beyond the standard scan range between the diaphragm and the pubic symphysis. Patient dose was estimated for an adult male and female according to the scan length of each hospital. CT-Expo was used to estimate patient dose under identical equipment settings (120 kVp, 100 mAs, 10 mm collimation width, etc.) apart from scan length. Effective dose was calculated using the tissue weighting factors of the ICRP 103 recommendation, and the rate of increase in effective dose was calculated by comparison with the effective dose of the standard scan range. Results: The scan lengths for abdomen/pelvis CT diagnosis differed between hospitals, and effective dose increased with increasing scan length. In general, increasing the distance above the diaphragm increased the effective dose for both male and female, whereas increasing the distance below the pubic symphysis increased the effective dose for the male. Conclusion: We estimated patient dose according to the scan length of each hospital in abdomen/pelvis CT diagnosis. Effective dose increased with scan length because the doses to organs with high tissue weighting factors, such as the lung, breast, and testis, increased. Scan length is thus an important factor in patient dose in CT diagnosis; if radiologic technologists attend to patient dose, reducing unnecessary scan length will decrease patients' radiation risk. This research was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI13C0004).
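The ICRP 103 effective-dose combination used in such estimates is a tissue-weighted sum, E = Σ_T w_T H_T. A sketch with a subset of the published weighting factors follows; the organ doses and the two scan scenarios are illustrative:

```python
# A few of the published ICRP 103 tissue weighting factors w_T (the full set
# sums to 1.0); organ equivalent doses H_T (mSv) below are illustrative.
W_T = {"lung": 0.12, "breast": 0.12, "stomach": 0.12, "colon": 0.12,
       "gonads": 0.08, "bladder": 0.04, "liver": 0.04}

def effective_dose(organ_doses):
    """Effective dose E = sum over tissues of w_T * H_T (mSv)."""
    return sum(W_T[organ] * h for organ, h in organ_doses.items())

short_scan = {"stomach": 10.0, "colon": 10.0, "bladder": 10.0, "liver": 10.0}
long_scan = dict(short_scan, lung=10.0, breast=10.0)  # scan extended upward
print(effective_dose(long_scan) > effective_dose(short_scan))
```

Extending the scan above the diaphragm brings high-w_T organs (lung, breast) into the beam, which is exactly why the surveyed effective dose grows with scan length.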
NASA Astrophysics Data System (ADS)
Chun, Seon
This research was conducted to explore children's construction of protologic (foreshadowing of operations) in the context of experience with balance mobiles in a constructivist setting and to explore the usefulness of making mobiles in promoting children's development of the concept of balance. The statement of the problem is (a) Can constructivist principles of cognitive development be used to understand children's progress in the course of educational activities involving balance? If so, how? What does the progressive construction of notions about balance look like in children's behaviors? and (b) Does children's understanding of balance improve after experimenting with making mobiles? The participants in this study were 10 first grade children and 12 third grade children from a public elementary laboratory school located in Cedar Falls, Iowa. The pretest and posttest used a primary balance scale and a beam balance. Making mobiles was used as the intervention. The research of Piaget, Kamii, and Parrat-Dayan (1974/1980) and Inhelder and Piaget (1955/1958) was used as the basic framework for the pretest and posttest. All interviews and the dialogues during the tests and making mobiles were video-recorded and transcribed for analysis. Evidence of compensation and reversibility, coherence, coordination, and contradiction was assessed in children's reasoning during intervention activities using operational definitions developed by Jean Piaget. Before the intervention, all children had an idea that weight impacts balance, 13 out of 22 children had the idea that distance from the fulcrum impacts balance, and 6 out of 22 children considered weight and distance at the same time. After the intervention, all children maintained the idea that weight is related to balance, but more children, 16 out of 22, had the idea that distance is related to balance; and 6 children among the 16 considered weight and distance at the same time.
Through the three intervention activities, more children consistently showed the belief that the higher side needs more weight to make the bars balance, as well as an understanding that distance matters for making the bars balance. Nine children experienced a "Eureka" moment; that is, they had a sudden insight about how to make the bars of the mobile balance, or connected their prior experience to the current situation.
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.
2013-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the ℓ2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test.
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
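The weight-prediction step reduces to ordinary least squares in a single anatomical feature. A sketch under synthetic data follows; the feature values, IOM weights, and resulting fit are all illustrative, not the paper's:

```python
def fit_line(x, y):
    """Ordinary least squares in one variable: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

overlap_ratio = [0.5, 1.0, 1.5, 2.0]      # hypothetical anatomical feature
rectum_weight = [0.30, 0.40, 0.50, 0.60]  # hypothetical IOM rectum weights
slope, intercept = fit_line(overlap_ratio, rectum_weight)

rectum = slope * 1.2 + intercept  # predicted rectum weight for a new patient
femoral = 0.02                    # two femoral heads fixed at 1% each
bladder = 1.0 - rectum - femoral  # weights sum to one, as in the paper
print(round(rectum, 3), round(bladder, 3))
```

The closure step (bladder weight as one minus the others) is taken directly from the abstract; the regression numbers are not.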
NASA Astrophysics Data System (ADS)
Buchari; Tarigan, U.; Ambarita, M. B.
2018-02-01
PT. XYZ is a wood processing company that produces semi-finished wood with a make-to-order production system. In the production process, the production line is not balanced; the imbalance is caused by differences in cycle time between work stations. In addition, the material flow pattern is irregular, resulting in backtracking and long travel distances. This study aimed to obtain an allocation of work elements to specific work stations and to propose an improved production layout based on the results of the line balancing. The method used for balancing is Ranked Positional Weight (RPW), also known as the Helgeson-Birnie method, while the method used for improving the layout is Systematic Layout Planning (SLP). Using RPW, line efficiency increased to 84.86% and balance delay decreased to 15.14%. Improving the layout with SLP also gave good results, reducing the path length from 213.09 meters to 133.82 meters, a decrease of 37.2%.
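The core of the Helgeson-Birnie heuristic can be sketched with a toy four-task line; the task times, precedence, and 7-unit cycle time are invented for illustration:

```python
# Ranked Positional Weight: a task's weight is its own time plus the times of
# all of its successors; tasks are assigned in descending weight order.
times = {"A": 4, "B": 3, "C": 2, "D": 5}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def positional_weight(task):
    seen, stack, total = set(), [task], 0
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            total += times[t]
            stack.extend(succ[t])
    return total

ranked = sorted(times, key=positional_weight, reverse=True)

def assign(cycle_time):
    """Greedy station assignment; the ranked order here already respects
    precedence, so a simple capacity check suffices for this toy line."""
    stations, load = [[]], 0
    for t in ranked:
        if load + times[t] > cycle_time:
            stations.append([])
            load = 0
        stations[-1].append(t)
        load += times[t]
    return stations

stations = assign(cycle_time=7)
efficiency = sum(times.values()) / (len(stations) * 7)
print(stations, round(efficiency, 2))
```

Line efficiency is total task time divided by (number of stations × cycle time); balance delay is one minus that efficiency, which is how the 84.86% / 15.14% pair in the abstract relates.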
Darekar, Anuja; Lamontagne, Anouk; Fung, Joyce
2015-04-01
Circumvention around an obstacle entails a dynamic interaction with the obstacle to maintain a safe clearance. We used a novel mathematical interpolation method based on the modified Shepard's method of Inverse Distance Weighting to compute dynamic clearance that reflected this interaction as well as minimal clearance. This proof-of-principle study included seven young healthy, four post-stroke and four healthy age-matched individuals. A virtual environment designed to assess obstacle circumvention was used to administer a locomotor (walking) and a perceptuo-motor (navigation with a joystick) task. In both tasks, participants were asked to navigate towards a target while avoiding collision with a moving obstacle that approached from either head-on, or 30° left or right. Among young individuals, dynamic clearance did not differ significantly between obstacle approach directions in both tasks. Post-stroke individuals maintained larger and smaller dynamic clearance during the locomotor and the perceptuo-motor task respectively as compared to age-matched controls. Dynamic clearance was larger than minimal distance from the obstacle irrespective of the group, task and obstacle approach direction. Also, in contrast to minimal distance, dynamic clearance can respond differently to different avoidance behaviors. Such a measure can be beneficial in contrasting obstacle avoidance behaviors in different populations with mobility problems. Copyright © 2015 Elsevier B.V. All rights reserved.
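The classic (unmodified) Shepard scheme conveys the interpolation idea; the study's modified variant adds local node weighting, which is omitted here for brevity. The sample points and clearance values are invented:

```python
def idw(query, points, values, power=2):
    """Shepard's inverse-distance weighting: nearby known samples dominate
    the interpolated value; exponent 'power' controls the falloff."""
    num = den = 0.0
    for p, v in zip(points, values):
        d2 = sum((a - b) ** 2 for a, b in zip(query, p))
        if d2 == 0:
            return v  # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # sampled positions
clearance = [0.4, 0.8, 0.6]                      # clearance at those positions
print(round(idw((0.1, 0.1), points, clearance), 3))  # pulled toward 0.4
```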
Watkins, Stephanie; Jonsson-Funk, Michele; Brookhart, M Alan; Rosenberg, Steven A; O'Shea, T Michael; Daniels, Julie
2014-05-01
Children born very low birth weight (VLBW) are at an increased risk of delayed development of motor skills. Physical and occupational therapy services may reduce this risk. Among VLBW children, we evaluated whether receipt of physical or occupational therapy services between 9 months and 2 years of age is associated with improved preschool age motor ability. Using data from the Early Childhood Longitudinal Study Birth Cohort we estimated the association between receipt of therapy and the following preschool motor milestones: skipping eight consecutive steps, hopping five times, standing on one leg for 10 seconds, walking backwards six steps on a line, and jumping distance. We used propensity score methods to adjust for differences in baseline characteristics between children who did and did not receive physical or occupational therapy, since children receiving therapy may be at higher risk of impairment. We applied propensity score weights and modeled the estimated effect of therapy on the distance that the child jumped using linear regression. We modeled all other end points using logistic regression. Treated VLBW children were 1.70 times as likely to skip eight steps (RR 1.70, 95 % CI 0.84, 3.44) compared to the untreated group and 30 % more likely to walk six steps backwards (RR 1.30, 95 % CI 0.63, 2.71), although these differences were not statistically significant. We found little effect of therapy on other endpoints. Providing therapy to VLBW children during early childhood may improve select preschool motor skills involving complex motor planning.
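The propensity-score weighting step can be sketched with inverse-probability-of-treatment weights (IPTW); the propensity values below are hypothetical:

```python
def iptw_weight(treated, propensity):
    """Inverse probability of treatment weight: treated children are weighted
    by 1/p, untreated by 1/(1-p), so both groups mimic a common population."""
    return 1.0 / propensity if treated else 1.0 / (1.0 - propensity)

children = [
    (True, 0.8),   # high-risk child, likely to receive therapy
    (True, 0.5),
    (False, 0.8),  # high-risk child who did NOT receive therapy: up-weighted
    (False, 0.2),
]
weights = [iptw_weight(t, p) for t, p in children]
print([round(w, 2) for w in weights])  # [1.25, 2.0, 5.0, 1.25]
```

The up-weighting of untreated high-risk children is what counters the confounding the abstract describes, since children receiving therapy tend to be at higher risk of impairment.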
Uncertainty in the relationship between criteria pollutants and low birth weight in Chicago
NASA Astrophysics Data System (ADS)
Kumar, Naresh
2012-03-01
Using data on all live births (~400,000) and criteria pollutants from the Chicago Metropolitan Statistical Area (MSA) between 2000 and 2004, this paper empirically demonstrates how mismatches in the spatiotemporal scales of health and air pollution data can result in inconsistency and uncertainty in the linkages between air pollution and birth outcomes. The risks of low birth weight associated with air pollution exposure change significantly as the distance interval (around the monitoring stations) used for exposure estimation changes. For example, when the analysis was restricted to within 3 miles of the monitoring stations, the odds of LBW (births <2500 g) increased by a factor of 1.045 (±0.0285, 95% CI) with a unit increase in average daily exposure to PM10 (in μg m-3) during the gestation period; the value dropped to 1.028 when the analysis was restricted to within 6 miles of the monitoring stations. The effect of PM10 exposure on LBW became null when controlled for confounders, but PM2.5 exposure showed a significant association with low birth weight when controlled for confounders. These results must be interpreted with caution: distance to the monitoring station does not itself influence the risk of adverse birth outcomes; rather, uncertainty in exposure increases with distance from the monitoring stations, especially for coarse particles such as PM10, which settle under gravity within a short distance and time interval. The results of this paper have important implications for the research design of environmental epidemiological studies and for the way air pollution (and potentially other environmental) and health data are collocated to compute exposure.
While this paper challenges the findings of previous epidemiological studies that have relied on coarse-resolution air pollution data (such as county-level aggregated data), the paper also calls for time-space resolved estimates of air pollution to minimize uncertainty in exposure estimation.
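The distance-restriction step underlying this design can be sketched as follows. This is a hypothetical illustration (planar coordinates, invented readings), not the paper's code: each birth is assigned the reading of its nearest monitor, but only if that monitor lies within the chosen radius; widening the radius adds births at the cost of noisier exposure estimates.

```python
import math

def assign_exposure(births, monitors, radius):
    """births: [(x, y)]; monitors: [(x, y, reading)].
    Returns one reading per birth (nearest monitor within radius), or None
    when no monitor is close enough, i.e. the birth drops out of the analysis."""
    out = []
    for bx, by in births:
        best = None  # (distance, reading) of nearest in-radius monitor
        for mx, my, reading in monitors:
            d = math.hypot(bx - mx, by - my)
            if d <= radius and (best is None or d < best[0]):
                best = (d, reading)
        out.append(best[1] if best else None)
    return out
```

With great-circle coordinates one would swap `math.hypot` for a haversine distance; the restriction logic is unchanged.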
NASA Astrophysics Data System (ADS)
Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah
2014-05-01
Floods are among the most devastating natural disasters and occur frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become increasingly popular in flood modeling. In this paper, the weights-of-evidence (WoE) model was utilized first to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). Then, these factors were reclassified using the acquired weights and entered into the support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weakness of WoE can be overcome and the performance of the SVM enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four kernel types of SVM (linear (LN), polynomial (PL), radial basis function (RBF), and sigmoid (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained from the RBF kernel when compared with the other kernel types. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
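The bivariate WoE step can be sketched for a single factor class. This is a toy under the standard positive-weight form W+ = ln(P(class | flood) / P(class | no flood)), with invented cell values; the paper's full WoE/SVM pipeline (all classes, contrast weights, reclassification, kernel training) is not reproduced here.

```python
import math

def weight_of_evidence(flood_cells, nonflood_cells, in_class):
    """Positive weight W+ for one class of a conditioning factor.
    in_class: predicate marking cells that fall in the class.
    Assumes the class occurs in both groups (otherwise W+ is undefined)."""
    p_pos = sum(map(in_class, flood_cells)) / len(flood_cells)      # P(class | flood)
    p_neg = sum(map(in_class, nonflood_cells)) / len(nonflood_cells)  # P(class | no flood)
    return math.log(p_pos / p_neg)
```

A positive W+ means the class (e.g. low slope) is over-represented among flooded cells; these weights would then replace the raw class labels before SVM training.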
Heinen, Silke; Weinhart, Marie
2017-03-07
For a meaningful correlation of surface coatings with their respective biological response, reproducible coating procedures, well-defined surface coatings, and thorough surface characterization with respect to layer thickness and grafting density are indispensable. The same applies to polymeric monolayer coatings which are intended to be used for, e.g., fundamental studies on the volume phase transition of surface end-tethered thermoresponsive polymer chains. Planar gold surfaces are frequently used as model substrates, since they allow a variety of straightforward surface characterization methods. Herein we present reproducible grafting-to procedures performed with thermoresponsive poly(glycidyl ether) copolymers composed of glycidyl methyl ether (GME) and ethyl glycidyl ether (EGE). The copolymers feature different molecular weights (2 kDa, 9 kDa, 24 kDa) and are equipped with varying sulfur-containing anchor groups in order to achieve adjustable grafting densities on gold surfaces and hence control the tethered polymers' chain conformation. We determined "wet" and "dry" thicknesses of these coatings by QCM-D and ellipsometry measurements and deduced anchor distances and degrees of chain overlap of the polymer chains assembled on gold. Grafting under cloud point conditions allowed for higher degrees of chain overlap compared to grafting from a good solvent like ethanol, independent of the sulfur-containing anchor group used, for polymers with low (2 kDa) and medium (9 kDa) molecular weights. By contrast, the achieved grafting densities and thus chain overlaps of surface-tethered polymers with high (24 kDa) molecular weights were identical for both grafting methods. Monolayers prepared from an ethanolic solution of poly(glycidyl ether)s equipped with sterically demanding disulfide-containing anchors revealed the lowest degrees of chain overlap.
The ratio of the radius of gyration to the anchor distance (2R_g/l) of the latter coating was found to be lower than 1.4, indicating that the assembly was rather in the mushroom-like than in the brush regime. Polymer chains with thiol-containing anchors of different alkyl chain lengths (C11SH vs C4SH) formed assemblies with comparable degrees of chain overlap with 2R_g/l values above 1.4 and are thus in the brush regime. Molecular weights influenced the achievable degree of chain overlap on the surface. Coatings prepared with the medium molecular weight polymer (9 kDa) resulted in the highest chain packing density. Control of grafting density and thus chain overlap in different regimes (brush vs mushroom) on planar gold substrates is attainable for monolayer coatings with poly(GME-ran-EGE) by adjusting the polymer's molecular weight and anchor group as well as the conditions for the grafting-to procedure.
Wang, Zheng-Xin; Li, Dan-Dan; Zheng, Hong-Hao
2018-01-30
In China's industrialization process, the effective regulation of energy and environment can promote the positive externality of energy consumption while reducing negative externality, which is an important means for realizing the sustainable development of an economic society. The study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (referred to here as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. By separately using the traditional and improved TOPSIS methods, the study carried out empirical appraisals of the external performance of China's energy regulation from 1999 to 2015. The results show that the correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China's energy regulation, and therefore E-M-TOPSIS is well suited to the external performance appraisal of energy regulation. Additionally, the external economic performance and social responsibility performance (including environmental and energy safety performances) based on E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared to the social responsibility performance, the fluctuation of external economic performance is more sensitive to energy regulation.
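The two ingredients the abstract combines, entropy weights for the criteria and Mahalanobis distance to the ideal/anti-ideal solutions, can be sketched as below. This is our own toy implementation on an invented benefit-criteria matrix, not the authors' code; a full E-M-TOPSIS would also normalise the matrix and handle cost criteria.

```python
import numpy as np

def entropy_weights(X):
    """X: alternatives x criteria matrix of positive values.
    Criteria with more dispersion get larger weights."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy per criterion
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()

def closeness(X):
    """Relative closeness of each alternative to the ideal solution,
    using Mahalanobis distance so correlated criteria are not double-counted."""
    w = entropy_weights(X)
    Xw = X * w
    S_inv = np.linalg.pinv(np.cov(Xw, rowvar=False))  # pinv: tolerates singular cov
    ideal, anti = Xw.max(axis=0), Xw.min(axis=0)
    def maha(a, b):
        diff = a - b
        return float(np.sqrt(diff @ S_inv @ diff))
    c = []
    for row in Xw:
        dp, dm = maha(row, ideal), maha(row, anti)
        c.append(dm / (dp + dm))
    return c
```

Replacing `maha` with plain Euclidean distance recovers traditional entropy-weighted TOPSIS, which is exactly the comparison the study draws.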
40 CFR 1065.310 - Torque calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... force is measured. The lever arm must be perpendicular to gravity (i.e., horizontal), and it must be... known distance along a lever arm. Make sure the weights' lever arm is perpendicular to gravity (i.e... Earth's gravity, as described in § 1065.630. Calculate the reference torque as the weights' reference...
Anthropometric Comparisons between Body Measurements of Men and Women
1988-06-01
racial groups. Boys and girls of four ethnic groups (N=637) were studied. Previous results in this area have indicated that the "best" metric...used for Coverall/Flightsuit Analysis 58 1. WEIGHT: weight of subject wearing panties and bra (not pictured). 2. STATURE: vertical distance from floor to
Zarco-Perello, Salvador; Simões, Nuno
2017-01-01
Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae, followed by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances, prediction accuracies increased, especially for IDW.
OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and other parts of the world where remote sensing technologies are not suitable for use.
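A minimal IDW sketch, together with the leave-one-out cross-validation used to compare interpolators, looks like the following. The power parameter and sample layout are illustrative assumptions; the study's kriging comparison is not reproduced here.

```python
import math

def idw(samples, x, y, power=2):
    """samples: [(x, y, value)] -> inverse-distance-weighted prediction at (x, y)."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return v                     # exact hit: return the sample itself
        w = 1.0 / d ** power             # closer samples weigh more
        num += w * v
        den += w
    return num / den

def loo_rmse(samples):
    """Leave-one-out RMSE: predict each sample from all the others."""
    errs = []
    for i, (x, y, v) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errs.append((idw(rest, x, y) - v) ** 2)
    return math.sqrt(sum(errs) / len(errs))
```

The same `loo_rmse` harness can score any interpolator, which is how IDW and OK can be compared on equal footing.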
Standardization of methods of expressing lengths and weights of fish
Hile, Ralph
1948-01-01
Fishery workers in the United States and Canada are unable to think readily in terms of the metric system of weights and measurements. Even long experience does not make it possible to form a clear idea as to the actual size of fish for which lengths and weights are given in metric units, without first converting to the English system. A more general adoption of the English system of weights and measurements in fishery work is recommended. The use of English units exclusively is suggested for articles of a popular or semi-popular nature, but in more formal publications the key information, at least, should be recorded in both systems. In highly technical papers metric units alone may prove satisfactory. Agreement is also lacking as to which length measurement of fish is suited best for uniform adoption. The total length is recommended here for the reason that it is the only measurement that includes all of the fish. This length is defined as the distance from the tip of the head (jaws closed) to the tip of the tail with the lobes compressed so as to give the maximum possible measurement.
Biomechanical Analysis of the Closed Kinetic Chain Upper-Extremity Stability Test.
Tucci, Helga T; Felicio, Lilian R; McQuade, Kevin J; Bevilaqua-Grossi, Debora; Camarini, Paula Maria Ferreira; Oliveira, Anamaria S
2017-01-01
The closed kinetic chain upper-extremity stability (CKCUES) test is a functional test for the upper extremity performed in the push-up position, where individuals support their body weight on 1 hand placed on the ground and swing the opposite hand until touching the hand on the ground, then switch hands and repeat the process as fast as possible for 15 s. Objective: To study scapular kinematic and kinetic measures during the CKCUES test for 3 different distances between hands. Design: Experimental. Setting: Laboratory. Participants: 30 healthy individuals (15 male, 15 female). Participants performed 3 repetitions of the test at 3 distance conditions: original (36 in), interacromial, and 150% interacromial distance between hands. Participants completed a questionnaire on pain intensity and perceived exertion before and after the procedures. Scapular internal/external rotation, upward/downward rotation, and posterior/anterior tilting kinematics and kinetic data on maximum force and time to maximum force were measured bilaterally in all participants. The percentage of body weight on the upper extremities was calculated. Data analyses were based on the total number of hand touches performed for each distance condition, and scapular kinematics and kinetic values were averaged over the 3 trials. Scapular kinematics, maximum force, and time to maximum force were compared for the 3 distance conditions within each gender. The significance level was set at α = .05. Scapular internal rotation, posterior tilting, and upward rotation were significantly greater on the dominant side for both genders. Scapular upward rotation was significantly greater at the original distance than at the interacromial distance in the swing phase. Time to maximum force in women was significantly greater on the dominant side. CKCUES test kinematic and kinetic measures were not different among the 3 conditions based on distance between hands. However, the test might not be suitable for initial or mild-level rehabilitation due to its challenging requirements.
kWIP: The k-mer weighted inner product, a de novo estimator of genetic similarity.
Murray, Kevin D; Webers, Christfried; Ong, Cheng Soon; Borevitz, Justin; Warthmann, Norman
2017-09-01
Modern genomics techniques generate overwhelming quantities of data. Extracting population genetic variation demands computationally efficient methods to determine genetic relatedness between individuals (or "samples") in an unbiased manner, preferably de novo. Rapid estimation of genetic relatedness directly from sequencing data has the potential to overcome reference genome bias, and to verify that individuals belong to the correct genetic lineage before conclusions are drawn using mislabelled or misidentified samples. We present the k-mer Weighted Inner Product (kWIP), an assembly- and alignment-free estimator of genetic similarity. kWIP combines a probabilistic data structure with a novel metric, the weighted inner product (WIP), to efficiently calculate pairwise similarity between sequencing runs from their k-mer counts. It produces a distance matrix, which can then be further analysed and visualised. Our method does not require prior knowledge of the underlying genomes, and applications include establishing sample identity and detecting sample mix-ups, non-obvious genomic variation, and population structure. We show that kWIP can reconstruct the true relatedness between samples from simulated populations. By re-analysing several published datasets we show that our results are consistent with marker-based analyses. kWIP is written in C++, licensed under the GNU GPL, and is available from https://github.com/kdmurray91/kwip.
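The idea behind the weighted inner product can be sketched in a few lines. This is a toy re-implementation of the concept, not the kWIP C++ tool: k-mers are counted exactly (kWIP uses a probabilistic sketch), each k-mer is weighted by the Shannon entropy of its presence/absence across the population, and similarity is a normalised weighted inner product.

```python
import math
from collections import Counter

def kmer_counts(seq, k=3):
    """Exact k-mer counting (kWIP itself uses a probabilistic data structure)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def entropy_weights(counts_list):
    """Weight each k-mer by the entropy of its occurrence across samples:
    k-mers present in every sample (or none) carry no information, weight 0."""
    n = len(counts_list)
    weights = {}
    for km in set().union(*counts_list):
        f = sum(1 for c in counts_list if km in c) / n
        weights[km] = 0.0 if f in (0.0, 1.0) else -(
            f * math.log2(f) + (1 - f) * math.log2(1 - f))
    return weights

def wip_similarity(a, b, weights):
    """Cosine-normalised weighted inner product of two k-mer count vectors."""
    dot = sum(weights[k] * a[k] * b[k] for k in set(a) & set(b))
    na = math.sqrt(sum(weights[k] * v * v for k, v in a.items()))
    nb = math.sqrt(sum(weights[k] * v * v for k, v in b.items()))
    return dot / (na * nb) if na and nb else 0.0
```

A distance matrix for downstream clustering or ordination then follows by applying `wip_similarity` to every pair of samples.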
Discrimination against Obese Exercise Clients: An Experimental Study of Personal Trainers.
Fontana, Fabio; Bopes, Jonathan; Bendixen, Seth; Speed, Tyler; George, Megan; Mack, Mick
2018-01-01
The aim of the study was to compare exercise recommendations, attitudes, and behaviors of personal trainers toward clients of different weight statuses. Fifty-two personal trainers participated in the study. The data collection was organized into two phases. In phase one, trainers read a profile and watched a video displaying an interview with either an obese or an average-weight client. Profiles and video interviews were identical except for weight status. Then, trainers provided exercise recommendations and rated their attitude toward the client. In phase two, trainers personally met an obese or an average-weight mock client. Measures were the duration and number of advice items provided by the trainer in response to a question posed by the client, and the sitting distance between trainer and client. There were no significant differences in exercise intensity (p = .94), duration of the first session (p = .65), and total exercise duration of the first week (p = .76) prescribed to the obese and average-weight clients. The attitude of the personal trainers toward the obese client was not significantly different from their attitude toward the average-weight client (p = .58). The number of advice items provided (p = .49), the duration of the answer (p = .55), and the distance personal trainers sat from the obese client (p = .68) were not significantly different from the behaviors displayed toward the average-weight client. Personal trainers did not discriminate against obese clients in professional settings.
Reactions to Approach-Distance in Overweight and Normal Weight College Females.
ERIC Educational Resources Information Center
Rogers, Ruth Ann; Thomas, Georgelle
Research has found that the need for personal space is greater for normal persons who are interacting with stigmatized persons, such as overweight people, and that one who is identified as deviant may be more sensitive to environmental cues and react more strongly to affective stimuli. To investigate the reactions to approach/distance among…
Automated LSA Assessment of Summaries in Distance Education: Some Variables to Be Considered
ERIC Educational Resources Information Center
Jorge-Botana, Guillermo; Luzón, José M.; Gómez-Veiga, Isabel; Martín-Cordero, Jesús I.
2015-01-01
A latent semantic analysis-based automated summary assessment is described; this automated system is applied to a real learning-from-text task in a Distance Education context. We comment on the use of automated measures of content, plagiarism, and text coherence, and of average word weights, and their impact on predicting human judges' summary scoring. A…
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
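Generating a GTG as described can be sketched as follows. The multiplicative threshold form (w_u + w_v) / d(u, v)² ≥ θ is one common choice and an assumption here, as are the unit-square positions and exponential weights; the paper's mixing-time analysis itself is not reproduced.

```python
import random

def gtg(n, theta, seed=0):
    """Sample a geographical threshold graph: random positions in the unit
    square, random node weights, and an edge wherever the threshold function
    of combined weight and squared distance clears theta."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    wts = [rng.expovariate(1.0) for _ in range(n)]
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d2 = dx * dx + dy * dy
            if d2 > 0 and (wts[u] + wts[v]) / d2 >= theta:
                edges.add((u, v))
    return pos, wts, edges
```

Raising θ sparsifies the graph monotonically (every edge at a higher threshold also exists at a lower one), which is the knob used to sit near the connectivity threshold.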
Self-paced exercise program for office workers: impact on productivity and health outcomes.
Low, David; Gramlich, Martha; Engram, Barbara Wright
2007-03-01
The impact of a self-paced exercise program on productivity and health outcomes of 32 adult workers in a large federal office complex was investigated during 3 months. Walking was the sole form of exercise. The first month, during which no walking occurred, was the control period. The second and third months were the experimental period. Participants were divided into three levels based on initial weight and self-determined walking distance goals. Productivity (using the Endicott Work Productivity Scale), walking distance (using a pedometer), and health outcomes (blood pressure, weight, pulse rate, and body fat percentage) were measured weekly. Results from this study, based on a paired t test analysis, suggest that although the self-paced exercise program had no impact on productivity, it lowered blood pressure and promoted weight loss. Further study using a larger sample and a controlled experimental design is recommended to provide conclusive evidence.
Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J
2013-10-08
When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogenous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous. 
Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission into FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events are by contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered since ability to discriminate between map-based CPs and non-CPs is different over different Euclidean distances.
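The scoring statistics used to compare the contiguity rules against the map-based gold standard can be sketched directly. This is a generic implementation of sensitivity, positive predictive value, and the True Skill Statistic (TSS = sensitivity + specificity − 1) on invented predictions, not the study's data.

```python
def tss_stats(pairs):
    """pairs: list of (predicted_cp: bool, actual_cp: bool) per premises pair.
    Returns (sensitivity, PPV, TSS) of the distance-based rule against the
    map-based gold standard."""
    tp = sum(1 for p, a in pairs if p and a)
    fp = sum(1 for p, a in pairs if p and not a)
    fn = sum(1 for p, a in pairs if not p and a)
    tn = sum(1 for p, a in pairs if not p and not a)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, ppv, sensitivity + specificity - 1  # TSS
```

A <1 km rule that misses many true CPs loses sensitivity, while a <5 km rule that flags many non-CPs loses specificity; TSS penalises both, which is why the area-weighted tessellation scored highest (0.617-0.737).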
GIS-Based Site Selection for Underground Natural Resources Using Fuzzy AHP-OWA
NASA Astrophysics Data System (ADS)
Sabzevari, A. R.; Delavar, M. R.
2017-09-01
Fuel consumption has significantly increased due to the growth of the population. A solution to address this problem is the underground storage of natural gas. The first step toward this goal is to select suitable places for the storage. In this study, site selection for underground natural gas reservoirs has been performed using multi-criteria decision-making in a GIS environment. The "Ordered Weighted Average" (OWA) operator is a multi-criteria decision-making method for ranking the criteria and accounting for uncertainty in the interaction among them. In this paper, Fuzzy AHP_OWA (FAHP_OWA) is used to determine optimal sites for underground natural gas reservoirs. Fuzzy AHP_OWA considers the decision maker's risk taking and risk aversion during the decision-making process. Gas consumption rate, temperature, distance from the main transportation network, distance from gas production centers, population density, and distance from gas distribution networks are the criteria used in this research. Results show that the northeast and west of Iran and the areas around Tehran (Tehran and Alborz Provinces) are more attractive for constructing a natural gas reservoir. The performance of the method was also evaluated, using the locations of the existing natural gas reservoirs in the country and the site selection maps for each of the quantifiers. It is verified that the method used in this study is capable of modeling different decision-making strategies used by the decision maker, with about 88 percent agreement between the modeling and test data.
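The OWA operator at the core of the method can be sketched in two lines. Note the defining twist: criterion scores are sorted before the weights are applied, so the weight vector encodes the decision maker's optimism (risk attitude) rather than criterion importance. Scores and weight vectors below are illustrative; the paper's fuzzy AHP layer is not reproduced.

```python
def owa(scores, order_weights):
    """Ordered Weighted Average: weights apply to ranked scores, not criteria.
    order_weights must be the same length as scores and sum to 1."""
    ranked = sorted(scores, reverse=True)          # best score first
    return sum(w * s for w, s in zip(order_weights, ranked))
```

With all weight on the first position OWA reduces to max (fully optimistic, "at least one criterion is good suffices"); all weight on the last position gives min (fully pessimistic); equal weights give the plain mean, spanning the decision strategies the study models.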
Detecting Network Communities: An Application to Phylogenetic Analysis
Andrade, Roberto F. S.; Rocha-Neto, Ivan C.; Santos, Leonardo B. L.; de Santana, Charles N.; Diniz, Marcelo V. C.; Lobão, Thierry Petit; Goés-Neto, Aristóteles; Pinho, Suani T. R.; El-Hani, Charbel N.
2011-01-01
This paper proposes a new method to identify communities in generally weighted complex networks and applies it to phylogenetic analysis. In this case, weights correspond to the similarity indexes among protein sequences, which can be used for network construction so that the network structure can be analyzed to recover phylogenetically useful information from its properties. The analyses discussed here are mainly based on the modular character of protein similarity networks, explored through the Newman-Girvan algorithm, with the help of the neighborhood matrix. The most relevant networks are found when the network topology changes abruptly, revealing distinct modules related to the sets of organisms to which the proteins belong. Sound biological information can be retrieved by the computational routines used in the network approach, without using biological assumptions other than those incorporated by BLAST. Usually, all the main bacterial phyla and, in some cases, also some bacterial classes corresponded totally (100%) or to a great extent (>70%) to the modules. We checked for internal consistency in the obtained results, and we obtained close to 84% matches for community pertinence when comparisons between the results were performed. To illustrate how to use the network-based method, we employed data for enzymes involved in the chitin metabolic pathway that are present in more than 100 organisms from an original data set containing 1,695 organisms, downloaded from GenBank on May 19, 2007. A preliminary comparison between the outcomes of the network-based method and the results of methods based on Bayesian, distance, likelihood, and parsimony criteria suggests that the former is as reliable as these commonly used methods. We conclude that the network-based method can be used as a powerful tool for retrieving modularity information from weighted networks, which is useful for phylogenetic analysis. PMID:21573202
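The weighted modularity Q that the Newman-Girvan approach optimises can be sketched compactly. This is the standard community-level formula on a toy graph of our own (two triangles joined by a bridge), not the paper's pipeline: a high Q means the candidate modules are denser than expected by chance.

```python
def modularity(edges, partition):
    """edges: {(u, v): weight}, each undirected edge listed once;
    partition: {node: community label}.
    Q = sum over communities of (w_in / m) - (s_tot / 2m)^2."""
    m = sum(edges.values())                          # total edge weight
    strength = {}                                    # weighted degree per node
    for (u, v), w in edges.items():
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    q = 0.0
    for c in set(partition.values()):
        w_in = sum(w for (u, v), w in edges.items()
                   if partition[u] == c and partition[v] == c)
        s_tot = sum(s for n, s in strength.items() if partition[n] == c)
        q += w_in / m - (s_tot / (2 * m)) ** 2
    return q
```

Splitting two weakly linked triangles into separate communities scores a positive Q, while lumping everything together scores zero, which is the signal the algorithm uses to decide where the module boundaries fall.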
Ruffault, Alexis; Czernichow, Sébastien; Hagger, Martin S; Ferrand, Margot; Erichot, Nelly; Carette, Claire; Boujut, Emilie; Flahault, Cécile
The aim of this study was to conduct a comprehensive quantitative synthesis of the effects of mindfulness training interventions on weight loss and health behaviours in adults with overweight and obesity using meta-analytic techniques. Studies included in the analysis (k=12) were randomised controlled trials investigating the effects of any form of mindfulness training on weight loss, impulsive eating, binge eating, or physical activity participation in adults with overweight and obesity. Random effects meta-analysis revealed that mindfulness training had no significant effect on weight loss, but an overall negative effect on impulsive eating (d=-1.13) and binge eating (d=-.90), and a positive effect on physical activity levels (d=.42). Meta-regression analysis showed that methodological features of the included studies accounted for 100% of the statistical heterogeneity of the effects of mindfulness training on weight loss (R² = 1.00). Among methodological features, the only significant predictor of weight loss was follow-up distance from post-intervention (β=1.18; p<.05), suggesting that longer follow-up distances were associated with greater weight loss. Results suggest that mindfulness training has short-term benefits on health-related behaviours. Future studies should explore the effectiveness of mindfulness training on long-term post-intervention weight loss in adults with overweight and obesity. Copyright © 2016 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Aschbrenner, Kelly A.; Naslund, John A.; Shevenell, Megan; Mueser, Kim T.; Bartels, Stephen J.
2016-01-01
Objective Effective and scalable lifestyle interventions are needed to address high rates of obesity in people with serious mental illness (SMI). This pilot study evaluated the feasibility of a behavioral weight loss intervention enhanced with peer support and mobile health (mHealth) technology for obese individuals with SMI. Methods The Diabetes Prevention Program Group Lifestyle Balance intervention enhanced with peer support and mHealth technology was implemented in a public mental health setting. Thirteen obese individuals with SMI participated in a pre-post pilot study of the 24-week intervention. Feasibility was assessed by program attendance, participant satisfaction, and suggestions for improving the model. Descriptive changes in weight and fitness were also explored. Results Overall attendance amounted to approximately half (56%) of weekly sessions. At 6-month follow-up, 45% of participants had lost weight, and 45% showed improved fitness by increasing their walking distance. Participants suggested a number of modifications to increase the relevance of the intervention for people with SMI, including less didactic instruction and more active learning, a simplified dietary component, more in-depth technology training, and greater attention to mental health. Conclusions The principles of standard behavioral weight loss treatment provide a useful starting point for promoting weight loss in people with SMI. However, adaptations to standard weight loss curricula are needed to enhance engagement, participation, and outcomes in response to the unique challenges of individuals with SMI. PMID:26462674
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
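The study's method combines crowdsourced postcode coordinates with Google Maps travel times. As a hedged stand-in (the Google Maps step is omitted), the sketch below estimates a population-weighted mean straight-line (haversine) distance from postcode centroids to a clinic; the coordinates and populations are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_distance(postcodes, clinic):
    """Population-weighted mean distance from postcode centroids to the clinic.
    postcodes: iterable of (lat, lon, population); clinic: (lat, lon)."""
    total_pop = sum(pop for _, _, pop in postcodes)
    return sum(pop * haversine_km(lat, lon, *clinic)
               for lat, lon, pop in postcodes) / total_pop

# Hypothetical municipality: two postcode centroids, clinic at the first one.
clinic = (60.39, 5.32)
avg_km = mean_distance([(60.39, 5.32, 500), (60.30, 5.20, 1000)], clinic)
```

In the paper's setting the haversine distance would be replaced by routed travel time; the weighting step is the same.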
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Research shows that brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we extracted features using strategies based on time and frequency domain characteristics, nonlinear and wavelet-based entropy measures, and a few statistical features. A deeper study was undertaken using machine learning classifiers while considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we varied the distance metric, neighbor weights, and number of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated with different ensemble methods and learning rates. For training/testing, tenfold cross-validation was employed, and performance was evaluated in terms of TPR, NPR, PPV, accuracy, and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with linear kernel and KNN with city-block distance metric gave the overall highest accuracy of 99.5%, higher than using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse-squared distance weighting gave higher performance across different numbers of neighbors. Finally, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using several machine learning classifiers.
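A minimal sketch of the best-performing KNN configuration reported above, city-block (L1) distance with inverse-squared distance weights, is given below. The toy data are illustrative points, not EEG features, and the implementation is a plain-Python stand-in for a library classifier:

```python
def knn_predict(train_X, train_y, x, k=3):
    """k-NN classifier with city-block (L1) distance and inverse-squared
    distance weights: closer neighbors get weight 1/d^2 in the vote."""
    nearest = sorted(
        (sum(abs(a - b) for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )[:k]
    votes = {}
    for d, label in nearest:
        votes[label] = votes.get(label, 0.0) + 1.0 / (d * d + 1e-12)  # guard d == 0
    return max(votes, key=votes.get)

# Two well-separated toy clusters.
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1]
```

The inverse-squared weighting is what lets the classifier down-weight borderline neighbors, which is consistent with the improvement over default (unweighted) KNN reported in the abstract.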
NASA Astrophysics Data System (ADS)
Gill, G.; Sakrani, T.; Cheng, W.; Zhou, J.
2017-09-01
Many studies have utilized the spatial correlations among traffic crash data to develop crash prediction models with the aim of investigating influential factors or predicting crash counts at different sites. The spatial correlations have been observed to account for heterogeneity through different forms of weight matrices, which improves the estimation performance of models. But the weight matrices have very rarely been compared for prediction accuracy in the estimation of crash counts. This study targeted the comparison of two different approaches for modelling the spatial correlations among crash data at the macro level (county). Multivariate full Bayesian crash prediction models were developed using Decay-50 (distance-based) and Queen-1 (adjacency-based) weight matrices for simultaneous estimation of crash counts for four different modes: vehicle, motorcycle, bike, and pedestrian. The goodness-of-fit and different criteria for crash count prediction accuracy revealed the superiority of Decay-50 over Queen-1. Decay-50 differed from Queen-1 in its selection of neighbors and its more robust spatial weight structure, which gave it the flexibility to accommodate spatially correlated crash data. The consistently better prediction accuracy of Decay-50 further bolstered its superiority. Although the data collection effort needed to gather centroid distances among counties for Decay-50 may appear to be a downside, the model has a significant edge in fitting the crash data without losing the simplicity of computing the estimated crash counts.
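The abstract does not fully specify the Decay-50 matrix; one plausible reading is a row-standardized inverse-distance weight matrix truncated at a 50-mile band. The sketch below builds such a matrix from hypothetical county centroids (coordinates in miles) and should be taken as an illustration of the distance-based idea, not the paper's exact specification:

```python
import math

def decay_weights(centroids, cutoff=50.0):
    """Row-standardized inverse-distance spatial weights, truncated at
    `cutoff` (same units as the coordinates). A plausible stand-in for a
    'Decay-50' matrix; the paper's exact form may differ."""
    n = len(centroids)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = math.dist(centroids[i], centroids[j])
            if d <= cutoff:
                w[i][j] = 1.0 / d  # nearer counties weigh more
        s = sum(w[i])
        if s > 0:
            w[i] = [x / s for x in w[i]]  # row-standardize
    return w

# Three hypothetical county centroids: two close together, one remote.
W = decay_weights([(0, 0), (10, 0), (200, 0)])
```

A Queen-1 matrix would instead set w[i][j] = 1 for counties sharing a border, which ignores how far apart neighboring centroids actually are.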
Al-Abadi, Alaa M; Shahid, Shamsuddin
2015-09-01
In this study, index of entropy and catastrophe theory methods were used for demarcating groundwater potential in an arid region using weighted linear combination techniques in a geographical information system (GIS) environment. A case study from the Badra area in the eastern part of central Iraq was analyzed and discussed. Six factors believed to influence groundwater occurrence, namely elevation, slope, aquifer transmissivity and storativity, soil, and distance to fault, were prepared as raster thematic layers to facilitate integration into the GIS environment. The factors were chosen based on the availability of data and the local conditions of the study area. Both techniques were used for computing the weights and assigning the ranks vital for applying the weighted linear combination approach. The results of applying both models indicated that the most influential groundwater occurrence factors were slope and elevation. The other factors have relatively smaller weights, implying that they play a minor role in groundwater occurrence. The groundwater potential index (GPI) values for both models were classified using the natural break classification scheme into five categories: very low, low, moderate, high, and very high. For validation of the generated GPI, relative operating characteristic (ROC) curves were used. According to the obtained area under the curve, the catastrophe model with 78% prediction accuracy was found to perform better than the entropy model with 77% prediction accuracy. The overall results indicated that both models have good capability for predicting groundwater potential zones.
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) was determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups: healthy subjects and patients with multiple sclerosis.
27 CFR 555.223 - Table of distances between fireworks process buildings and other specified areas.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Consumer Fireworks and Articles Pyrotechnic, Magazines and Fireworks Shipping Buildings, and Inhabited... permitted. 1 Net weight is the weight of all pyrotechnic compositions, and explosive materials and fuse only. 2 While consumer fireworks or articles pyrotechnic in a finished state are not subject to regulation...
27 CFR 555.223 - Table of distances between fireworks process buildings and other specified areas.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Consumer Fireworks and Articles Pyrotechnic, Magazines and Fireworks Shipping Buildings, and Inhabited... permitted. 1 Net weight is the weight of all pyrotechnic compositions, and explosive materials and fuse only. 2 While consumer fireworks or articles pyrotechnic in a finished state are not subject to regulation...
MacWilliams Identity for M-Spotty Weight Enumerator
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Fujiwara, Eiji
M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
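For reference, the classical special case the abstract mentions: for a binary linear code C of length n with dual code C⊥, the MacWilliams identity relates the Hamming weight enumerators of the two codes (wt denotes Hamming weight):

```latex
W_C(x, y) = \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)},
\qquad
W_{C^{\perp}}(x, y) = \frac{1}{\lvert C \rvert}\, W_C(x + y,\; x - y).
```

The m-spotty identity of the paper generalizes this transform to the m-spotty weight enumerator and reduces to it when t = 1 or t = b.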
Can hip arthroscopy be performed with conventional knee-length instrumentation?
Pascual-Garrido, Cecilia; McConkey, Mark O; Young, David A; Bravman, Jonathan T; Mei-Dan, Omer
2014-12-01
The purpose of this study was to determine whether hip arthroscopy can be performed using conventional knee-length arthroscopy instrumentation. We included 116 consecutive hip arthroscopies (104 patients) in this study. Age, side of surgery, height (in inches), weight (in pounds), body mass index (BMI), and a subjective assessment of body type (1, muscular; 2, somewhat overweight; 3, overweight; 4, thin; and 5, normal weight) were recorded. The depth from the skin at 2 portal sites to 3 commonly accessed positions (12 o'clock, 3 o'clock, and acetabular fossa) was assessed using a guide with marked notches (in millimeters). Subgroup analysis was performed according to BMI and subjective biotype for each patient. We included 104 patients with a mean age of 35 years (range, 14 to 55 years). As categorized by BMI, 60% of patients were normal weight, 22% were overweight, 16% were obese, and 2% were underweight. All but 8 procedures were performed with conventional knee-length arthroscopic shavers and burrs. The 8 procedures that needed additional hip instrumentation were performed in patients who required ligamentum teres debridement or those with iliopsoas tenotomy. Overall, the distance from skin to socket was less than 11 cm at the 12-o'clock and 3-o'clock positions from both the anterolateral and anterior portals. Obese and overweight patients had statistically longer distances from skin to socket at all 3 measurement points compared with underweight and normal-weight patients. Considering biotype, the distances from skin to socket in underweight, normal-weight, and muscular patients were all equal to or less than 10 cm. The distance from skin to socket at the 12- and 3-o'clock positions is less than 11 cm, suggesting that hip arthroscopy can be performed with conventional knee-length instrumentation devices. 
In obese and overweight patients and patients requiring ligamentum teres debridement or iliopsoas tendon release, specific hip arthroscopic tools should be available. Level IV, therapeutic case series. Copyright © 2014 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Hirschmann, Anna; Buck, Florian M; Herschel, Ramin; Pfirrmann, Christian W A; Fucentese, Sandro F
2017-03-01
To prospectively compare patellofemoral and tibiofemoral articulations in the upright weight-bearing position at different degrees of flexion using CT, in order to gain a more thorough understanding of the development of diseases of the knee joint in a physiological position. CT scans of the knee in 0°, 30°, and 60° flexion in the upright weight-bearing position, and in 120° flexion upright without weight-bearing, were obtained from 10 volunteers (mean age 33.7 ± 6.1 years; range 24-41) using a cone-beam extremity CT. Two independent readers quantified tibiofemoral and patellofemoral rotation, tibial tuberosity-trochlear groove distance (TTTG), and patellofemoral distance. Tibiofemoral contact points (CP) were assessed in relation to the anteroposterior distance of the tibial plateau. Significant differences between degrees of flexion were sought using the Wilcoxon signed-rank test (P < 0.05). With higher degrees of flexion, internal tibiofemoral rotation increased (0°/120° flexion; mean, 0.5° ± 4.5/22.4° ± 7.6); external patellofemoral rotation decreased (10.6° ± 7.6/1.6° ± 4.2); TTTG decreased (11.1 mm ± 3.7/-2.4 mm ± 6.4); and patellofemoral distance decreased (38.7 mm ± 3.0/21.0 mm ± 7.0). The CP shifted posteriorly, more pronounced laterally. Significant differences were found for all measurements at all degrees of flexion (P = 0.005-0.037), except between 30° and 60°. ICC was almost perfect (0.80-0.99), except for the assessment of the CP (0.20-0.96). Knee joint articulations change significantly during flexion under upright weight-bearing CT. Progressive internal tibiofemoral rotation leads to a decrease in the TTTG and a posterior shift of the contact points at higher degrees of flexion. This elucidates patellar malalignment predominantly close to extension and meniscal tears commonly affecting the posterior horns.
Comparing interpolation techniques for annual temperature mapping across Xinjiang region
NASA Astrophysics Data System (ADS)
Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang
2016-11-01
Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN), and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
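Of the five techniques, IDW is the simplest to state: the estimate at an unsampled point is a weighted average of station values, with weights decaying as an inverse power of distance. A minimal sketch with hypothetical stations (planar coordinates and temperature values):

```python
def idw(stations, target, power=2.0):
    """Inverse distance weighting: estimate the value at `target` from
    (x, y, value) station tuples, with weights d^(-power)."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return v  # target coincides with a station: exact interpolation
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den

# Two hypothetical stations; the midpoint gets the average of their values.
stations = [(0.0, 0.0, 10.0), (2.0, 0.0, 20.0)]
```

Kriging methods differ by replacing these fixed geometric weights with weights estimated from the data's spatial covariance, which is one reason ANUSPLIN and the kriging variants can outperform IDW on sparse networks.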
[A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].
Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng
2015-12-01
The distance metric is an important issue in spectroscopic survey data processing: it defines how the distance between two different spectra is calculated. On this basis, classification, clustering, parameter measurement, and outlier mining of spectral data can be carried out, so the choice of distance measure affects the performance of all of these tasks. With the development of large-scale stellar spectral sky surveys, how to define a more efficient distance metric on stellar spectra has become a very important issue in spectral data processing. Addressing this problem, and fully considering the characteristics and data features of stellar spectra, a new distance measure for stellar spectra named the Residual Distribution Distance is proposed. Unlike traditional distance metrics for stellar spectra, this method first normalizes the two spectra to the same scale, then calculates the residual at each corresponding wavelength, and uses the standard deviation of the residual spectrum as the distance measure. The method can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and so on. This paper takes stellar subcategory classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the gap between different types of spectra in classification more effectively than other methods, and that it can be well applied in related applications. This paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method.
The results show that the distance is affected by the SNR: the smaller the SNR, the greater its impact on the distance. When the SNR is larger than 10, however, it has little effect on classification performance.
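A minimal sketch of the Residual Distribution Distance as described above. The abstract does not specify the scaling step precisely; normalizing each spectrum to unit mean flux is assumed here as one plausible choice, and the flux arrays are toy values, not survey spectra:

```python
import statistics

def residual_distribution_distance(flux_a, flux_b):
    """Scale both spectra to the same scale (unit mean flux assumed here),
    take the per-wavelength residual, and return its standard deviation."""
    a = [f / (sum(flux_a) / len(flux_a)) for f in flux_a]
    b = [f / (sum(flux_b) / len(flux_b)) for f in flux_b]
    residual = [x - y for x, y in zip(a, b)]
    return statistics.pstdev(residual)
```

One consequence of the scaling step is that two spectra differing only by an overall flux factor have distance zero, so the measure responds to spectral shape rather than brightness.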
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
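The weighted singular-value thresholding step the abstract refers to can be sketched compactly: the proximal operator of a weighted nuclear norm soft-thresholds each singular value by its own weight. The example below is a generic sketch of that step on a toy matrix, not the paper's full graph-regularized model:

```python
import numpy as np

def weighted_svt(y, weights):
    """Weighted singular-value thresholding: shrink the i-th singular value
    of y by weights[i] (soft-thresholding), the proximal step for a
    weighted nuclear norm penalty."""
    u, s, vt = np.linalg.svd(y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)  # negative values clip to zero
    return u @ np.diag(s_shrunk) @ vt

# Toy example: penalize the small singular value heavily, keep the large one.
y = np.diag([5.0, 1.0])
denoised = weighted_svt(y, np.array([0.0, 2.0]))
```

Assigning larger weights to smaller singular values, as above, preserves the dominant structure of a patch group while suppressing the noise-dominated components, which is the behaviour the low-rank model exploits.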
The Tremaine-Weinberg Method for Pattern Speeds Using Hα Emission from Ionized Gas
NASA Astrophysics Data System (ADS)
Beckman, J. E.; Fathi, K.; Piñol, N.; Toonen, S.; Hernandez, O.; Carignan, C.
2008-10-01
The Fabry-Perot interferometer FaNTOmM was used at the 3.6-m CFHT and the 1.6-m Mont Mégantic Telescope to obtain data cubes in Hα of 9 nearby spiral galaxies, from which maps of integrated intensity, velocity, and velocity dispersion were derived. We then applied the Tremaine-Weinberg method, in which the pattern speed of a galaxy can be deduced from its velocity field: the intensity-weighted integral of the mean velocity along a slit parallel to the major axis is divided by the intensity-weighted mean distance of the velocity points from the tangent point measured along the slit. The measured variables can be used either to make separate calculations of the pattern speed and derive a mean, or in a plot of one against the other for all the points on all slits, from which a best-fit value can be derived. Linear fits were found for all the galaxies in the sample. For two galaxies, a clearly separate inner pattern speed with a higher value was also identified and measured.
Luo, Wei; Qi, Yi
2009-12-01
This paper presents an enhancement of the two-step floating catchment area (2SFCA) method for measuring spatial accessibility, addressing the problem of uniform access within the catchment by applying weights to different travel time zones to account for distance decay. The enhancement is proved to be another special case of the gravity model. When applying this enhanced 2SFCA (E2SFCA) to measure the spatial access to primary care physicians in a study area in northern Illinois, we find that it reveals spatial accessibility pattern that is more consistent with intuition and delineates more spatially explicit health professional shortage areas. It is easy to implement in GIS and straightforward to interpret.
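A minimal sketch of the enhanced 2SFCA: step 1 computes each provider's capacity-to-weighted-demand ratio; step 2 sums the weighted ratios reachable from each population point. The zone weights below (1.00, 0.68, 0.22 for 0-10, 10-20, 20-30 minute bands) are Gaussian-decay values of the kind the E2SFCA literature uses, and the populations and travel times are hypothetical:

```python
def e2sfca(pop_points, providers, travel_time,
           zones=((10, 1.00), (20, 0.68), (30, 0.22))):
    """Enhanced two-step floating catchment area.
    pop_points: {i: population}; providers: {j: capacity};
    travel_time: {(i, j): minutes}; zones: (cutoff, decay weight) bands."""
    def w(t):
        for cutoff, weight in zones:
            if t <= cutoff:
                return weight
        return 0.0  # outside the catchment

    # Step 1: weighted provider-to-population ratio R_j.
    r = {}
    for j, capacity in providers.items():
        demand = sum(p * w(travel_time[(i, j)]) for i, p in pop_points.items())
        r[j] = capacity / demand if demand else 0.0

    # Step 2: accessibility A_i = sum of weighted R_j within the catchment.
    return {i: sum(r[j] * w(travel_time[(i, j)]) for j in providers)
            for i in pop_points}
```

With uniform weights of 1 inside the catchment this reduces to the original 2SFCA, which is the "uniform access" problem the enhancement removes.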
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
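Pairing components by a weighted distance over several fitted parameters, rather than by center wavelength alone, can be sketched as below. The parameter names, values, and weights are hypothetical illustrations, not TheSIS data:

```python
def weighted_distance(a, b, weights):
    """Weighted Euclidean norm between two component parameter vectors."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)) ** 0.5

def best_match(reference, candidates, weights):
    """Pair the reference component with its nearest candidate under the
    weighted metric. `candidates` maps component IDs to parameter vectors."""
    return min(candidates,
               key=lambda name: weighted_distance(reference, candidates[name], weights))

# Hypothetical FBG parameters: (center wavelength nm, bandwidth nm, thermal slope pm/K).
ref = (1550.10, 0.20, 9.8)
cands = {"FBG-7": (1550.12, 0.21, 9.9), "FBG-3": (1550.11, 0.35, 12.0)}
match = best_match(ref, cands, weights=(1.0, 10.0, 5.0))
```

Here FBG-3 is actually closer in center wavelength, but the weighted metric prefers FBG-7 because its bandwidth and thermal slope match far better, which is precisely the point of multi-parameter pairing.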
Small arms mini-fire control system: fiber-optic barrel deflection sensor
NASA Astrophysics Data System (ADS)
Rajic, S.; Datskos, P.; Lawrence, W.; Marlar, T.; Quinton, B.
2012-06-01
Traditionally, methods to increase firearm accuracy, particularly at distance, have concentrated on barrel isolation (free floating) and substantial barrel wall thickening to gain rigidity. This barrel stiffening technique did not completely eliminate barrel movement, but it reduced the problem enough to allow a noticeable accuracy enhancement. The process, although highly successful, came at a very high weight penalty. Obviously the goal would be to lighten the barrel (firearm), yet achieve even greater accuracy. Thus, if lightweight barrels could be compensated for both their static and dynamic mechanical perturbations, the result would be very accurate, yet significantly lighter, weapons. We discuss our development of a barrel reference sensor system designed to accomplish this ambitious goal. Our optical fiber-based sensor monitors the barrel muzzle position and autonomously compensates for any induced perturbations. The reticle is electronically adjusted in position to compensate for the induced barrel deviation in real time.
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Miller, C. M.
1976-01-01
Rankings of relative toxicity can be markedly affected by changes in test variables. Revising the USF/NASA toxicity screening test procedure to eliminate the connecting tube and supporting floor and to incorporate a 1.0 g sample weight, 200 °C starting temperature, and 800 °C upper-limit pyrolysis temperature reversed the rankings of flexible polyurethane and polychloroprene foams, not only in relation to each other, but also in relation to cotton and red oak. Much of the change is attributed to reduction of the distance between the sample and the test animals, and reduction of the sample weight charged. Elimination of the connecting tube increased the relative toxicity of the polyurethane foams. The materials tested were flexible polyurethane foam, without and with fire retardant; rigid polyurethane foam with fire retardant; flexible polychloroprene foam; cotton, Douglas fir, red oak, hemlock, hardboard, particle board, polystyrene, and polymethyl methacrylate.
Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering
NASA Astrophysics Data System (ADS)
Jiang, Lu; Piao, Yan
2018-04-01
The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure function. The reference image is then layered and the parallax calculated based on the depth information. Using the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, with impressive rendering speed. The results show that the method achieves satisfactory image quality: relative to real viewpoint images, the average SSIM reaches 0.9525, the average PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
Impact of contact lens zone geometry and ocular optics on bifocal retinal image quality
Bradley, Arthur; Nam, Jayoung; Xu, Renfeng; Harman, Leslie; Thibos, Larry
2014-01-01
Purpose To examine the separate and combined influences of zone geometry, pupil size, diffraction, apodisation and spherical aberration on the optical performance of concentric zonal bifocals. Methods Zonal bifocal pupil functions representing eye + ophthalmic correction were defined by interleaving wavefronts from separate optical zones of the bifocal. A two-zone design (a central circular inner zone surrounded by an annular outer zone bounded by the pupil) and a five-zone design (a central small circular zone surrounded by four concentric annuli) were configured with programmable zone geometry, wavefront phase and pupil transmission characteristics. Using computational methods, we examined the effects of diffraction, Stiles-Crawford apodisation, pupil size and spherical aberration on optical transfer functions for different target distances. Results Apodisation alters the relative weighting of each zone, and thus the balance of near and distance optical quality. When spherical aberration is included, the effective distance correction, add power and image quality depend on zone geometry and Stiles-Crawford apodisation. When the outer zone width is narrow, diffraction limits the available image contrast when focused, but as the pupil dilates and outer zone width increases, aberrations will limit the best achievable image quality. With two-zone designs, balancing near and distance image quality is not achieved with equal-area inner and outer zones. With significant levels of spherical aberration, multi-zone designs effectively become multifocals. Conclusion Wave optics and pupil-varying ocular optics significantly affect the imaging capabilities of different optical zones of concentric bifocals. With two-zone bifocal designs, diffraction, pupil apodisation, spherical aberration, and zone size influence both the effective add power and the pupil size required to balance near and distance image quality.
Five-zone bifocal designs achieve a high degree of pupil size independence, and thus will provide more consistent performance as pupil size varies with light level and convergence amplitude. PMID:24588552
The performance evaluation of a jet flap on an advanced supersonic harrier
NASA Technical Reports Server (NTRS)
Lipera, L. D.; Sandlin, D. R.
1984-01-01
The performance of a supersonic vertical and short takeoff and landing (V/STOL) fighter concept, model 279-3, modified to utilize a jet flap, was evaluated. Replacing the rear nozzles of the 279-3 with the jet flap favorably alters the pressure distribution over the airfoil and dramatically increases lift. The result is a significant decrease in takeoff distance, an increase in payload, and an improvement in combat performance. To investigate the benefit of increased payload, the 279-3 and the jet-flapped 279-3JF were modeled with the NASA Aircraft Synthesis (ACSYNT) computer code and flown on a 250-foot takeoff distance interdiction mission. The increase in payload weight that the 279-3JF could carry was converted into fuel in one case and into bomb load in another. When the fuel was increased, the 279-3JF penetrated into enemy territory almost four times the distance of the 279-3, and therefore increased mission capability. When the bomb load was increased, the 279-3JF carried 14 bombs the same distance the 279-3 carried four. The increase in mission performance and the improvement in turning rates were realized with only a small penalty in increased empty weight.
Chen, H; Li, Z; Bu, S H; Tian, Z Q
2011-02-01
The flight distance, flight time and individual flight activities of males and females of Dendroctonus armandi were recorded during 96-h flight trials using a flight mill system. The body weight, glucose, glycogen and lipid content of four treatments (naturally emerged, starved, phloem-fed and water-fed) were compared among pre-flight, post-flight and unflown controls. There was no significant difference between males and females in total flight distance and flight time in a given 24-h period. The flight distance and flight time of females showed a significant linear decline as the tethered flying continued, but the sustained flight ability of females was better than that of males. The females had higher glycogen and lipid content than the males; however, there was no significant difference between both sexes in glucose content. Water-feeding and phloem-feeding had significant effects on longevity, survival days and flight potential of D. armandi, which resulted in longer feeding days, poorer flight potential and lower energy substrate content. Our results demonstrate that flight distances in general do not differ between water-fed and starved individuals, whereas phloem-fed females and males fly better than water-fed and starved individuals.
Busettini, C; Miles, F A; Schwarz, U; Carl, J R
1994-01-01
Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm--the size of the projected scene--which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). 
When movements of the subject were randomly interleaved with the movements of the scene--to encourage the expectation of ego-motion--the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.
2013-01-01
Background T2-weighted cardiovascular magnetic resonance (CMR) is clinically-useful for imaging the ischemic area-at-risk and amount of salvageable myocardium in patients with acute myocardial infarction (MI). However, to date, quantification of oedema is user-defined and potentially subjective. Methods We describe a highly automatic framework for quantifying myocardial oedema from bright blood T2-weighted CMR in patients with acute MI. Our approach retains user input (i.e. clinical judgment) to confirm the presence of oedema on an image which is then subjected to an automatic analysis. The new method was tested on 25 consecutive acute MI patients who had a CMR within 48 hours of hospital admission. Left ventricular wall boundaries were delineated automatically by variational level set methods followed by automatic detection of myocardial oedema by fitting a Rayleigh-Gaussian mixture statistical model. These data were compared with results from manual segmentation of the left ventricular wall and oedema, the current standard approach. Results The mean perpendicular distances between automatically detected left ventricular boundaries and corresponding manual delineated boundaries were in the range of 1-2 mm. Dice similarity coefficients for agreement (0=no agreement, 1=perfect agreement) between manual delineation and automatic segmentation of the left ventricular wall boundaries and oedema regions were 0.86 and 0.74, respectively. Conclusion Compared to standard manual approaches, the new highly automatic method for estimating myocardial oedema is accurate and straightforward. It has potential as a generic software tool for physicians to use in clinical practice. PMID:23548176
Talker Localization Based on Interference between Transmitted and Reflected Audible Sound
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji
In many engineering fields, distance to targets is very important. General distance measurement methods use the time delay between transmitted and reflected waves, but it is difficult to estimate short distances this way. On the other hand, a method using phase interference to measure short distances has been known in the field of microwave radar. Therefore, we have proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between microphone and target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method using phase interference to two microphones (a microphone array) in order to estimate talker position. The proposed method can estimate talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose a combination of the proposed method and the CSP (Cross-power Spectrum Phase analysis) method, which is one of the DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
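The ranging principle the abstract describes can be sketched simply: a transmitted tone and its reflection interfere, and the resulting power spectrum shows nulls spaced by Δf = c/(2d), so the target distance is d = c/(2Δf). The values below are illustrative, not from the paper.

```python
# Hedged sketch of phase-interference ranging: recover distance from the
# spacing of spectral interference nulls. Null frequencies are assumed inputs.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_from_null_spacing(null_freqs_hz):
    """Estimate target distance from the average spacing of interference nulls."""
    spacings = [b - a for a, b in zip(null_freqs_hz, null_freqs_hz[1:])]
    delta_f = sum(spacings) / len(spacings)
    return SPEED_OF_SOUND / (2.0 * delta_f)

# A target 0.5 m away produces nulls every 343 / (2 * 0.5) = 343 Hz.
nulls = [343.0 * k for k in range(1, 6)]
print(round(distance_from_null_spacing(nulls), 3))  # 0.5
```

This is why the approach suits short ranges: a small distance gives a wide, easily measured null spacing, whereas time-of-flight methods struggle when the echo delay shrinks toward zero.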
2011-01-01
Background This paper analyses the relationship between public perceptions of access to general practitioners (GPs) surgeries and hospitals against health status, car ownership and geographic distance. In so doing it explores the different dimensions associated with facility access and accessibility. Methods Data on difficulties experienced in accessing health services, respondent health status and car ownership were collected through an attitudes survey. Road distances to the nearest service were calculated for each respondent using a GIS. Difficulty was related to geographic distance, health status and car ownership using logistic generalized linear models. A Geographically Weighted Regression (GWR) was used to explore the spatial non-stationarity in the results. Results Respondent long term illness, reported bad health and non-car ownership were found to be significant predictors of difficulty in accessing GPs and hospitals. Geographic distance was not a significant predictor of difficulty in accessing hospitals but was for GPs. GWR identified the spatial (local) variation in these global relationships indicating locations where the predictive strength of the independent variables was higher or lower than the global trend. The impacts of bad health and non-car ownership on the difficulties experienced in accessing health services varied spatially across the study area, whilst the impacts of geographic distance did not. Conclusions Difficulty in accessing different health facilities was found to be significantly related to health status and car ownership, whilst the impact of geographic distance depends on the service in question. GWR showed how these relationships were varied across the study area. This study demonstrates that the notion of access is a multi-dimensional concept, whose composition varies with location, according to the facility being considered and the health and socio-economic status of the individual concerned. PMID:21787394
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (EC a ) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk EC a survey, has been applied in an agricultural field in Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid EC a data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time.
The use of bulk EC a gradient as an exhaustive variable, known at any node of an interpolation grid, has allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
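The MMSD criterion described above can be sketched in a few lines: it is the mean, over a fine evaluation grid, of the distance from each grid point to its nearest sample. Spatial simulated annealing would then perturb sample locations to minimize this quantity; the grid and sample coordinates below are invented for illustration.

```python
# Minimal sketch of the MMSD (mean of the shortest distances) criterion.
import math

def mmsd(samples, grid):
    """Mean over grid points of the distance to the nearest sample location."""
    total = 0.0
    for gx, gy in grid:
        total += min(math.hypot(gx - sx, gy - sy) for sx, sy in samples)
    return total / len(grid)

grid = [(x, y) for x in range(5) for y in range(5)]
# A central sample spreads coverage over the field better than a corner one,
# so it yields a smaller MMSD.
print(mmsd([(2, 2)], grid) < mmsd([(0, 0)], grid))  # True
```

The weighted variant (MWMSD) would multiply each grid point's shortest distance by a weight derived from the EC a gradient before averaging, concentrating samples where the field varies most.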
Latour, Ewa; Latour, Marek; Arlet, Jarosław; Adach, Zdzisław; Bohatyrewicz, Andrzej
2011-07-01
Analysis of pedobarographical data requires geometric identification of specific anatomical areas extracted from recorded plantar pressures. This approach has led to ambiguity in measurements that may underlie the inconsistency of conclusions reported in pedobarographical studies. The goal of this study was to design a new analysis method less susceptible to the projection accuracy of anthropometric points and distance estimation, based on rarely used spatio-temporal indices. Six pedobarographic records per person (three per foot) from a group of 60 children aged 11-12 years were obtained and analyzed. The basis of the analysis was a mutual relationship between two spatio-temporal indices created by excursion of the peak pressure point and the center-of-pressure point on the dynamic pedobarogram. Classification of weight-shift patterns was elaborated and performed, and their frequencies of occurrence were assessed. This new method allows an assessment of body weight shift through the plantar pressure surface based on distribution analysis of spatio-temporal indices not affected by the shape of this surface. Analysis of the distribution of the created index confirmed the existence of typical ways of weight shifting through the plantar surface of the foot during gait, as well as large variability of the intrasubject occurrence. This method may serve as the basis for interpretation of foot functional features and may extend the clinical usefulness of pedobarography. Copyright © 2011 Elsevier B.V. All rights reserved.
Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian
2017-01-01
The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodules positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
NASA Astrophysics Data System (ADS)
Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang
2013-04-01
Objective. The high-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activities relate to the execution of a certain movement task. This paper proposes to use a local-learning-based method to perform neuron selection for the gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced to the objective function for sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70 selected by the proposed method could achieve over 95% of the full recording's decoding accuracy of gesture predictions, no matter which different decoding methods are used (support vector machine and K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method can better eliminate the irrelevant neurons with near-zero weights and provides the important neuron subset with the best decoding performance in statistics. The weights of important neurons converge usually within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance along a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. 
The proposed algorithm effectively ascertains the neuronal importance without assuming any coding model and provides a high performance with different decoding models. It shows better robustness of identifying the important neurons with noisy signals presented. The low demand of computational resources which, reflected by the fast convergence, indicates the feasibility of the method applied in portable BMI systems. The ascertainment of the important neurons helps to inspect neural patterns visually associated with the movement task. The elimination of irrelevant neurons greatly reduces the computational burden of mBMI systems and maintains the performance with better robustness.
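The margin idea in the neuron-selection abstract can be illustrated with a toy version: for a sample, the margin is the weighted distance to its nearest different-class neighbour minus the weighted distance to its nearest same-class neighbour. Weights that enlarge margins mark important neurons; near-zero weights mark irrelevant ones. This is a hedged sketch of the margin computation only, not the paper's exponential error function or its 1-norm optimization, and all data values are invented.

```python
# Toy margin computation in a weighted feature space (two "neurons").

def weighted_dist(a, b, w):
    return sum(wi * abs(ai - bi) for ai, bi, wi in zip(a, b, w))

def margin(x, label, data, labels, w):
    """Nearest other-class distance minus nearest same-class distance."""
    same = min(weighted_dist(x, d, w) for d, l in zip(data, labels)
               if l == label and d is not x)
    other = min(weighted_dist(x, d, w) for d, l in zip(data, labels) if l != label)
    return other - same

# Neuron 1 separates the classes; neuron 2 is noise.
data = [(0.0, 0.3), (0.1, 0.9), (1.0, 0.2), (0.9, 0.8)]
labels = [0, 0, 1, 1]
informative = (1.0, 0.0)   # weight only the informative neuron
noisy = (0.0, 1.0)         # weight only the noisy neuron
m_info = margin(data[0], labels[0], data, labels, informative)
m_noise = margin(data[0], labels[0], data, labels, noisy)
print(m_info > m_noise)  # True: the informative neuron yields the larger margin
```

Minimizing a margin-based error over the weight vector, with a 1-norm penalty, drives weights like `noisy` toward zero, which is the selection mechanism the abstract describes.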
NASA Astrophysics Data System (ADS)
Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.
2017-12-01
The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described for the problem of processing hyperspectral airborne and spaceborne imagery. The model is based on improvements of the Bayesian classifier, a computational procedure for statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples are shown for various modifications of the Bayesian classifier and the Support Vector Machine method. Examples are provided comparing these classifiers with a metrical classifier that operates by finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.
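The metrical classifier mentioned above reduces, in its simplest form, to a nearest-centroid rule: assign each feature vector to the class whose mean is nearest in Euclidean distance. The sketch below uses invented class centroids for illustration.

```python
# Minimal minimum-Euclidean-distance (nearest-centroid) classifier sketch.
import math

def nearest_centroid(x, centroids):
    """Return the class label whose centroid is closest to feature vector x."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

centroids = {"forest": (0.2, 0.8), "bare_soil": (0.7, 0.3)}
print(nearest_centroid((0.25, 0.75), centroids))  # forest
```

A Bayesian classifier with equal priors and identical isotropic class covariances reduces to exactly this rule, which is why the two are natural comparison baselines.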
Model of Decision Making through Consensus in Ranking Case
NASA Astrophysics Data System (ADS)
Tarigan, Gim; Darnius, Open
2018-01-01
The basic problem in determining a ranking consensus is to combine several rankings decided by two or more Decision Makers (DMs) into a single consensus ranking. DMs are frequently asked to present their preferences over a group of objects in terms of ranks, for example to choose a new project, a new product, a candidate in an election, and so on. Problems in ranking can be classified into two major categories, namely cardinal and ordinal rankings. The objective of the study is to obtain the ranking consensus by applying some algorithms and methods. The algorithms and methods used in this study were the partial algorithm, optimal ranking consensus, and the BAK (Borda-Kendall) model. A method proposed as an alternative for ranking consensus is the Weighted Distance Forward-Backward (WDFB) method, which gave a slightly different consensus ranking compared to the result of the example solved by Cook et al. (2005).
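A Borda-style consensus, the simplest member of the family of methods the abstract discusses, can be sketched as follows: each object scores the sum of its rank positions across all DMs, and the consensus orders objects by ascending total (lower is preferred). This is a generic illustration, not the WDFB method itself.

```python
# Borda-style consensus over the full rankings of several decision makers.

def borda_consensus(rankings):
    """rankings: list of lists, each a full ranking from best to worst."""
    score = {}
    for ranking in rankings:
        for position, obj in enumerate(ranking):
            score[obj] = score.get(obj, 0) + position
    # Ties are broken alphabetically for determinism.
    return sorted(score, key=lambda o: (score[o], o))

dms = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_consensus(dms))  # ['A', 'B', 'C']
```

Distance-based methods such as WDFB instead search for the ranking minimizing a (weighted) total distance to the DMs' rankings, which can disagree with the Borda order when weights emphasize particular rank positions.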
NASA Astrophysics Data System (ADS)
Lee, Daeho; Lee, Seohyung
2017-11-01
We propose an image stitching method that can remove ghost effects and realign the structure misalignments that occur in common image stitching methods. To reduce the artifacts caused by different parallaxes, an optimal seam pair is selected by comparing the cross correlations from multiple seams detected by variable cost weights. Along the optimal seam pair, a histogram of oriented gradients is calculated, and feature points for matching are detected. The homography is refined using the matching points, and the remaining misalignment is eliminated using the propagation of deformation vectors calculated from matching points. In multiband blending, the overlapping regions are determined from a distance between the matching points to remove overlapping artifacts. The experimental results show that the proposed method more robustly eliminates misalignments and overlapping artifacts than the existing method that uses single seam detection and gradient features.
NREL Highlight: Truck Platooning Testing; NREL (National Renewable Energy Laboratory)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
NREL's fleet test and evaluation team assesses the fuel savings potential of semi-automated truck platooning of line-haul sleeper cabs with modern aerodynamics. Platooning reduces aerodynamic drag by grouping vehicles together and safely decreasing the distance between them via electronic coupling, which allows multiple vehicles to accelerate or brake simultaneously. In 2014, the team conducted track testing of three SmartWay tractors (two platooned tractors and one control tractor) at varying steady-state speeds, following distances, and gross vehicle weights. While platooning improved fuel economy at all speeds, travel at 55 mph resulted in the best overall miles per gallon. The lead truck demonstrated fuel savings up to 5.3% while the trailing truck saved up to 9.7%. A number of conditions impact the savings attainable, including ambient temperature, distance between lead and trailing truck, and payload weight. Future studies may look at ways to optimize system fuel efficiency and emissions reductions.
Two-generation reproductive toxicity study of tributyltin chloride in female rats.
Ogata, R; Omura, M; Shimasaki, Y; Kubo, K; Oshima, Y; Aou, S; Inoue, N
2001-05-25
A two-generation reproductive toxicity study of the effects of tributyltin chloride (TBTCl) was conducted in female rats using dietary concentrations of 5, 25, and 125 ppm TBTCl. Reproductive outcomes of dams (number and body weight of pups and the percentage of live pups) and the growth of female pups (the day of eye opening and body weight gain) were significantly decreased in the 125 ppm TBTCl group. A delay in vaginal opening and impaired estrous cyclicity were also observed in the 125 ppm TBTCl group. However, an increase in anogenital distance was found in all TBTCl groups on postnatal d 1. A dose-effect relationship was observed in TBTCl-induced changes in anogenital distance. These results indicate that the whole-life exposure to TBTCl affects the sexual development and reproductive function of female rats. In addition, the TBTCl-induced increase in anogenital distance seems to suggest it may exert a masculinizing effect on female neonates. However, the concentrations of TBTCl used in this study are not environmentally relevant.
Trope, Yaacov; Liberman, Nira
2009-01-01
Building on the assumption that interpersonal similarity is a form of social distance, the current research examines the manner in which similarity influences the representation and judgment of others' actions. On the basis of a construal level approach, we hypothesized that greater levels of similarity would increase the relative weight of subordinate and secondary features of information in judgments of others' actions. The results of four experiments showed that compared to corresponding judgments of a dissimilar target, participants exposed to a similar target person identified that person's actions in relatively more subordinate means-related rather than superordinate ends-related terms (Experiment 1), perceived his or her actions to be determined more by feasibility and less by desirability concerns (Experiment 3), and gave more weight to secondary aspects in judgments of the target's decisions (Experiment 2) and performance (Experiment 4). Implications for the study of interpersonal similarity, as well as social distance in general, are discussed. PMID:19352440
FOOD SHOPPING BEHAVIORS AND SOCIOECONOMIC STATUS INFLUENCE OBESITY RATES IN SEATTLE AND IN PARIS
Drewnowski, Adam; Moudon, Anne Vernez; Jiao, Junfeng; Aggarwal, Anju; Charreire, Helene; Chaix, Basile
2014-01-01
Objective To compare the associations between food environment at the individual level, socioeconomic status (SES) and obesity rates in two cities: Seattle and Paris. Methods Analyses of the SOS (Seattle Obesity Study) were based on a representative sample of 1340 adults in metropolitan Seattle and King County. The RECORD (Residential Environment and Coronary Heart Disease) cohort analyses were based on 7,131 adults in central Paris and suburbs. Data on socio-demographics, health and weight were obtained from a telephone survey (SOS) and from in-person interviews (RECORD). Both studies collected data on and geocoded home addresses and food shopping locations. Both studies calculated GIS network distances between home and the supermarket that study respondents listed as their primary food source. Supermarkets were further stratified into three categories by price. Modified Poisson regression models were used to test the associations among food environment variables, SES and obesity. Results Physical distance to supermarkets was unrelated to obesity risk. By contrast, lower education and incomes, lower surrounding property values, and shopping at lower-cost stores were consistently associated with higher obesity risk. Conclusion Lower SES was linked to higher obesity risk in both Paris and Seattle, despite differences in urban form, the food environments, and in the respective systems of health care. Cross-country comparisons can provide new insights into the social determinants of weight and health. PMID:23736365
Broad phonetic class definition driven by phone confusions
NASA Astrophysics Data System (ADS)
Lopes, Carla; Perdigão, Fernando
2012-12-01
Intermediate representations between the speech signal and phones may be used to improve discrimination among phones that are often confused. These representations are usually found according to broad phonetic classes, which are defined by a phonetician. This article proposes an alternative data-driven method to generate these classes. Phone confusion information from the analysis of the output of a phone recognition system is used to find clusters at high risk of mutual confusion. A metric is defined to compute the distance between phones. The results, using TIMIT data, show that the proposed confusion-driven phone clustering method is an attractive alternative to the approaches based on human knowledge. A hierarchical classification structure to improve phone recognition is also proposed using a discriminative weight training method. Experiments show improvements in phone recognition on the TIMIT database compared to a baseline system.
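The confusion-driven clustering described above can be sketched by turning a phone confusion matrix into a distance (more mutual confusion means a smaller distance) and merging the closest phones. The matrix values below are invented for illustration; they are not TIMIT results.

```python
# Hedged sketch of confusion-driven phone distance and greedy pairing.

def confusion_distance(conf, p, q):
    """Symmetric distance: high mutual confusion yields a low distance."""
    mutual = conf[p].get(q, 0) + conf[q].get(p, 0)
    total = sum(conf[p].values()) + sum(conf[q].values())
    return 1.0 - mutual / total

# Rows: recognized counts for each true phone (illustrative values).
conf = {
    "b": {"b": 80, "p": 15, "s": 1},
    "p": {"p": 75, "b": 20, "s": 2},
    "s": {"s": 90, "b": 1, "p": 2},
}
phones = sorted(conf)
pairs = [(p, q) for i, p in enumerate(phones) for q in phones[i + 1:]]
closest = min(pairs, key=lambda pq: confusion_distance(conf, *pq))
print(closest)  # ('b', 'p'): the frequently confused stops cluster first
```

Repeating the merge step agglomeratively yields data-driven broad classes, replacing the phonetician-defined groupings the article argues against.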
Komaromy, Andras M; Brooks, Dennis E; Kallberg, Maria E; Dawson, William W; Sapp, Harold L; Sherwood, Mark B; Lambrou, George N; Percicot, Christine L
2003-05-01
The purpose of our study was to determine changes in amplitudes and implicit times of retinal and cortical pattern evoked potentials with increasing body weight in young, growing rhesus macaques (Macaca mulatta). Retinal and cortical pattern evoked potentials were recorded from 29 male rhesus macaques between 3 and 7 years of age. Thirteen animals were reexamined after 11 months. Computed tomography (CT) was performed on two animals to measure the distance between the location of the skin electrode and the surface of the striate cortex. Spearman correlation coefficients were calculated to describe the relationship between body weights and either root mean square (rms) amplitudes or implicit times. For 13 animals rms amplitudes and implicit times were compared with the Wilcoxon matched pairs signed rank test for recordings taken 11 months apart. Highly significant correlations between increases in body weights and decreases in cortical rms amplitudes were noted in 29 monkeys (p < 0.0005). No significant changes were found in the cortical rms amplitudes in thirteen monkeys over 11 months. Computed tomography showed a large increase of soft tissue thickness over the skull and striate cortex with increased body weight. The decreased amplitude in cortical evoked potentials with weight gain associated with aging can be explained by the increased distance between skin electrode and striate cortex due to soft tissue thickening (passive attenuation).
Why even active people get fatter--the asymmetric effects of increasing and decreasing exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T.
2006-01-06
Background: Public health policies for preventing obesity need guidelines for active individuals who are at risk due to exercise recidivism. Methods: Changes in adiposity were compared to the running distances at baseline and follow-up in men and women whose reported exercise increased (N=4,632 and 1,953, respectively) or decreased (17,280 and 5,970, respectively) during 7.7 years of follow-up. Results: Per Delta km/wk, decreases in running distance caused over four-fold greater weight gain between 0-8 km/wk (slope±SE, males: -0.068±0.005 kg/m2, females: -0.080±0.01 kg/m2) than between 32-48 km/wk (-0.017±0.002 and -0.010±0.005 kg/m2, respectively). In contrast, increases in running distance produced the smallest weight losses between 0-8 km/wk and statistically significant weight loss only above 16 km/wk in males and 32 km/wk in females. Above 32 km/wk (30 kcal/kg) in men and 16 km/wk (15 kcal/kg) in women, weight loss from increasing exercise was equal to or greater than weight gained with decreasing exercise; otherwise weight gain exceeded weight loss. Substantial weight gain occurred in runners who quit running, which would be mostly retained with resumed activity. Conclusion: Public health recommendations should warn against the risks of irreversible weight gain with exercise cessation. Weight gained due to reductions in exercise below 30 kcal/kg in men and 15 kcal/kg in women may not be reversed by resuming prior activity. Current IOM guidelines (i.e., maintain total energy expenditure at 160 percent of basal) agree with the men's exercise threshold for symmetric weight change with changing exercise levels.
A Sequential Ensemble Prediction System at Convection Permitting Scales
NASA Astrophysics Data System (ADS)
Milan, M.; Simmer, C.
2012-04-01
A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasting. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state space evolution due to convectively driven processes. One way to take full account of nonlinear state developments is particle filter methods; their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, particle filtering with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of the weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of resampling that takes account of simulated state space clustering and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space region with higher likelihood at a subsequent filter time, thus mimicking nonlinear system state developments (e.g. sudden convection initiation), and that this remedies timing errors for convection due to model errors and/or imperfect initial conditions.
We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated. For the model evolution, one particle of each pair evolves using the forward model; the second particle, however, is nudged toward the radar and satellite observations during its evolution based on the forward model.
Registration of High Angular Resolution Diffusion MRI Images Using 4th Order Tensors
Barmpoutis, Angelos; Vemuri, Baba C.; Forder, John R.
2009-01-01
Registration of Diffusion Weighted (DW)-MRI datasets has been commonly achieved to date in literature by using either scalar or 2nd-order tensorial information. However, scalar or 2nd-order tensors fail to capture complex local tissue structures, such as fiber crossings, and therefore, datasets containing fiber-crossings cannot be registered accurately by using these techniques. In this paper we present a novel method for non-rigidly registering DW-MRI datasets that are represented by a field of 4th-order tensors. We use the Hellinger distance between the normalized 4th-order tensors represented as distributions, in order to achieve this registration. Hellinger distance is easy to compute, is scale and rotation invariant and hence allows for comparison of the true shape of distributions. Furthermore, we propose a novel 4th-order tensor re-transformation operator, which plays an essential role in the registration procedure and shows significantly better performance compared to the re-orientation operator used in literature for DTI registration. We validate and compare our technique with other existing scalar image and DTI registration methods using simulated diffusion MR data and real HARDI datasets. PMID:18051145
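The Hellinger distance used above is easiest to see for discrete probability distributions; the paper applies it to normalized 4th-order tensors treated as distributions, but this simplified discrete form conveys the same idea and the same invariance to overall scale after normalization.

```python
# Hellinger distance between two discrete probability distributions.
import math

def hellinger(p, q):
    """Hellinger distance: sqrt(sum (sqrt(p_i) - sqrt(q_i))^2) / sqrt(2)."""
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2)

p = [0.5, 0.3, 0.2]
print(hellinger(p, p))                               # 0.0 for identical inputs
print(0.0 <= hellinger(p, [0.2, 0.3, 0.5]) <= 1.0)   # True: bounded in [0, 1]
```

Because the distance depends only on the square roots of the normalized values, it compares the shapes of the distributions, which is the property the registration method exploits.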
MASTtreedist: visualization of tree space based on maximum agreement subtree.
Huang, Hong; Li, Yongji
2013-01-01
The phylogenetic tree construction process may produce many candidate trees as "best estimates." As the number of constructed phylogenetic trees grows, the need to efficiently compare their topological or physical structures arises. One tree-comparison software tool, Mesquite's Tree Set Viz module, allows rapid and efficient visualization of tree-comparison distances using multidimensional scaling (MDS). Tree-distance measures for the topological distance among different trees, such as Robinson-Foulds (RF), have been implemented in Tree Set Viz. More sophisticated measures, such as the Maximum Agreement Subtree (MAST), can also be built on top of Tree Set Viz. MAST can detect the common substructures among trees and provide more precise information on tree similarity, but it is NP-hard and difficult to implement. In this article, we present a practical tree-distance metric: MASTtreedist, a MAST-based comparison metric in Mesquite's Tree Set Viz module. In this metric, efficient optimizations for the maximum weight clique problem are applied. The results suggest that the proposed method can efficiently compute the MAST distances among trees, and that such tree topological differences can be rendered as a scatter of points in two-dimensional (2D) space. We also provide a statistical evaluation of the proposed measure with respect to RF using experimental data sets. This new comparison module provides a tree-tree pairwise comparison metric based on differences in the number of MAST leaves among constructed phylogenetic trees. Such a metric improves the visualization of taxa differences by discriminating small divergences of subtree structures in phylogenetic tree reconstruction.
Nearest neighbor imputation using spatial–temporal correlations in wireless sensor networks
Li, YuanYuan; Parker, Lynne E.
2016-01-01
Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network’s performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. 
Our experimental results show that our proposed k-NN imputation method has competitive accuracy with state-of-the-art Expectation–Maximization (EM) techniques, while using much simpler computational techniques, thus making it suitable for use in resource-constrained WSNs. PMID:28435414
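A minimal sketch of a missing-data-aware weighted Euclidean distance of the kind described above. The weighting w_d = 1 - missing_frac_d is an illustrative assumption, not necessarily the paper's exact formula.

```python
import numpy as np

def weighted_euclidean(x, y, missing_frac):
    """Euclidean distance in which each dimension is down-weighted by the
    measured fraction of missing readings in that dimension (hypothetical
    weighting: w_d = 1 - missing_frac_d). Dimensions that are always
    missing contribute nothing to the distance."""
    w = 1.0 - np.asarray(missing_frac, dtype=float)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.sqrt(np.sum(w * d * d))
```

With no missing data the function reduces to the ordinary Euclidean distance, so the modified kd-tree search degrades gracefully.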
Gregoski, Mathew J; Newton, Janis; Ling, Catherine G; Blaylock, Kathleen; Smith, Sheila A O; Paguntalan, John; Treiber, Frank A
2016-04-06
This pilot study investigated the effectiveness of a distance-based e-health program, delivered across multiple rural Federal Credit Union worksites, that focused on physical activity and dietary education. Program design and implementation were based on the premises of Social Impact Theory (SIT). A sample of fifty-four participants (47 white, 7 black) aged 24 to 58 across different worksite locations completed 10 weeks of e-health-delivered physical activity and dietary intervention. Pre- to post-intervention weight changes were examined as the primary outcome. The findings showed that, regardless of worksite location, participants on average reduced their weight by 10.13 lbs if they completed both the exercise and lunch-and-learn components of the study, compared to a decrease of 2.73 lbs for participants who chose not to engage in the exercise-related activities. Participant dropout from either group was less than four percent. The results of this study show the beneficial influence of integrating physical activity, guided by SIT, into distance programs targeting weight loss. In addition, the high adherence and weight-loss success demonstrate the potential of e-health-delivered exercise and lifestyle interventions. Further replication of results via additional randomized controlled trials is needed.
Li, Dan-Dan; Zheng, Hong-Hao
2018-01-01
In China’s industrialization process, the effective regulation of energy and environment can promote the positive externalities of energy consumption while reducing negative externalities, which is an important means for realizing the sustainable development of an economic society. The study puts forward an improved Technique for Order Preference by Similarity to an Ideal Solution based on entropy weights and the Mahalanobis distance (referred to as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. By separately using the traditional and improved TOPSIS methods, the study carried out empirical appraisals of the external performance of China’s energy regulation during 1999~2015. The results show that correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China’s energy regulation, and therefore E-M-TOPSIS is favorably applicable to the external performance appraisal of energy regulation. Additionally, the external economic performance and social responsibility performance (including environmental and energy-safety performance) based on E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared to the social responsibility performance, the external economic performance is more sensitive to energy regulation. PMID:29385781
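A compact sketch of an entropy-weighted, Mahalanobis-distance TOPSIS in the spirit of E-M-TOPSIS. The exact form of the weighted metric and the treatment of cost criteria in the paper may differ; here all criteria are assumed to be benefit criteria with positive entries.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: criteria whose values vary more across
    alternatives carry more information and receive larger weights."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    m = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

def em_topsis(X):
    """Relative closeness scores combining entropy weights with a
    Mahalanobis-type distance to the ideal/anti-ideal solutions
    (sketch only; assumes benefit criteria and positive entries)."""
    X = np.asarray(X, dtype=float)
    w = entropy_weights(X)
    W = np.diag(w)
    S_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # accounts for correlated indexes
    M = W @ S_inv @ W                                # weighted Mahalanobis metric
    ideal, anti = X.max(axis=0), X.min(axis=0)
    def dist(row, ref):
        v = row - ref
        return np.sqrt(max(0.0, float(v @ M @ v)))
    d_pos = np.array([dist(r, ideal) for r in X])    # distance to ideal
    d_neg = np.array([dist(r, anti) for r in X])     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Three alternatives, two perfectly correlated benefit criteria: the
# pseudo-inverse handles the singular covariance caused by the correlation.
scores = em_topsis([[1, 2], [2, 4], [3, 6]])
```

The alternative that coincides with the ideal solution scores 1, the anti-ideal scores 0, and intermediate alternatives fall in between.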
Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters
NASA Astrophysics Data System (ADS)
Masullo, Alessandro; Theunissen, Raf
2016-03-01
The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance consequently suffers in the presence of clusters of outliers. This paper proposes an advancement that renders outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector will subsequently be compared. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition that considers vector magnitude and angle, adopting a modified Gaussian-weighted, distance-based averaging median. This procedure is able to adapt the degree of acceptable background velocity fluctuation to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies of the near wake behind a porous disc and of a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
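The baseline these schemes build on, the normalized median test of Westerweel and Scarano, can be sketched for a single velocity component as follows. This shows the fixed 3×3 neighbourhood on interior points only; the adaptive neighbourhood sizing proposed in the paper is not reproduced.

```python
import numpy as np

def normalized_median_test(u, eps=0.1, threshold=2.0):
    """Universal outlier detection (normalized median test) on a 2D field of
    one velocity component. A vector is flagged when its deviation from the
    neighbourhood median, normalized by the median residual of the
    neighbours plus a noise level eps, exceeds the threshold."""
    ny, nx = u.shape
    out = np.zeros_like(u, dtype=bool)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = u[j - 1:j + 2, i - 1:i + 2].flatten()
            nb = np.delete(nb, 4)               # exclude the centre vector
            um = np.median(nb)
            rm = np.median(np.abs(nb - um))     # median residual of neighbours
            out[j, i] = abs(u[j, i] - um) / (rm + eps) > threshold
    return out

# Smooth field with a single spurious vector: only the spike is flagged.
u = np.ones((5, 5))
u[2, 2] = 10.0
outliers = normalized_median_test(u)
```

Because each vector is judged against a fixed-size neighbourhood, a cluster of bad vectors can shield its own members, which is precisely the failure mode the paper's adaptive neighbourhood addresses.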
NASA Astrophysics Data System (ADS)
Prasetyo, S. Y. J.; Hartomo, K. D.
2018-01-01
The Spatial Plan of the Province of Central Java 2009-2029 identifies most regencies and cities in Central Java Province as highly vulnerable to landslide disaster. These data are also supported by the Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia) 2013, which suggests that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and map rainfall distribution. We propose our GIS application architecture of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” (TES) and “Spatial Interpolation” as our significant scientific contribution. This research consists of two parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conducted our research in 19 subdistricts of the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed into a GIS application framework of rainfall distribution for various applied fields. The comparison between the TES and IDW methods shows that, relative to the time-series prediction, the spatial interpolation yields values closer to the actual data. The spatial interpolation is closer to the actual data because the computed values draw on the rainfall data of the nearest, neighbouring sample locations.
However, the IDW method’s main weakness is that some areas may exhibit a rainfall value of 0. Such zero values in the spatial interpolation are mainly caused by the absence of rainfall data at the nearest sample point, or by sample points so distant that they receive negligible weight.
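The IDW interpolation underlying this comparison can be sketched as follows; an inverse-square (power-2) weighting is assumed, matching the method's usual default.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each prediction is a weighted average of
    the sample values, with weight 1/d**power. A query point coinciding
    with a sample point returns that sample's value exactly."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    preds = []
    for q in np.asarray(xy_query, dtype=float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                      # query sits on a sample point
            preds.append(values[d.argmin()])
            continue
        w = 1.0 / d ** power
        preds.append(np.sum(w * values) / np.sum(w))
    return np.array(preds)

# Two stations 1 unit apart: the midpoint gets the average of their values.
preds = idw([[0.0, 0.0], [1.0, 0.0]], [0.0, 10.0],
            [[0.5, 0.0], [0.0, 0.0]])
```

Note how the weights decay with distance: a far-away station contributes almost nothing, which is the mechanism behind the zero-rainfall weakness described above.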
Dual-band beacon experiment over Southeast Asia for ionospheric irregularity analysis
NASA Astrophysics Data System (ADS)
Watthanasangmechai, K.; Yamamoto, M.; Saito, A.; Saito, S.; Maruyama, T.; Tsugawa, T.; Nishioka, M.
2013-12-01
An experiment with a dual-band beacon over Southeast Asia was started in March 2012 in order to capture and analyze ionospheric irregularities in the equatorial region. Five GNU Radio Beacon Receivers (GRBRs) were aligned along 100 degrees geographic longitude. The distances between the stations exceed 500 km. The field of view of this observational network covers +/- 20 degrees geomagnetic latitude, including the geomagnetic equator. To capture ionospheric irregularities, an absolute TEC estimation technique was developed. The two-station method (Leitinger et al., 1975) is generally accepted as a suitable method to estimate the TEC offsets of a dual-band beacon experiment. However, the distances between the stations directly affect the robustness of the technique. In Southeast Asia, the observational network is too sparse to benefit from the classic two-station method. Moreover, the least-squares approach used in the two-station method over-adjusts to small-scale features of the TEC distribution, settling into local minima. We thus propose a new technique to estimate the TEC offsets with supporting data from absolute GPS-TEC from local GPS receivers and the ionospheric height from local ionosondes. The key of the proposed technique is to use a brute-force search with a weighting function to find the set of TEC offsets that yields a global minimum of RMSE over the whole parameter space. The weight is not necessary when the TEC distribution is smooth, while it significantly improves the TEC estimation during ESF events. As a result, the latitudinal TEC shows a double-hump distribution because of the Equatorial Ionization Anomaly (EIA). In addition, 100 km-scale fluctuations from an Equatorial Spread F (ESF) are captured at night time in equinox seasons. The plausible linkage of the meridional wind with the triggering of ESF is under investigation and will be presented.
The proposed method successfully estimates the latitudinal TEC distribution from dual-band beacon data for the sparse observational network in Southeast Asia, and may be useful for other equatorial sectors, such as the African region, as well.
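The brute-force, weighted-RMSE offset search can be sketched in simplified form for a single additive offset against reference (e.g. GPS-derived) TEC. The actual parameter space and weighting function of the proposed technique are richer than this illustration.

```python
import numpy as np

def brute_force_offset(rel_tec, ref_tec, weights, candidates):
    """Grid-search ('brute-force') the additive TEC offset that minimizes
    the weighted RMSE between offset-corrected relative TEC and reference
    TEC. Returns the best offset and its weighted RMSE. Sketch only: the
    paper searches one offset per receiver over the whole parameter space."""
    rel, ref, w = map(np.asarray, (rel_tec, ref_tec, weights))

    def wrmse(b):
        r = rel + b - ref
        return np.sqrt(np.sum(w * r ** 2) / np.sum(w))

    errs = [wrmse(b) for b in candidates]
    best = int(np.argmin(errs))
    return candidates[best], errs[best]

# Relative TEC shifted by exactly -5 TECU from the reference: the search
# over a 0.1-TECU grid recovers the offset.
best_b, best_err = brute_force_offset([0.0, 1.0, 2.0], [5.0, 6.0, 7.0],
                                      [1.0, 1.0, 1.0],
                                      np.linspace(0.0, 10.0, 101))
```

Unlike a least-squares fit, the exhaustive scan cannot get trapped in a local minimum of the error surface, which is the motivation stated in the abstract.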
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound, and we provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages, and we use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean distance and the Manhattan distance. Results obtained from the different distance metrics are compared to show that Ward's algorithm's characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when the Manhattan metric is used.
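In SciPy, this generalisation amounts to feeding an l1 (Manhattan) condensed distance matrix into Ward's Lance-Williams update. A toy sketch follows; note that SciPy's documentation assumes Euclidean input for the `ward` method, which is exactly the restriction the paper argues against.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Toy "language signatures": four points forming two obvious groups.
X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [5.0, 5.0],
              [5.1, 5.0]])

# Ward's update rule applied to a Manhattan (cityblock) distance matrix.
# SciPy's Lance-Williams implementation accepts any condensed distance
# matrix, so the generalised method runs unchanged.
Z = linkage(pdist(X, metric="cityblock"), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the dendrogram at two clusters recovers the two groups, illustrating that the intra-/inter-cluster behaviour survives the change of metric on this example.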
Akkaya, Nuray; Akkaya, Semih; Gungor, Harun R; Yaşar, Gokce; Atalay, Nilgun Simsir; Sahin, Fusun
2017-01-01
Although the functional results of combined rehabilitation programs have been reported, there have been no reports studying the effects of solo pendulum exercises on ultrasonographic measurements of acromiohumeral distance (AHD). To investigate the effects of weighted and un-weighted pendulum exercises on ultrasonographic AHD and clinical symptoms, patients with subacromial impingement syndrome were randomized to performing weighted (1.5-kilogram handheld dumbbell, N = 18) or un-weighted (free of weight, N = 16) pendulum exercises for 4 weeks, 3 sessions/day. Exercises were repeated for each direction of shoulder motion in each session (ten minutes). Clinical status was evaluated by the Constant score and the Shoulder Pain and Disability Index (SPADI). Ultrasonographic measurements of AHD at 0°, 30° and 60° of shoulder abduction were performed. All clinical and ultrasonographic evaluations were performed at the beginning of the exercise program and at the end of the 4-week exercise program. Thirty-four patients (23 females, 11 males; mean age 41.7 ± 8.9 years) were evaluated. Significant clinical improvements were detected in both exercise groups between pre- and post-treatment evaluations (p < 0.05). There was no significant difference in pre- and post-treatment AHD measurements at 0°, 30°, and 60° of shoulder abduction between groups (p > 0.05). There was no significant difference in pre- and post-treatment narrowing of AHD (narrowing at 0°-30° and 0°-60°) between groups (p > 0.05). While significant clinical improvements were achieved with both weighted and un-weighted solo pendulum exercises, no significant difference was detected in ultrasonographic AHD measurements between exercise groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gros, S; Roeske, J; Surucu, M
Purpose: To develop a novel method to monitor external anatomical changes in head and neck cancer patients in order to help guide adaptive radiotherapy decisions. Methods: The method, developed in MATLAB, reveals internal anatomical changes based on variations observed in the external anatomy. Weekly kV-CBCT scans from 11 head and neck patients were retrospectively analyzed. The pre-processing step first corrects each CBCT for artifacts and removes pixels belonging to the immobilization mask to produce an accurate external contour of the patient’s skin. After registering the CBCTs to the initial planning CT, the external contour from each CBCT (CBCTn) is transferred to the first-week (reference) CBCT1. Contour radii, defined as the distances between an external contour and the central pixel of each CBCT slice, are calculated for each scan at angular increments of 1 degree. The changes in external anatomy are then quantified by the difference in radial distance between the external contours of CBCT1 and CBCTn. The radial difference is finally displayed on a 2D intensity map (angle vs radial-distance difference) to highlight regions of interest with significant changes. Results: The 2D radial-difference maps provided qualitative and quantitative information, such as the location and magnitude of external-contour divergences and the rate at which these deviations occur. With this method, anatomical changes due to tumor-volume shrinkage and patient weight loss were clearly identified and could be correlated with under-dosage of targets or over-dosage of OARs. Conclusion: This novel method provides an efficient tool to visualize 3D external anatomical modification on a single 2D map. It quickly pinpoints the location of anatomical differences during the course of radiotherapy, which can help determine whether a treatment plan needs to be adapted.
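The contour-radii computation at 1-degree increments and the resulting radial difference can be sketched on synthetic binary masks. This is an illustration, not the authors' MATLAB code; square "anatomy" masks are assumed.

```python
import numpy as np

def contour_radii(mask, n_angles=360):
    """Radius of a binary body mask from the slice centre at 1-degree
    increments: for each angle bin, the farthest in-mask pixel on that ray."""
    cy, cx = np.array(mask.shape) / 2.0
    ys, xs = np.nonzero(mask)
    ang = (np.degrees(np.arctan2(ys - cy, xs - cx)) % 360).astype(int) % n_angles
    r = np.hypot(ys - cy, xs - cx)
    radii = np.zeros(n_angles)
    for a, rr in zip(ang, r):
        radii[a] = max(radii[a], rr)
    return radii

# One row of a radial-difference map: "shrinkage" between the reference
# mask and a follow-up mask shows up as negative differences.
ref = np.zeros((64, 64), dtype=bool); ref[16:48, 16:48] = True  # reference anatomy
wk = np.zeros((64, 64), dtype=bool); wk[20:44, 20:44] = True    # shrunken anatomy
diff = contour_radii(wk) - contour_radii(ref)
```

Stacking one such `diff` row per slice produces the 2D (angle vs radial-distance difference) intensity map described in the abstract.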
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, O; Yuan, J; Law, M
Purpose: The signal-to-noise ratio (SNR) of abdominal MR imaging in diagnostic radiology is maximized by minimizing the coil-to-patient distance. However, for radiotherapy applications, a customized vacuum bag is needed for abdominal immobilization, at the cost of an increased distance to the posterior spine coil. This sub-optimal coil setting for RT applications may compromise image quality, such as SNR and homogeneity, and thus potentially affect tissue delineation. In this study, we quantitatively evaluated the effect of vertical position change on SNR and image quality using an ACR MR phantom. Methods: An ACR MR phantom was placed on the flat couch top. Images were acquired using an 18-channel body array coil and a spine coil on a dedicated 1.5T MR-simulator. The scan was repeated three times with the ACR phantom elevated by up to 7.5 cm from the couch top, in steps of 2.5 cm. All images were acquired using the standard ACR test sequence protocol of 2D spin-echo T1-weighted (TR/TE=500/200ms) and T2-weighted (TR/TE1/TE2=2000/20/80) sequences. For all scans, pre-scan normalization was turned on, and the distance between the phantom and the anterior 18-channel body array coil was kept constant. SNR was calculated using the slice with a large water-only region of the phantom. Percent intensity uniformity (PIU) and low-contrast object detectability (LCD) were assessed by following the ACR test guidelines. Results: Decreases in image SNR (from 335.8 to 169.3) and LCD (T1: from 31 to 19 spokes; T2: from 26 to 16 spokes) were observed with increasing vertical distance. After elevating the phantom by 2.5 cm (approximately the thickness of a standard vacuum bag), changes in SNR (from 335.8 to 275.5) and LCD (T1: 31 to 26 spokes; T2: 26 to 21 spokes) were noted. However, similar PIU was obtained for all vertical distances (T1: 94.5%–95.0%; T2: 94.4%–96.8%).
Conclusion: After elevating the scan object, a reduction in SNR and contrast detectability, but no change in image homogeneity, was observed.
Steinbrück, Lars; McHardy, Alice Carolyn
2012-01-01
Distinguishing mutations that determine an organism's phenotype from (near-)neutral ‘hitchhikers’ is a fundamental challenge in genome research, and is relevant for numerous medical and biotechnological applications. For human influenza viruses, recognizing changes in the antigenic phenotype and a strain's capability to evade pre-existing host immunity is important for the production of efficient vaccines. We have developed a method for inferring ‘antigenic trees’ for the major viral surface protein hemagglutinin. In the antigenic tree, antigenic weights are assigned to all tree branches, which allows us to resolve the antigenic impact of the associated amino acid changes. Our technique predicted antigenic distances with accuracy comparable to antigenic cartography. Additionally, it identified both known and novel sites, and amino acid changes with antigenic impact, in the evolution of influenza A (H3N2) viruses from 1968 to 2003. The technique can also be applied to the inference of ‘phenotype trees’ and genotype–phenotype relationships from other types of pairwise phenotype distances. PMID:22532796
Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps
NASA Astrophysics Data System (ADS)
Gundogdu, Ismail Bulent
2017-01-01
Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps which are constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Also elevation, slope, and aspect data for each station have been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
Non-iterative distance constraints enforcement for cloth drapes simulation
NASA Astrophysics Data System (ADS)
Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno
2016-03-01
Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or even garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance-constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position-correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model that starts in a horizontal position with one point fixed and is allowed to drape under its own weight. Our simulation achieves a plausible cloth drape, as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint-enforcement process by eliminating the iterative procedure.
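The non-iterative position correction for a single overstretched spring can be sketched as follows; treating one endpoint as fixed is an assumption for the illustration.

```python
import numpy as np

def enforce_distance(p_fixed, p_free, rest_length):
    """Non-iterative distance-constraint enforcement for one spring:
    project the free end onto the sphere of radius rest_length around the
    other end in a single position-correction step (no iteration)."""
    d = p_free - p_fixed
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return p_free.copy()          # degenerate: leave the point in place
    return p_fixed + d * (rest_length / dist)

# Overstretched spring: the free particle is pulled back to rest length
# along the spring direction in one step.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([0.0, -3.0, 0.0])       # stretched to 3x the rest length
p1_corrected = enforce_distance(p0, p1, rest_length=1.0)
```

Because the correction is exact in a single step, no iteration over the constraint is required, which is where the computational saving described above comes from.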
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
Microsatellites evolve more rapidly in humans than in chimpanzees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubinsztein, D.C.; Leggo, J.; Amos, W.
1995-12-10
Microsatellites are highly polymorphic markers consisting of varying numbers of tandem repeats. At different loci, these repeats can consist of one to five nucleotides. Microsatellites have been used in many fields of genetics, including genetic mapping, linkage disequilibrium analyses, forensic studies, and population genetics. It is important that we understand their mutational processes better so that they can be exploited optimally for studies of human diversity and evolutionary genetics. We have analyzed 24 microsatellite loci in chimpanzees, East Anglians, and Sub-Saharan Africans. The stepwise-weighted genetic distances between the humans and the chimpanzees, and between the two human populations, were calculated according to the method described by Deka et al. The ratio of the genetic distance between the chimpanzees and the humans to that between the Africans and the East Anglians was more than 10 times smaller than expected. This suggests that microsatellites have evolved more rapidly in humans than in chimpanzees. 12 refs., 1 tab.
Spatial interpolation of pesticide drift from hand-held knapsack sprayers used in potato production
NASA Astrophysics Data System (ADS)
Garcia-Santos, Glenda; Pleschberger, Martin; Scheiber, Michael; Pilz, Jürgen
2017-04-01
Tropical mountainous regions in developing countries are often neglected in research and policy but represent key areas to be considered if sustainable agricultural and rural development is to be promoted. One example is the lack of information of pesticide drift soil deposition, which can support pesticide risk assessment for soil, surface water, bystanders and off-target plants and fauna. This is considered a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data of drift deposition of a pesticide surrogate, Uranine tracer, were obtained within one of the highest potato producing regions in Colombia. Based on the empirical data, different spatial interpolation techniques i.e. Thiessen, inverse distance squared weighting, co-kriging, pair-copulas and drift curves depending on distance and wind speed were tested and optimized. Results of the best performing spatial interpolation methods, suitable curves to assess mean relative drift and implications on risk assessment studies will be presented.
Assessment of gene order computing methods for Alzheimer's disease
2013-01-01
Background: Computational genomics of Alzheimer's disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula for AD microarray data with either the GA or ACO method. Conclusion: Compared with the Pearson distance and the Euclidean distance, the squared Euclidean distance generated the best-quality gene orders computed by the GA and ACO methods. PMID:23369541
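The practical difference between the distance formulas compared above can be seen on two proportional expression profiles (illustrative data only, not from the AD microarray set).

```python
import numpy as np

def pearson_distance(x, y):
    """1 - Pearson correlation: small when two profiles co-vary,
    regardless of their absolute magnitudes."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

def squared_euclidean(x, y):
    """Squared Euclidean distance: sensitive to absolute magnitudes."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.dot(d, d))

# Two profiles with identical shape but different magnitude: Pearson
# distance is ~0, while squared Euclidean distance is large -- which is
# why the choice of formula changes the computed gene order.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = 10 * a
```

This magnitude sensitivity is one plausible reason the distance formulas yield gene orders of different quality on the same data.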
Afsar, Baris; Elsurer, Rengin; Soypacaci, Zeki; Kanbay, Mehmet
2016-02-01
Although anthropometric measurements are related to clinical outcomes, these relationships are not universal and differ in some disease states, such as chronic kidney disease (CKD). The current study aimed to analyze the relationships of height, weight, and BMI with hemodynamic and arterial stiffness parameters in patients with and without CKD separately. This cross-sectional study included 381 hypertensive patients with (N = 226) and without (N = 155) CKD. Routine laboratory tests and 24-h urine collection were performed. The augmentation index (Aix), the ratio of augmentation pressure to pulse pressure, was calculated from the blood pressure waveform adjusted to a heart rate of 75 [Aix@75 (%)]. Pulse wave velocity (PWV) is a simple measure based on the time taken by the pressure wave to travel over a specific distance. Both Aix@75 (%) and PWV, which are measures of arterial stiffness, were measured by validated oscillometric methods using a Mobil-O-Graph device. In patients without CKD, height was inversely correlated with Aix@75 (%); additionally, weight and BMI were positively associated with PWV in multivariate analysis. However, in patients with CKD, weight and BMI were inversely and independently related to PWV: as weight and BMI increased, stiffness parameters such as Aix@75 (%) and PWV decreased. In conclusion, while BMI and weight are positively associated with arterial stiffness in patients without CKD, this association is negative in patients with CKD; the relationships of height, weight, and BMI with hemodynamic and arterial stiffness parameters thus differ between patients with and without CKD.
Study on Strata Behavior Regularity of 1301 Face in Thick Bedrock of Weiqiang Coal Mine
NASA Astrophysics Data System (ADS)
Gu, Shuancheng; Yao, Boyu
2017-09-01
In order to ensure safe and efficient production at thick-bedrock working faces, the strata behavior of such faces is examined through strata pressure observations at the 1301 first mining face of the Weiqiang coal mine. The first weighting occurred at an average interval of 50.75 m, and the periodic weighting at an average interval of 12.1 m. During normal mining, although the upper roof did not break simultaneously across the face, the weighting step was essentially constant. The change in weighting step was most evident at the first and periodic weightings: when face pressure arrived, support loads increased significantly, and the working resistance of some supports still exceeded the rated working resistance, indicating low stability and a need for strengthened management.
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out, we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles, with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar-Based Random Walk model (Nosofsky & Palmeri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
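The differential-weighting account can be made concrete with a weighted Euclidean distance; the weights below are hypothetical, for illustration only:

```python
import math

def weighted_distance(dx, dy, w_relevant=1.0, w_irrelevant=1.0):
    # Overall distance between two circles separated by dx on the relevant
    # (horizontal) axis and dy on the irrelevant (vertical) axis.
    return math.sqrt(w_relevant * dx ** 2 + w_irrelevant * dy ** 2)

# With equal weights the irrelevant axis contributes fully (3-4-5 triangle);
# down-weighting the irrelevant axis to zero recovers pure relevant distance.
equal = weighted_distance(3.0, 4.0, 1.0, 1.0)   # 5.0
pure = weighted_distance(3.0, 4.0, 1.0, 0.0)    # 3.0
```

Intermediate weightings produce the partial extraction the data suggest: the classification is driven mostly, but not entirely, by the relevant axis.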
NASA Astrophysics Data System (ADS)
Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.
2018-04-01
A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist frequency, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast, and ground spatial distance (GSD). After analyzing the statistical distribution of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were conducted with GF2 image patches. The results showed that the calibration field image has the highest quality score; the water image is closest in quality to the calibration field, and the building image is slightly poorer than the water image but much better than the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were run on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes: when the weights of the edge features and GSD are emphasized, the image quality of GF2 is better than SPOT7; however, when MTF and PSNR are the main factors, the image quality of SPOT7 is better than GF2.
Inferring Admixture Histories of Human Populations Using Linkage Disequilibrium
Loh, Po-Ru; Lipson, Mark; Patterson, Nick; Moorjani, Priya; Pickrell, Joseph K.; Reich, David; Berger, Bonnie
2013-01-01
Long-range migrations and the resulting admixtures between populations have been important forces shaping human genetic diversity. Most existing methods for detecting and reconstructing historical admixture events are based on allele frequency divergences or patterns of ancestry segments in chromosomes of admixed individuals. An emerging new approach harnesses the exponential decay of admixture-induced linkage disequilibrium (LD) as a function of genetic distance. Here, we comprehensively develop LD-based inference into a versatile tool for investigating admixture. We present a new weighted LD statistic that can be used to infer mixture proportions as well as dates with fewer constraints on reference populations than previous methods. We define an LD-based three-population test for admixture and identify scenarios in which it can detect admixture events that previous formal tests cannot. We further show that we can uncover phylogenetic relationships among populations by comparing weighted LD curves obtained using a suite of references. Finally, we describe several improvements to the computation and fitting of weighted LD curves that greatly increase the robustness and speed of the calculations. We implement all of these advances in a software package, ALDER, which we validate in simulations and apply to test for admixture among all populations from the Human Genome Diversity Project (HGDP), highlighting insights into the admixture history of Central African Pygmies, Sardinians, and Japanese. PMID:23410830
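The LD decay that the method exploits has the form a(d) = A·exp(−n·d), with d the genetic distance in Morgans and n the number of generations since admixture. A minimal sketch (not the ALDER implementation) of recovering n from a noiseless synthetic curve by log-linear least squares:

```python
import numpy as np

def ld_curve(d, amplitude, generations):
    # Expected admixture LD at genetic distance d (in Morgans)
    return amplitude * np.exp(-generations * d)

def fit_decay(d, a):
    # log a = log A - n * d, so a straight-line fit recovers both parameters
    slope, intercept = np.polyfit(d, np.log(a), 1)
    return np.exp(intercept), -slope

d = np.linspace(0.005, 0.3, 50)    # 0.5 to 30 cM
a = ld_curve(d, amplitude=0.002, generations=12.0)
amp_hat, gen_hat = fit_decay(d, a)
```

On real weighted LD curves the fit must also handle noise and an affine offset, which is part of what the paper's improvements to curve fitting address.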
Buzzega, Dania; Maccari, Francesca; Volpi, Nicola
2008-11-01
We report the use of fluorophore-assisted carbohydrate electrophoresis (FACE) to determine the molecular mass (M) values of heparins (Heps) and low-molecular-weight (LMW)-Hep derivatives. Heps are labeled with 8-aminonaphthalene-1,3,6-trisulfonic acid, and FACE is able to resolve each fraction as a discrete band depending on its M. After densitometric acquisition, the migration distance of each Hep standard is measured, and a third-degree polynomial calibration curve is determined by plotting the logarithms of the M values as a function of migration ratio. Purified Hep samples having different properties, pharmaceutical Heps, and various LMW-Heps were analyzed by both FACE and conventional high-performance size-exclusion liquid chromatography (HPSEC). The molecular weight at the top of the chromatographic peak (Mp), the number-average Mn, the weight-average Mw, and the polydispersity (Mw/Mn) were examined by both techniques and found to be similar. This approach offers certain advantages over the HPSEC method. The derivatization with 8-aminonaphthalene-1,3,6-trisulfonic acid is complete after 4 h, so many samples may be analyzed in a day, particularly since multiple samples can be run simultaneously and in parallel and a single FACE analysis requires approx. 15 min. Furthermore, FACE is a very sensitive method, requiring approx. 5-10 microg of Heps, about 10-100-fold less than the samples and standards used in HPSEC evaluation. Finally, the use of mini-gels requires very low amounts of reagents, with neither expensive equipment nor complicated procedures needed.
This study demonstrates that FACE analysis is a sensitive method for the determination of the M values of Heps and LMW-Heps with possible utilization in virtually any kind of research and development such as quality control laboratories due to its rapid, parallel analysis of multiple samples by means of common and simple largely used analytical laboratory equipment.
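The calibration step can be sketched as follows; the migration ratios and masses below are invented placeholders, not the paper's standards:

```python
import numpy as np

# Hypothetical Hep standards: migration ratio Rf vs. log10 of molecular mass
rf_std = np.array([0.15, 0.30, 0.45, 0.60, 0.75, 0.90])
log_m_std = np.array([4.40, 4.15, 3.95, 3.70, 3.45, 3.10])

# Third-degree polynomial calibration curve: log10(M) as a function of Rf
coeffs = np.polyfit(rf_std, log_m_std, 3)

def molecular_mass(rf_sample):
    # Evaluate the fitted cubic, then undo the logarithm
    return 10.0 ** np.polyval(coeffs, rf_sample)
```

A sample band migrating with an Rf between two standards is then assigned a mass by evaluating the fitted cubic at that Rf.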
Effect of olympic weight category on performance in the roundhouse kick to the head in taekwondo.
Estevan, Isaac; Falco, Coral; Alvarez, Octavio; Molina-García, Javier
2012-03-01
In taekwondo, kick performance is generally measured using impact force and time. This study aimed to analyse performance in the roundhouse kick to the head according to execution distance between and within Olympic weight categories. The participants were 36 male athletes divided into three categories: featherweight (n = 10), welterweight (n = 15) and heavyweight (n = 11). Our results show that taekwondo athletes in all weight categories generate a similar relative impact force. However, the results indicate that weight has a large impact on kick performance, particularly in relation to total response time.
Effect of sterilization irradiation on friction and wear of ultrahigh-molecular-weight polyethylene
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.; Hady, W. F.; Crugnola, A.
1979-01-01
The effect of sterilization gamma irradiation on the friction and wear properties of ultrahigh molecular weight polyethylene (UHMWPE) sliding against 316L stainless steel in dry air at 23 C was determined. A pin-on-disk apparatus was used. Experimental conditions included a 1-kilogram load, a 0.061- to 0.27-meter-per-second sliding velocity, and a 32000- to 578000-meter sliding distance. Although sterilization doses of 2.5 and 5.0 megarads greatly altered the average molecular weight and the molecular weight distribution, the friction and wear properties of the polymer were not significantly changed.
Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald
2011-01-01
Background The purpose of this study was to define predictor variables for recreational male Ironman triathletes, using age and basic measurements of anthropometry, training, and previous performance to establish an equation for the prediction of an Ironman race time for future recreational male Ironman triathletes. Methods Age and anthropometry, training, and previous experience variables were related to Ironman race time using bivariate and multivariate analysis. Results A total of 184 recreational male triathletes, of mean age 40.9 ± 8.4 years, height 1.80 ± 0.06 m, and weight 76.3 ± 8.4 kg completed the Ironman within 691 ± 83 minutes. They spent 13.9 ± 5.0 hours per week in training, covering 6.3 ± 3.1 km of swimming, 194.4 ± 76.6 km of cycling, and 45.0 ± 15.9 km of running. In total, 149 triathletes had completed at least one marathon, and 150 athletes had finished at least one Olympic distance triathlon. They had a personal best time of 130.4 ± 44.2 minutes in an Olympic distance triathlon and of 193.9 ± 31.9 minutes in marathon running. In total, 126 finishers had completed both an Olympic distance triathlon and a marathon. After multivariate analysis, both a personal best time in a marathon (P < 0.0001) and in an Olympic distance triathlon (P < 0.0001) were the best variables related to Ironman race time. Ironman race time (minutes) might be partially predicted (r2 = 0.65, standard error of estimate = 56.8) by the following equation: Ironman race time (minutes) = 152.1 + 1.332 × (personal best time in a marathon, minutes) + 1.964 × (personal best time in an Olympic distance triathlon, minutes). Conclusion These results suggest that, in contrast with anthropometric and training characteristics, both the personal best time in an Olympic distance triathlon and in a marathon predict Ironman race time in recreational male Ironman triathletes. PMID:24198578
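The reported regression equation can be applied directly; a sketch using the coefficients from the abstract, evaluated at the cohort-mean personal best times quoted above:

```python
def predict_ironman_minutes(marathon_pb_min, olympic_tri_pb_min):
    # Regression from the study: r^2 = 0.65, standard error of estimate = 56.8
    return 152.1 + 1.332 * marathon_pb_min + 1.964 * olympic_tri_pb_min

# Cohort mean personal bests: marathon 193.9 min, Olympic triathlon 130.4 min
estimate = predict_ironman_minutes(193.9, 130.4)  # about 666 minutes
```

The estimate for the mean athlete (about 666 min) sits within one standard deviation of the observed mean finish time of 691 ± 83 min.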
Michimi, Akihiko; Wimberly, Michael C
2010-10-08
Limited access to supermarkets may reduce consumption of healthy foods, resulting in poor nutrition and increased prevalence of obesity. Most studies have focused on accessibility of supermarkets in specific urban settings or localized rural communities. Less is known, however, about how supermarket accessibility is associated with obesity and healthy diet at the national level and how these associations differ in urban versus rural settings. We analyzed data on obesity and fruit and vegetable (F/V) consumption from the Behavioral Risk Factor Surveillance System for 2000-2006 at the county level. We used 2006 Census Zip Code Business Patterns data to compute population-weighted mean distance to supermarket at the county level for different sizes of supermarket. Multilevel logistic regression models were developed to test whether population-weighted mean distance to supermarket was associated with both obesity and F/V consumption and to determine whether these relationships varied for urban (metropolitan) versus rural (nonmetropolitan) areas. Distance to supermarket was greater in nonmetropolitan than in metropolitan areas. The odds of obesity increased and odds of consuming F/V five times or more per day decreased as distance to supermarket increased in metropolitan areas for most store size categories. In nonmetropolitan areas, however, distance to supermarket had no associations with obesity or F/V consumption for all supermarket size categories. Obesity prevalence increased and F/V consumption decreased with increasing distance to supermarket in metropolitan areas, but not in nonmetropolitan areas. These results suggest that there may be a threshold distance in nonmetropolitan areas beyond which distance to supermarket no longer impacts obesity and F/V consumption. 
In addition, obesity and food environments in nonmetropolitan areas are likely driven by a more complex set of social, cultural, and physical factors than a single measure of supermarket accessibility. Future research should attempt to more precisely quantify the availability and affordability of foods in nonmetropolitan areas and consider alternative sources of healthy foods besides supermarkets.
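The exposure measure above is a population-weighted mean distance; a minimal sketch with hypothetical tract data (not the study's Census inputs):

```python
def population_weighted_mean_distance(populations, distances_km):
    # County-level mean distance to the nearest supermarket, with each
    # sub-area (e.g. ZIP code) weighted by its population
    total_pop = sum(populations)
    return sum(p * d for p, d in zip(populations, distances_km)) / total_pop

# Hypothetical county with three ZIP areas
mean_dist = population_weighted_mean_distance(
    populations=[1000, 5000, 4000],
    distances_km=[10.0, 2.0, 4.0],
)  # 3.6 km
```

Weighting by population keeps a sparsely populated remote area from dominating the county average, which matters for the metropolitan versus nonmetropolitan comparison.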
2010-01-01
Background Limited access to supermarkets may reduce consumption of healthy foods, resulting in poor nutrition and increased prevalence of obesity. Most studies have focused on accessibility of supermarkets in specific urban settings or localized rural communities. Less is known, however, about how supermarket accessibility is associated with obesity and healthy diet at the national level and how these associations differ in urban versus rural settings. We analyzed data on obesity and fruit and vegetable (F/V) consumption from the Behavioral Risk Factor Surveillance System for 2000-2006 at the county level. We used 2006 Census Zip Code Business Patterns data to compute population-weighted mean distance to supermarket at the county level for different sizes of supermarket. Multilevel logistic regression models were developed to test whether population-weighted mean distance to supermarket was associated with both obesity and F/V consumption and to determine whether these relationships varied for urban (metropolitan) versus rural (nonmetropolitan) areas. Results Distance to supermarket was greater in nonmetropolitan than in metropolitan areas. The odds of obesity increased and odds of consuming F/V five times or more per day decreased as distance to supermarket increased in metropolitan areas for most store size categories. In nonmetropolitan areas, however, distance to supermarket had no associations with obesity or F/V consumption for all supermarket size categories. Conclusions Obesity prevalence increased and F/V consumption decreased with increasing distance to supermarket in metropolitan areas, but not in nonmetropolitan areas. These results suggest that there may be a threshold distance in nonmetropolitan areas beyond which distance to supermarket no longer impacts obesity and F/V consumption. 
In addition, obesity and food environments in nonmetropolitan areas are likely driven by a more complex set of social, cultural, and physical factors than a single measure of supermarket accessibility. Future research should attempt to more precisely quantify the availability and affordability of foods in nonmetropolitan areas and consider alternative sources of healthy foods besides supermarkets. PMID:20932312
A Weighted Least Squares Approach To Robustify Least Squares Estimates.
ERIC Educational Resources Information Center
Lin, Chowhong; Davenport, Ernest C., Jr.
This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…
Is Hefting to Perceive the Affordance for Throwing a Smart Perceptual Mechanism?
ERIC Educational Resources Information Center
Zhu, Qin; Bingham, Geoffrey P.
2008-01-01
G. P. Bingham, R. C. Schmidt, and L. D. Rosenblum (1989) found that, by hefting objects of different sizes and weights, people could choose the optimal weight in each size for throwing to a maximum distance. In Experiment 1, the authors replicated this result. G. P. Bingham et al. hypothesized that hefting is a smart mechanism that allows objects…
Effect of Long-Term Physical Activity Practice after Cardiac Rehabilitation on Some Risk Factors
ERIC Educational Resources Information Center
Freyssin, Celine, Jr.; Blanc, Philippe; Verkindt, Chantal; Maunier, Sebastien; Prieur, Fabrice
2011-01-01
The objective of this study was to evaluate the effects of long-term physical activity practice after a cardiac rehabilitation program on weight, physical capacity and arterial compliance. The Dijon Physical Activity Score was used to identify two groups: sedentary and active. Weight, distance at the 6-min walk test and the small artery elasticity…
Is hefting to perceive the affordance for throwing a smart perceptual mechanism?
Zhu, Qin; Bingham, Geoffrey P
2008-08-01
G. P. Bingham, R. C. Schmidt, and L. D. Rosenblum (1989) found that, by hefting objects of different sizes and weights, people could choose the optimal weight in each size for throwing to a maximum distance. In Experiment 1, the authors replicated this result. G. P. Bingham et al. hypothesized that hefting is a smart mechanism that allows objects to be perceived in the context of throwing dynamics. This hypothesis entails 2 assumptions. First, hefting by hand is required for information about throwing by hand. The authors tested and confirmed this in Experiments 2 and 3. Second, optimal objects are determined by the dynamics of throwing. In Experiment 4, the authors tested this by measuring throwing release angles and using them with mean thrown distances from Experiment 1 and object sizes and weights to simulate projectile motion and recover release velocities. The results showed that only weight, not size, affects throwing. This failed to provide evidence supporting the particular smart mechanism hypothesis of G. P. Bingham et al. Because the affordance relation is determined in part by the dynamics of projectile motion, the results imply that the affordance is learned from knowledge of results of throwing.
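The projectile-motion step in Experiment 4 inverts the relation between release velocity, release angle, and thrown distance. A simplified drag-free sketch, assuming launch and landing at the same height (the authors' simulation may model more detail):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def release_velocity(distance_m, release_angle_deg):
    # Invert the level-ground range equation R = v^2 * sin(2*theta) / g
    theta = math.radians(release_angle_deg)
    return math.sqrt(distance_m * G / math.sin(2.0 * theta))

def thrown_distance(velocity_ms, release_angle_deg):
    theta = math.radians(release_angle_deg)
    return velocity_ms ** 2 * math.sin(2.0 * theta) / G

v = release_velocity(30.0, 40.0)   # velocity needed to throw 30 m at 40 degrees
```

Measured release angles and thrown distances thus yield release velocities, which can then be compared across object weights and sizes.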
Jensen, Tina Kold; Frederiksen, Hanne; Kyhl, Henriette Boye; Lassen, Tina Harmer; Swan, Shanna H.; Bornehag, Carl-Gustaf; Skakkebaek, Niels E.; Main, Katharina M.; Lind, Dorte Vesterholm; Husby, Steffen; Andersson, Anna-Maria
2015-01-01
Background: Phthalates comprise a large class of chemicals used in a variety of consumer products. Several have anti-androgenic properties, and in rodents prenatal exposure has been associated with reduced anogenital distance (AGD)—the distance from the anus to the genitals in male offspring. Few human studies have been conducted, but associations between the anti-androgenic phthalates and male AGD have been reported. Objective: We aimed to study the association between phthalate exposure in late pregnancy in Danish women pregnant in 2010–2012 and AGD in their male infants at 3 months of age (n = 273). Methods: In the Odense child cohort study, urinary concentrations of 12 phthalate metabolites of diethyl, di-n-butyl, diisobutyl, di(2-ethylhexyl), butylbenzyl, and diisononyl phthalate (DEP, DnBP, DiBP, DEHP, BBzP, and DiNP, respectively) were measured among 245 mothers of boys at approximately gestational week 28 (range, 20.4–30.4) and adjusted for osmolality. AGD, penile width, and weight were measured 3 months after the expected date of birth. Associations between prenatal phthalate and AGD and penile width were estimated using multivariable linear regression adjusting for age and weight-for-age standard deviation score. Results: Phthalate levels were lower in this population than in a recent Swedish study in which phthalates were measured in the first trimester. No consistent associations were seen between any prenatal phthalate and AGD or penile width. Most associations were negative for exposures above the first quartile, and for ln-transformed exposures modeled as continuous variables, but there were no consistent dose–response patterns, and associations were not statistically significant (p > 0.05). Conclusion: We found no significant trends towards shorter AGD in boys with higher phthalates exposures in this low exposed Danish population. 
Citation: Jensen TK, Frederiksen H, Kyhl HB, Lassen TH, Swan SH, Bornehag CG, Skakkebaek NE, Main KM, Lind DV, Husby S, Andersson AM. 2016. Prenatal exposure to phthalates and anogenital distance in male infants from a low-exposed Danish cohort (2010–2012). Environ Health Perspect 124:1107–1113; http://dx.doi.org/10.1289/ehp.1509870 PMID:26672060
Parks, Sean A; McKelvey, Kevin S; Schwartz, Michael K
2013-02-01
The importance of movement corridors for maintaining connectivity within metapopulations of wild animals is a cornerstone of conservation. One common approach for determining corridor locations is least-cost corridor (LCC) modeling, which uses algorithms within a geographic information system to search for routes with the lowest cumulative resistance between target locations on a landscape. However, the presentation of multiple LCCs that connect multiple locations generally assumes all corridors contribute equally to connectivity, regardless of the likelihood that animals will use them. Thus, LCCs may overemphasize seldom-used longer routes and underemphasize more frequently used shorter routes. We hypothesize that, depending on conservation objectives and available biological information, weighting individual corridors on the basis of species-specific movement, dispersal, or gene flow data may better identify effective corridors. We tested whether locations of key connectivity areas, defined as the highest 75th and 90th percentile cumulative weighted value of approximately 155,000 corridors, shift under different weighting scenarios. In addition, we quantified the amount and location of private land that intersect key connectivity areas under each weighting scheme. Some areas that appeared well connected when analyzed with unweighted corridors exhibited much less connectivity compared with weighting schemes that discount corridors with large effective distances. Furthermore, the amount and location of key connectivity areas that intersected private land varied among weighting schemes. We believe biological assumptions and conservation objectives should be explicitly incorporated to weight corridors when assessing landscape connectivity. 
These results are highly relevant to conservation planning because on the basis of recent interest by government agencies and nongovernmental organizations in maintaining and enhancing wildlife corridors, connectivity will likely be an important criterion for prioritization of land purchases and swaps. ©2012 Society for Conservation Biology.
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles change constantly over time. We approach the novel problem of estimating the approximate age of historical handwritten documents from handwriting styles. Such a system would have many applications in handwritten document processing engines, where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier over the learned weights in order to estimate the approximate age of a document. We present a comparison of different distance metrics, such as the Euclidean distance and the Hellinger distance, within this application.
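The two metrics compared above treat topic-weight vectors differently; the Hellinger distance regards them as probability distributions. A minimal sketch with hypothetical topic weights:

```python
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(p - q))

def hellinger(p, q):
    # For discrete probability distributions (non-negative, summing to 1);
    # bounded in [0, 1], unlike the Euclidean distance
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0))

# Hypothetical topic-weight vectors for two documents
p = np.array([0.6, 0.3, 0.1])
q = np.array([0.1, 0.3, 0.6])
```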
Threshold selection for classification of MR brain images by clustering method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moldovanu, Simona; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi; Obreja, Cristian
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (the distance between classes) was established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (the area of white objects in the binary image) was determined; these pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
1981-02-01
the runway with 1800 ft. radius and no specified runout distance, was developed circa 1958 and standardized in the 1960s. A considerable number of...cornering). Even with nose wheel steering, the small fraction of total weight on the nose wheel prevents tricycle airplanes from being very...would provide more runout but would require greater clearance travel distances at both ends. The results of reference (a) indicated that
Spatio-Temporal Patterns of the International Merger and Acquisition Network.
Dueñas, Marco; Mastrandrea, Rossana; Barigozzi, Matteo; Fagiolo, Giorgio
2017-09-07
This paper analyses the world web of mergers and acquisitions (M&As) using a complex network approach. We use data of M&As to build a temporal sequence of binary and weighted-directed networks for the period 1995-2010 and 224 countries (nodes) connected according to their M&As flows (links). We study different geographical and temporal aspects of the international M&A network (IMAN), building sequences of filtered sub-networks whose links belong to specific intervals of distance or time. Given that M&As and trade are complementary ways of reaching foreign markets, we perform our analysis using statistics employed for the study of the international trade network (ITN), highlighting the similarities and differences between the ITN and the IMAN. In contrast to the ITN, the IMAN is a low density network characterized by a persistent giant component with many external nodes and low reciprocity. Clustering patterns are very heterogeneous and dynamic. High-income economies are the main acquirers and are characterized by high connectivity, implying that most countries are targets of a few acquirers. Like in the ITN, geographical distance strongly impacts the structure of the IMAN: link-weights and node degrees have a non-linear relation with distance, and an assortative pattern is present at short distances.
Euclidean chemical spaces from molecular fingerprints: Hamming distance and Hempel's ravens.
Martin, Eric; Cao, Eddie
2015-05-01
Molecules are often characterized by sparse binary fingerprints, where 1s represent the presence of substructures and 0s represent their absence. Fingerprints are especially useful for similarity calculations, such as database searching or clustering, generally measuring similarity as the Tanimoto coefficient. In other cases, such as visualization, design of experiments, or latent variable regression, a low-dimensional Euclidean "chemical space" is more useful, where proximity between points reflects chemical similarity. A temptation is to apply principal components analysis (PCA) directly to these fingerprints to obtain a low-dimensional continuous chemical space. However, Gower has shown that distances from PCA on bit vectors are proportional to the square root of Hamming distance. Unlike Tanimoto similarity, Hamming similarity (HS) gives equal weight to shared 0s as to shared 1s; that is, HS gives as much weight to substructures that neither molecule contains as to substructures that both molecules contain. Illustrative examples show that proximity in the corresponding chemical space mainly reflects similar size and complexity rather than shared chemical substructures. These spaces are ill-suited for visualizing and optimizing coverage of chemical space, or as latent variables for regression. A more suitable alternative is shown to be multidimensional scaling on the Tanimoto distance matrix, which produces a space where proximity does reflect structural similarity.
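The distinction between Hamming and Tanimoto similarity is easy to demonstrate on a toy pair of sparse fingerprints that share no substructures:

```python
import numpy as np

def tanimoto_similarity(a, b):
    # Shared 1s over the union of 1s; shared 0s are ignored
    return np.sum(a & b) / np.sum(a | b)

def hamming_similarity(a, b):
    # Fraction of positions that agree; shared 0s count as much as shared 1s
    return np.mean(a == b)

# Two sparse "molecules" with no substructure in common
a = np.array([1, 1, 0, 0, 0, 0, 0, 0])
b = np.array([0, 0, 1, 1, 0, 0, 0, 0])
```

Here Tanimoto similarity is 0 (nothing shared), yet Hamming similarity is 0.5 purely because of the shared 0s; with longer, sparser fingerprints the Hamming value drifts toward 1, which is why PCA on raw bits mainly captures size and complexity.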
ERIC Educational Resources Information Center
Syed, Mahbubur Rahman, Ed.
2009-01-01
The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…
Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia
2015-04-01
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process; so, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat the existence of anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
Ground Magnetic Data for West-Central Colorado
Richard Zehner
2012-03-08
Modeled ground magnetic data was extracted from the Pan American Center for Earth and Environmental Studies database at http://irpsrvgis08.utep.edu/viewers/Flex/GravityMagnetic/GravityMagnetic_CyberShare/ on 2/29/2012. The downloaded text file was then imported into an Excel spreadsheet. This spreadsheet data was converted into an ESRI point shapefile in UTM Zone 13 NAD27 projection, showing location and magnetic field strength in nano-Teslas. This point shapefile was then interpolated to an ESRI grid using an inverse-distance weighting method, using ESRI Spatial Analyst. The grid was used to create a contour map of magnetic field strength.
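Inverse-distance weighting itself is simple enough to sketch. The generic implementation below (not ESRI's, whose Spatial Analyst tool adds options such as search radii and point limits) estimates values at query points as distance-weighted averages of the known points:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation: each query point gets a
    weighted average of known values, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power     # clamp avoids division by zero
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

# hypothetical magnetic readings (nT) at four survey points
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([100.0, 200.0, 200.0, 100.0])
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(pts, vals, grid)
```

A query at the centre averages all four stations equally, while a query coinciding with a station essentially returns that station's value.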
Importance of curvature evaluation scale for predictive simulations of dynamic gas-liquid interfaces
NASA Astrophysics Data System (ADS)
Owkes, Mark; Cauble, Eric; Senecal, Jacob; Currie, Robert A.
2018-07-01
The effect of the scale used to compute the interfacial curvature on the prediction of dynamic gas-liquid interfaces is investigated. A new interface curvature calculation methodology, referred to herein as the Adjustable Curvature Evaluation Scale (ACES), is proposed. ACES leverages a weighted least squares regression to fit a polynomial through points computed on the volume-of-fluid representation of the gas-liquid interface. The interface curvature is evaluated from this polynomial. Varying the least squares weight with distance from the location where the curvature is being computed adjusts the scale on which the curvature is evaluated. ACES is verified using canonical static test cases and compared against second- and fourth-order height function methods. Simulations of dynamic interfaces, including a standing wave and an oscillating droplet, are performed to assess the impact of the curvature evaluation scale on predicting interface motions. ACES and the height function methods are combined with two different unsplit geometric volume-of-fluid (VoF) schemes that define the interface on meshes with different levels of refinement. We find that the results depend significantly on the curvature evaluation scale. In particular, the ACES scheme with a properly chosen weight function is accurate, but fails when the scale is too small or too large. Surprisingly, the second-order height function method is more accurate than the fourth-order variant for the dynamic tests even though the fourth-order method performs better for static interfaces. Comparing the curvature evaluation scales of the second- and fourth-order height function methods, we find the second-order method is closer to the optimum scale identified with ACES. This result suggests that the curvature scale drives the accuracy of the dynamics.
This work highlights the importance of studying numerical methods with realistic (dynamic) test cases, and shows that the interactions of the various discretizations are as important as the accuracy of any one part of the discretization.
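The core idea, a distance-weighted polynomial fit whose weight width sets the evaluation scale, can be sketched in 2-D (the paper works with 3-D interface points from the VoF representation; the Gaussian weight below is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def weighted_fit_curvature(x, y, x0, sigma):
    """Curvature at x0 from a distance-weighted quadratic fit through
    interface points; sigma sets the evaluation scale."""
    w = np.exp(-0.5 * ((x - x0) / sigma) ** 2)   # Gaussian weight sets the scale
    c2, c1, c0 = np.polyfit(x, y, 2, w=w)        # y ~ c2 x^2 + c1 x + c0
    slope = 2.0 * c2 * x0 + c1
    return 2.0 * c2 / (1.0 + slope ** 2) ** 1.5  # kappa = y'' / (1 + y'^2)^(3/2)

# interface points sampled from a circle of radius 2 (curvature magnitude 1/2)
theta = np.linspace(-0.5, 0.5, 21)
x = 2.0 * np.sin(theta)
y = 2.0 * np.cos(theta) - 2.0                    # y(0) = 0 at the fit point
kappa = weighted_fit_curvature(x, y, 0.0, sigma=0.5)
```

Widening sigma averages curvature over a larger patch; shrinking it toward the point spacing makes the fit noise-sensitive, mirroring the too-small/too-large failure modes noted above.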
Peng, Song; Zhang, Lian; Hu, Liang; Chen, Jinyun; Ju, Jin; Wang, Xi; Zhang, Rong; Wang, Zhibiao; Chen, Wenzhi
2015-04-01
The aim of this article is to analyze factors affecting sonication dose and to build a dosimetry model of high-intensity focused ultrasound (HIFU) ablation for uterine fibroids. Four hundred and three patients with symptomatic uterine fibroids who underwent HIFU were retrospectively analyzed. The energy efficiency factor (EEF) was set as the dependent variable, and factors possibly affecting sonication dose (age, body mass index, size of uterine fibroid, abdominal wall thickness, distance from the fibroid dorsal side to the sacrum, distance from the fibroid ventral side to the skin, location of uterus, location of uterine fibroids, type of uterine fibroids, abdominal wall scar, signal intensity on T2-weighted imaging (T2WI), and enhancement type on T1-weighted imaging (T1WI)) were set as predictors to build a multiple regression model. The size of uterine fibroid, distance from the fibroid ventral side to the skin, location of uterus, location of uterine fibroids, type of uterine fibroids, signal intensity on T2WI, and enhancement type on T1WI had a linear correlation with EEF. The distance from the fibroid ventral side to the skin, enhancement type on T1WI, size of uterine fibroid, and signal intensity on T2WI were eventually incorporated into the dosimetry model. These four factors can be used as dosimetric predictors for HIFU ablation of uterine fibroids.
Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio
2018-04-01
A comparative study was made of three methods of interpolation - inverse distance weighting (IDW), spline, and ordinary kriging - after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at the control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results in terms of the regression coefficient between each model's predictions and the actual control point field measurements were for the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
Becher, Christoph; Fleischer, Benjamin; Rase, Marten; Schumacher, Thees; Ettinger, Max; Ostermeier, Sven; Smith, Tomas
2017-08-01
This study analysed the effects of upright weight bearing and the knee flexion angle on patellofemoral indices, determined using magnetic resonance imaging (MRI), in patients with patellofemoral instability (PI). Healthy volunteers (control group, n = 9) and PI patients (PI group, n = 16) were scanned in an open-configuration MRI scanner during upright weight bearing and supine non-weight bearing positions at full extension (0° flexion) and at 15°, 30°, and 45° flexion. Patellofemoral indices included the Insall-Salvati Index, Caton-Deschamp Index, and Patellotrochlear Index (PTI) to determine patellar height and the patellar tilt angle (PTA), bisect offset (BO), and the tibial tubercle-trochlear groove (TT-TG) distance to assess patellar rotation and translation with respect to the femur and alignment of the extensor mechanism. A significant interaction effect of weight bearing by flexion angle was observed for the PTI, PTA, and BO for subjects with PI. At full extension, post hoc pairwise comparisons revealed a significant effect of weight bearing on the indices, with increased patellar height and increased PTA and BO in the PI group. Except for the BO, no such changes were seen in the control group. Independent of weight bearing, flexing the knee caused the PTA, BO, and TT-TG distance to be significantly reduced. Upright weight bearing and the knee flexion angle affected patellofemoral MRI indices in PI patients, with significantly increased values at full extension. The observations of this study provide a caution to be considered by professionals when treating PI patients. These patients should be evaluated clinically and radiographically at full extension and various flexion angles in context with quadriceps engagement. Explorative case-control study, Level III.
Wang, W; Ma, C Y; Chen, W; Ma, H Y; Zhang, H; Meng, Y Y; Ni, Y; Ma, L B
2016-08-19
Determining correlations between certain traits of economic importance constitutes an essential component of selective breeding activities. In this study, our aim was to provide effective indicators for breeding programs of Lateolabrax maculatus, an important aquaculture species in China. We analyzed correlations between 20 morphometric traits and body weight, using correlation and path analyses. The results indicated that the correlations among all 21 traits were highly significant, with the highest correlation coefficient identified between total length and body weight. The path analysis indicated that total length (X1), body width (X5), distance from first dorsal fin origin to anal fin origin (X10), snout length (X16), eye diameter (X17), eye cross (X18), and slanting distance from snout tip to first dorsal fin origin (X19) significantly affected body weight (Y) directly. The following multiple-regression equation was obtained using stepwise multiple-regression analysis: Y = -472.108 + 1.065X1 + 7.728X5 + 1.973X10 - 7.024X16 - 4.400X17 - 3.338X18 + 2.138X19, with an adjusted multiple-correlation coefficient of 0.947. Body width had the largest determinant coefficient, as well as the highest positive direct correlation with body weight. At the same time, high indirect effects of six other morphometric traits on L. maculatus body weight, through body width, were identified. Hence, body width could be a key factor that efficiently indicates significant effects on body weight in L. maculatus.
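The reported stepwise equation can be applied directly. The coefficients below are taken from the abstract; the example measurement values are hypothetical, invented only to show the arithmetic:

```python
def predicted_body_weight(x1, x5, x10, x16, x17, x18, x19):
    """Body weight predicted by the stepwise multiple-regression equation
    reported in the abstract (coefficients from the source)."""
    return (-472.108 + 1.065 * x1 + 7.728 * x5 + 1.973 * x10
            - 7.024 * x16 - 4.400 * x17 - 3.338 * x18 + 2.138 * x19)

# hypothetical morphometric measurements (mm)
w = predicted_body_weight(x1=400.0, x5=60.0, x10=150.0,
                          x16=25.0, x17=15.0, x18=30.0, x19=140.0)
```

Note the negative coefficients on snout length, eye diameter, and eye cross: for fixed values of the other traits, larger values of those predictors lower the predicted weight.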
Partial differential equation-based localization of a monopole source from a circular array.
Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa
2013-10-01
Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit, time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Spatial analysis of lettuce downy mildew using geostatistics and geographic information systems.
Wu, B M; van Bruggen, A H; Subbarao, K V; Pennings, G G
2001-02-01
ABSTRACT The epidemiology of lettuce downy mildew has been investigated extensively in coastal California. However, the spatial patterns of the disease and the distance that Bremia lactucae spores can be transported have not been determined. During 1995 to 1998, we conducted several field- and valley-scale surveys to determine spatial patterns of this disease in the Salinas valley. Geostatistical analyses of the survey data at both scales showed that the influence range of downy mildew incidence at one location on incidence at other locations was between 80 and 3,000 m. A linear relationship was detected between semivariance and lag distance at the field scale, although no single statistical model could fit the semi-variograms at the valley scale. Spatial interpolation by the inverse distance weighting method with a power of 2 resulted in plausible estimates of incidence throughout the valley. Cluster analysis in geographic information systems on the interpolated disease incidence from different dates demonstrated that the Salinas valley could be divided into two areas, north and south of Salinas City, with high and low disease pressure, respectively. Seasonal and spatial trends along the valley suggested that the distinction between the downy mildew conducive and nonconducive areas might be determined by environmental factors.
Skeletonization with hollow detection on gray image by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Bhattacharya, Prabir; Qian, Kai; Cao, Siqi; Qian, Yi
1998-10-01
A skeletonization algorithm that can process non-uniformly distributed gray-scale images with hollows is presented. This algorithm is based on the Gray Weighted Distance Transformation. The process includes a preliminary phase that investigates the hollows in the gray-scale image; whether these hollows are considered topological constraints for the skeleton structure depends on whether their depth is statistically significant. We then extract the resulting skeleton, which carries meaningful information for understanding the object in the image. This improved algorithm can overcome the possible misinterpretation of some complicated images in the extracted skeleton, especially in images with asymmetric hollows and asymmetric features. The algorithm can be executed on a parallel machine since all the operations are local. Some examples are discussed to illustrate the algorithm.
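A gray-weighted distance transform assigns each pixel the minimum accumulated gray value along any path from a seed set, which is a shortest-path problem. Below is a minimal sequential sketch (4-connected grid, Dijkstra's algorithm); the paper's parallel, local formulation differs:

```python
import heapq
import numpy as np

def gray_weighted_distance_transform(img, seeds):
    """Gray-weighted distance transform: the cost of a path is the sum of
    gray values along it, minimized here with Dijkstra's algorithm."""
    dist = np.full(img.shape, np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = float(img[r, c])
        heapq.heappush(heap, (dist[r, c], r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:          # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]:
                nd = d + float(img[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

img = np.array([[1, 1, 1],
                [1, 9, 1],
                [1, 1, 1]])
gwdt = gray_weighted_distance_transform(img, seeds=[(0, 0)])
```

The bright centre pixel acts like the hollows discussed above: cheap paths route around it, so its transform value is much larger than its geometric distance from the seed would suggest.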
Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.
Yuan, Yading; Chao, Ming; Lo, Yeh-Chi
2017-09-01
Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and varying imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation that leverages a 19-layer deep convolutional neural network trained end-to-end, without relying on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on the Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the numbers of foreground and background pixels. We evaluated the effectiveness, efficiency, and generalization capability of the proposed framework on two publicly available databases: one from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general and needs only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
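Why a Jaccard-based loss sidesteps class imbalance can be seen from a minimal sketch: both terms of the ratio involve only foreground predictions and labels, so the typically huge count of true-negative background pixels never enters. The smoothing constant here is an illustrative choice, not necessarily the paper's exact formulation:

```python
import numpy as np

def jaccard_distance_loss(y_true, y_pred, smooth=1e-7):
    """Soft Jaccard distance between a binary mask and predicted
    probabilities: 1 - |intersection| / |union|, with both terms relaxed
    to sums of products so the loss is differentiable."""
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = jaccard_distance_loss(y_true, y_true)        # near 0: full overlap
poor = jaccard_distance_loss(y_true, 1.0 - y_true)     # near 1: no overlap
```

Appending any number of correctly predicted background zeros to both arrays leaves the loss unchanged, which is exactly the imbalance-robustness that cross entropy lacks without re-weighting.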
NASA Astrophysics Data System (ADS)
Tian, Yunfeng; Shen, Zheng-Kang
2016-02-01
We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors: distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Compared to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: it requires less intervention and is applicable to a dense network of any spatial extent. Our method can also be used to detect the CMC irrespective of its origin (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify station clusters that share common regional transient characteristics.
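The two-factor weighting can be illustrated with a toy stack. The weight function, thresholds, and data below are hypothetical stand-ins; the paper grid-searches its thresholds and derives correlations from the actual residual series:

```python
import numpy as np

def common_mode(residuals, distances, correlations,
                max_dist=500.0, min_corr=0.5):
    """Common-mode component at a target site as a weighted stack of
    neighboring sites' residual time series.

    residuals    : (n_sites, n_epochs) residual position time series
    distances    : (n_sites,) distance of each neighbor to the target (km)
    correlations : (n_sites,) correlation with the target's residuals
    """
    w = np.exp(-distances / max_dist) * correlations      # two-factor weight
    w[(distances > max_dist) | (correlations < min_corr)] = 0.0
    if w.sum() == 0.0:
        return np.zeros(residuals.shape[1])
    return (w[:, None] * residuals).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 200))           # shared transient
res = signal + 0.3 * rng.standard_normal((4, 200))        # 4 noisy neighbors
cmc = common_mode(res, distances=np.array([50.0, 80.0, 120.0, 200.0]),
                  correlations=np.array([0.9, 0.8, 0.8, 0.7]))
```

Because the site-specific noise is uncorrelated across stations, the weighted stack recovers the shared transient with substantially less scatter than any single residual series.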
Lenzi, Mauro; Finoia, Maria Grazia; Gennaro, Paola; Mercatali, Isabel; Persia, Emma; Solari, Jacopo; Porrello, Salvatore
2013-07-15
Harvesting of macroalgae by specially equipped boats in a shallow eutrophic lagoon produces evident sediment resuspension. To outline the environmental effects of this disturbance, we examined the quantity of fall-out and the distances travelled by sediment and macronutrients from the source of boat disturbance. Resuspended sediment fall-out (RSFO) was trapped at different distances from the boat path to determine total dry weight, total nitrogen (TN), total carbon (TC), total organic carbon (TOC), total sulphur (TS) and total phosphorus (TP). The data was analysed by principal components analysis (PCA) and linear discriminant analysis (LDA) on PCA factors. Fall-out of C, N, S and P from the plume of resuspended sediment indicated significant re-arrangement of these nutrients: RSFO dry weight and S content decreased with distance from the boat path, whereas TP increased and was the variable responsible for most discrimination at 100 m. The mass of resuspended matter was relatively large, indicating that the boats considerably reshuffle lagoon sediment. Copyright © 2013 Elsevier Ltd. All rights reserved.
Clarke, Ralph T; Liley, Durwyn; Sharp, Joanna M; Green, Rhys E
2013-01-01
Substantial new housing and infrastructure development planned within England has the potential to conflict with the nature conservation interests of protected sites. The Breckland area of eastern England (the Brecks) is designated as a Special Protection Area for a number of bird species, including the stone curlew (for which it holds more than 60% of the UK total population). We explore the effect of buildings and roads on the spatial distribution of stone curlew nests across the Brecks in order to inform strategic development plans and avoid adverse effects on such European protected sites. Using data across all years (and subsets of years) over the period 1988-2006, restricted to habitat areas of arable land with suitable soils, we assessed nest density in relation to the distances to the nearest settlements and major roads. Measures of the local density of nearby buildings, roads and traffic levels were assessed using normal kernel distance-weighting functions. Quasi-Poisson generalised linear mixed models allowing for spatial auto-correlation were fitted. Significantly lower densities of stone curlew nests were found at distances up to 1500 m from settlements, and up to 1000 m or more from major (trunk) roads. The best fitting models involved optimally distance-weighted variables for the extent of nearby buildings and the trunk road traffic levels. The results and predictions from this study of past data suggest there is cause for concern that future housing development and associated road infrastructure within the Breckland area could have negative impacts on the nesting stone curlew population. Given the strict legal protection afforded to the SPA, the planning and conservation bodies have subsequently agreed on precautionary restrictions on building development within the distances identified and used the modelling predictions to agree mitigation measures for proposed trunk road developments.
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2018-03-19
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used. Copyright © 2018 Elsevier Ltd. All rights reserved.
Spatial interpolation of monthly mean air temperature data for Latvia
NASA Astrophysics Data System (ADS)
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for appropriate and high-quality analysis of local climate characteristics. The surface observation station network in Latvia currently consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high-quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, with no additional topographical information taken into account. This made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, with candidate covariates including 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, distance from the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
Single-Image Distance Measurement by a Smart Mobile Device.
Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling
2017-12-01
Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
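The two-known-distance calibration behind a linear pixel-to-centimeter model can be sketched as follows. All reference values and names are hypothetical, and the orientation-dependent back-projection step from the paper is omitted:

```python
def fit_linear_model(px_1, cm_1, px_2, cm_2):
    """Fit cm = a * px + b from two reference distances; a plays the role
    of a magnification ratio (illustrative two-point calibration)."""
    a = (cm_2 - cm_1) / (px_2 - px_1)
    b = cm_1 - a * px_1
    return a, b

# two known reference distances visible in the image (hypothetical values)
a, b = fit_linear_model(px_1=100.0, cm_1=30.0, px_2=300.0, cm_2=90.0)
measured_cm = a * 200.0 + b   # an unknown distance spanning 200 pixels
```

With only a single known distance, b would have to be assumed zero; using two references absorbs any constant offset in the pixel measurements.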
Mueller, Noel T; Shin, Hakdong; Pizoni, Aline; Werlang, Isabel C; Matte, Ursula; Goldani, Marcelo Z; Goldani, Helena A S; Dominguez-Bello, Maria Gloria
2016-04-01
The intestinal microbiome is a unique ecosystem that influences metabolism in humans. Experimental evidence indicates that intestinal microbiota can transfer an obese phenotype from humans to mice. Since mothers transmit intestinal microbiota to their offspring during labor, we hypothesized that among vaginal deliveries, maternal body mass index is associated with neonatal gut microbiota composition. We report the association of maternal pre-pregnancy body mass index on stool microbiota from 74 neonates, 18 born vaginally (5 to overweight or obese mothers) and 56 by elective C-section (26 to overweight or obese mothers). Compared to neonates delivered vaginally to normal weight mothers, neonates born to overweight or obese mothers had a distinct gut microbiota community structure (weighted UniFrac distance PERMANOVA, p < 0.001), enriched in Bacteroides and depleted in Enterococcus, Acinetobacter, Pseudomonas, and Hydrogenophilus. We show that these microbial signatures are predicted to result in functional differences in metabolic signaling and energy regulation. In contrast, among elective Cesarean deliveries, maternal body mass index was not associated with neonatal gut microbiota community structure (weighted UniFrac distance PERMANOVA, p = 0.628). Our findings indicate that excess maternal pre-pregnancy weight is associated with differences in neonatal acquisition of microbiota during vaginal delivery, but not Cesarean delivery. These differences may translate to altered maintenance of metabolic health in the offspring.
Rühm, W; Walsh, L
2007-01-01
Currently, most analyses of the A-bomb survivors' solid tumour and leukaemia data are based on a constant neutron relative biological effectiveness (RBE) value of 10 that is applied to all survivors, independent of their distance to the hypocentre at the time of bombing. The results of these analyses are then used as a major basis for current risk estimates suggested by the International Commission on Radiological Protection (ICRP) for use in international safety guidelines. It is shown here that (i) a constant value of 10 is not consistent with weighting factors recommended by the ICRP for neutrons and (ii) it does not account for the hardening of the neutron spectra in Hiroshima and Nagasaki, which takes place with increasing distance from the hypocentres. The purpose of this paper is to present new RBE values for the neutrons, calculated as a function of distance from the hypocentres for both cities that are consistent with the ICRP60 neutron weighting factor. If based on neutron spectra from the DS86 dosimetry system, these calculations suggest values of about 31 at 1000 m and 23 at 2000 m ground range in Hiroshima, while the corresponding values for Nagasaki are 24 and 22. If the neutron weighting factor that is consistent with ICRP92 is used, the corresponding values are about 23 and 21 for Hiroshima and 21 and 20 for Nagasaki, respectively. It is concluded that the current risk estimates will be subject to some changes in view of the changed RBE values. This conclusion does not change significantly if the new doses from the Dosimetry System DS02 are used.
Buzzega, Dania; Maccari, Francesca; Volpi, Nicola
2010-03-11
Fluorophore-assisted carbohydrate electrophoresis (FACE) was applied to determine the molecular mass (M) values of various chondroitin sulfate (CS) samples. After labeling with 8-aminonaphthalene-1,3,6-trisulfonic acid (ANTS), FACE was able to resolve each CS sample as a discrete band depending on its M value. After densitometric acquisition, the migration distance of each CS standard was acquired and a third-degree polynomial calibration curve was determined by plotting the logarithms of the M values as a function of the migration ratio. Purified CS samples of different origin and the European Pharmacopoeia CS standard were analyzed by both FACE and conventional high-performance size-exclusion liquid chromatography (HPSEC) methods. The molecular weight value at the top of the chromatographic peak (M(p)), the number-average M(n), the weight-average M(w), and the polydispersity (M(w)/M(n)) were examined by both techniques and found to be quite similar. This study demonstrates that FACE analysis is a suitable, sensitive and simple method for the determination of the M values of CS macromolecules, with possible utilization in virtually any kind of research and development setting, such as quality control laboratories. Copyright 2009 Elsevier B.V. All rights reserved.
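The calibration step, a third-degree polynomial relating log molecular mass to migration ratio, can be sketched with invented standards (the migration ratios and masses below are hypothetical, not the paper's data):

```python
import numpy as np

# Hypothetical CS standards: migration ratio vs. known molecular mass (kDa)
migration = np.array([0.30, 0.45, 0.60, 0.75, 0.90])
log_m = np.log10(np.array([80.0, 45.0, 25.0, 14.0, 8.0]))

# Third-degree polynomial calibration curve: log10(M) as a function of
# migration ratio, as described in the abstract
coeffs = np.polyfit(migration, log_m, 3)

def mass_from_migration(ratio):
    """Predict molecular mass (kDa) from a band's migration ratio."""
    return 10.0 ** np.polyval(coeffs, ratio)
```

Once calibrated, the M value of any unknown band follows from its densitometrically measured migration ratio; faster-migrating (smaller) chains map to lower masses.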
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, H; Lee, Y; Ruschin, M
2015-06-15
Purpose: Automatically derive the electron density of tissues using MR images and generate a pseudo-CT for MR-only treatment planning of brain tumours. Methods: 20 stereotactic radiosurgery (SRS) patients' T1-weighted MR images and CT images were retrospectively acquired. First, a semi-automated tissue segmentation algorithm was developed to differentiate tissues with similar MR intensities and large differences in electron densities. The method started with approximately 12 slices of manually contoured spatial regions containing sinuses and airways; then air, bone, brain, cerebrospinal fluid (CSF) and eyes were automatically segmented using edge detection and anatomical information including location, shape, tissue uniformity and relative intensity distribution. Next, soft tissues (muscle and fat) were segmented based on their relative intensity histograms. Finally, intensities of voxels in each segmented tissue were mapped into their electron density range to generate the pseudo-CT by linearly fitting their relative intensity histograms. Co-registered CT was used as ground truth. The bone segmentations of the pseudo-CT were compared with those of the co-registered CT obtained using a 300 HU threshold. The average distances between voxels on the external edges of the skull in the pseudo-CT and CT were calculated in the three axial, coronal and sagittal slices with the largest skull width. The mean absolute electron density (in Hounsfield units) difference of voxels in each segmented tissue was calculated. Results: The average distance between voxels on the external skull from pseudo-CT and CT was 0.6±1.1 mm (mean±1SD). The mean absolute electron density differences for bone, brain, CSF, muscle and fat were 78±114 HU, 21±8 HU, 14±29 HU, 57±37 HU, and 31±63 HU, respectively. Conclusion: A semi-automated MR electron density mapping technique was developed using T1-weighted MR images.
The generated pseudo-CT is comparable to CT in terms of the anatomical position of tissues and the similarity of electron density assignment. This method can allow MR-only treatment planning.
Chen, Chia Lin; Lo, Chu Ling; Huang, Kai Chu; Huang, Chen Fu
2017-10-01
[Purpose] The aim of this study was to determine the intrarater reliability of ultrasonography as a measurement tool for assessing patella position in a weight-bearing condition. [Subjects and Methods] Ten healthy adults participated in this study. Ultrasonography was used to assess patella position during step-down with the loading knee in flexion (0° and 20°). The distance between the patella and the lateral condyle was measured to represent the patella position on the condylar groove. Measurements were obtained by the same investigator on the first day and again 1 week later. [Results] Excellent intrarater reliability, ranging from 0.83 to 0.93, was shown in both conditions. Standard errors of measurement were 0.5 mm with the knee straight and 0.7 mm with the knee flexed at 20°. Minimal detectable differences at 0° and 20° of knee flexion were 1.5 mm and 1.9 mm, respectively. [Conclusion] Ultrasonography is a reliable assessment tool for evaluating positional changes of the patella in weight-bearing activities, and it can easily be used by practitioners in the clinical setting.
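The reported values are consistent with the standard reliability formulas, in which the standard error of measurement (SEM) follows from the between-subject SD and the ICC, and the minimal detectable change at the 95% confidence level follows from the SEM. A minimal sketch, assuming these conventional definitions were used (the abstract does not state the exact formulas):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem_value

# With the abstract's SEM of 0.7 mm at 20 deg flexion, MDC95 comes out near
# the reported 1.9 mm.
change_20deg = mdc95(0.7)
```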
Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
NASA Astrophysics Data System (ADS)
Sison, Virgilio; Remillion, Monica
2017-10-01
Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^(2^(r-1)).
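The weight-based isometries above compose Gray-type maps; the simplest building block is the classical Gray map on ℤ4, which is a weight-preserving bijection from (ℤ4, Lee weight) to (F_2^2, Hamming weight). A sketch of that classical case only (the paper's maps on ℤ4 + uℤ4 are more elaborate compositions not reproduced here):

```python
# Classical Gray map on Z4: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(x):
    """Lee weight on Z4: min(x, 4 - x)."""
    x %= 4
    return min(x, 4 - x)

def gray_image(codeword):
    """Concatenate the Gray images of each Z4 symbol into a binary tuple."""
    return tuple(b for x in codeword for b in GRAY[x % 4])

def hamming_weight(bits):
    return sum(bits)

# Isometry property: the Lee weight of a Z4 word equals the Hamming weight
# of its binary Gray image.
word = (0, 1, 2, 3)
assert sum(lee_weight(x) for x in word) == hamming_weight(gray_image(word))
```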
Clustering of local group distances: Publication bias or correlated measurements? II. M31 and beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Grijs, Richard; Bono, Giuseppe
2014-07-01
The accuracy of extragalactic distance measurements ultimately depends on robust, high-precision determinations of the distances to the galaxies in the local volume. Following our detailed study addressing possible publication bias in the published distance determinations to the Large Magellanic Cloud (LMC), here we extend our distance range of interest to include published distance moduli to M31 and M33, as well as to a number of their well-known dwarf galaxy companions. We aim to reach consensus on the best, most homogeneous, and internally most consistent set of Local Group distance moduli to adopt for future, more general use, based on the largest set of distance determinations to individual Local Group galaxies available to date. Based on a careful, statistically weighted combination of the main stellar population tracers (Cepheids, RR Lyrae variables, and the magnitude of the tip of the red-giant branch), we derive a recommended distance modulus to M31 of (m−M)₀^M31 = 24.46±0.10 mag, adopting as our calibration an LMC distance modulus of (m−M)₀^LMC = 18.50 mag, and a fully internally consistent set of benchmark distances to key galaxies in the local volume, enabling us to establish a robust and unbiased, near-field extragalactic distance ladder.
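A statistically weighted combination of tracer-based distance moduli is commonly an inverse-variance weighted mean. A sketch under that assumption (the abstract does not specify the exact weighting scheme, and the moduli below are hypothetical values for illustration):

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of independent estimates and its uncertainty."""
    v = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # weight = 1 / sigma^2
    mean = np.sum(w * v) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# Hypothetical distance moduli (mag) from three tracers for one galaxy:
mu, sig = weighted_mean([24.44, 24.48, 24.50], [0.10, 0.12, 0.15])
```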
Constructing statistically unbiased cortical surface templates using feature-space covariance
NASA Astrophysics Data System (ADS)
Parvathaneni, Prasanna; Lyu, Ilwoo; Huo, Yuankai; Blaber, Justin; Hainline, Allison E.; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.
2018-03-01
The choice of surface template plays an important role in cross-sectional subject analyses involving cortical brain surfaces because there is a tendency toward registration bias given variations in inter-individual and inter-group sulcal and gyral patterns. In order to account for this bias and for spatial smoothing, we propose a feature-based unbiased average template surface. In contrast to prior approaches, we factor in the sample population covariance and assign weights based on feature information to minimize the influence of covariance in the sampled population. The mean surface is computed by applying the weights obtained from an inverse covariance matrix, which guarantees that multiple representations from similar groups (e.g., involving imaging, demographic, diagnosis information) are down-weighted to yield an unbiased mean in feature space. Results are validated by applying this approach in two different applications. For evaluation, the proposed unbiased weighted surface mean is compared with unweighted means both qualitatively and quantitatively (mean squared error and absolute relative distance of both means from baseline). In the first application, we validated the stability of the proposed optimal mean on a scan-rescan reproducibility dataset by incrementally adding duplicate subjects. In the second application, we used clinical research data to evaluate the difference between the weighted and unweighted means when different numbers of subjects were included in the control versus schizophrenia groups. In both cases, the proposed method achieved greater stability, indicating reduced impact of sampling bias. The weighted mean is built on covariance information in feature space as opposed to spatial location, making this a generic approach applicable to any feature of interest.
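The core down-weighting step can be sketched as a generalized-least-squares mean: weights proportional to the inverse of the sample-sample covariance, so that near-duplicate samples share weight instead of dominating. This is a generic sketch of the idea, not the paper's surface pipeline; the regularization constant and data are illustrative:

```python
import numpy as np

def covariance_weighted_mean(samples):
    """Mean of feature vectors weighted by the inverse sample covariance,
    so correlated (near-duplicate) samples are down-weighted."""
    X = np.asarray(samples, dtype=float)           # shape (n_samples, n_features)
    C = np.cov(X)                                  # n x n covariance between samples
    C += 1e-6 * np.eye(C.shape[0])                 # regularize for invertibility
    w = np.linalg.solve(C, np.ones(C.shape[0]))    # w proportional to C^{-1} 1
    w /= w.sum()
    return w @ X

# Rows 0 and 1 are near duplicates; they jointly get roughly one sample's weight.
m = covariance_weighted_mean([[1.0, 2.0, 3.0, 4.0],
                              [1.0, 2.0, 3.0, 4.1],
                              [4.0, 0.0, 5.0, 2.0]])
```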
Estevan, Isaac; Alvarez, Octavio; Falco, Coral; Molina-García, Javier; Castillo, Isabel
2011-10-01
The execution distance is a tactical factor that affects mechanical performance and execution technique in taekwondo. This study analyzes the roundhouse kick to the head by comparing the maximum impact force, execution time, and impact time at 3 distances according to the athletes' competition level. It also analyzes the relationship between impact force and weight in each group, and examines whether the execution distance affects the maximum impact force, execution time, and impact time within each level group and between the 2 competition levels. Participants were 27 male taekwondo players (13 medallists and 14 nonmedallists). The medallists executed the roundhouse kick to the head with greater impact force and in a shorter execution time than did the nonmedallists when they kicked from any distance different from their combat distance. However, the results showed that the execution distance influences the execution time and impact time in the nonmedallist group. It is considered appropriate to orientate high-level competitors to train offensive actions from any distance similar to the long execution distance, because it offers equal effectiveness and greater security against the opponent. Also, practitioners should focus their training on improving time performance, because it is more affected by distance than impact force is.
NASA Technical Reports Server (NTRS)
Park, Sang C.; Carnahan, Timothy M.; Cohen, Lester M.; Congedo, Cherie B.; Eisenhower, Michael J.; Ousley, Wes; Weaver, Andrew; Yang, Kan
2017-01-01
The JWST Optical Telescope Element (OTE) assembly is the largest optically stable infrared-optimized telescope currently being manufactured and assembled, and is scheduled for launch in 2018. The JWST OTE, including the 18-segment primary mirror, the secondary mirror, and the Aft Optics Subsystem (AOS), is designed to be passively cooled and to operate near 45 K. These optical elements are supported by a complex composite backplane structure. As part of the structural distortion model validation efforts, a series of tests is planned during the cryogenic vacuum test of the fully integrated flight hardware at NASA JSC Chamber A. The success of the thermal-distortion test phases depends heavily on accurate knowledge of the temperatures of the OTE structural members. However, the current temperature sensor allocations during the cryo-vac test may not have sufficient fidelity to provide accurate knowledge of the temperature distributions within the composite structure. A method based on an inverse-distance relationship among the sensors and thermal model nodes was developed to improve the thermal data provided for the nanometer-scale WaveFront Error (WFE) predictions. The Linear Distance Weighted Interpolation (LDWI) method was developed to augment the thermal model predictions based on the sparse sensor information. This paper encompasses the development of the LDWI method using test data from the earlier pathfinder cryo-vac tests, and the results of the notional and as-tested WFE predictions from the structural finite element model cases to characterize the accuracy of the LDWI method.
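Distance-weighted interpolation of sparse sensor readings onto model nodes can be sketched as inverse-distance weighting: each node's temperature is a weighted average of sensor readings, with weights falling off with distance. This is a generic sketch in the spirit of LDWI; the flight method's exact weighting law is not given in the abstract, and the positions and temperatures below are hypothetical:

```python
import numpy as np

def idw(query, sensor_xyz, sensor_temps, power=1.0, eps=1e-9):
    """Inverse-distance-weighted estimate at a model node from sparse sensors."""
    d = np.linalg.norm(np.asarray(sensor_xyz, dtype=float)
                       - np.asarray(query, dtype=float), axis=1)
    w = 1.0 / (d + eps) ** power       # closer sensors get larger weights
    return float(np.sum(w * np.asarray(sensor_temps, dtype=float)) / np.sum(w))

# Hypothetical sensors at known positions (m) with measured temperatures (K):
t = idw((0.5, 0.0, 0.0),
        [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        [45.0, 46.0, 44.0])
```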
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide the weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, the inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images.
The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
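In its simplest form, the probability summation baseline treats the detector families as independent, so the stimulus is detected if any one of them responds. A generic sketch of that independence rule (the study's actual PS prediction also involves the detectors' psychometric functions, which are not reproduced here):

```python
def probability_summation(p_list):
    """Detection probability under independent probability summation:
    the target is detected if at least one detector responds."""
    q = 1.0
    for p in p_list:
        q *= (1.0 - p)          # probability that no detector responds
    return 1.0 - q

# Four detectors (one per retinal position), each at 30% detection probability:
p = probability_summation([0.3, 0.3, 0.3, 0.3])
```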
NASA Astrophysics Data System (ADS)
Itatani, Keiichi; Okada, Takashi; Uejima, Tokuhisa; Tanaka, Tomohiko; Ono, Minoru; Miyaji, Kagami; Takenaka, Katsu
2013-07-01
We have developed a system to estimate velocity vector fields inside the cardiac ventricle by echocardiography and to evaluate several flow-dynamics parameters for assessing the pathophysiology of cardiovascular diseases. A two-dimensional continuity equation was applied to color Doppler data using speckle-tracking data as boundary conditions, and the velocity component perpendicular to the echo beam line was obtained. We determined the optimal smoothing method for the color Doppler data: a Gaussian filter with an 8-pixel standard deviation provided vorticity without non-physiological stripe-shaped noise. We also determined the weight function at the bilateral boundaries given by the speckle-tracking data of the ventricular or vascular wall motion: a weight function linear in the distance from the boundary provided accurate flow velocities not only inside the vortex flow but also in near-wall regions, based on validation against a digital phantom of a pipe-flow model.
NASA Astrophysics Data System (ADS)
Roy, M.; Maksym, P. A.; Bruls, D.; Offermans, P.; Koenraad, P. M.
2010-11-01
An effective-mass theory of subsurface scanning tunneling microscopy (STM) is developed. Subsurface structures such as quantum dots embedded into a semiconductor slab are considered. States localized around subsurface structures match on to a tail that decays into the vacuum above the surface. It is shown that the lateral variation in this tail may be found from a surface envelope function provided that the effects of the slab surfaces and the subsurface structure decouple approximately. The surface envelope function is given by a weighted integral of a bulk envelope function that satisfies boundary conditions appropriate to the slab. The weight function decays into the slab inversely with distance and this slow decay explains the subsurface sensitivity of STM. These results enable STM images to be computed simply and economically from the bulk envelope function. The method is used to compute wave-function images of cleaved quantum dots and the computed images agree very well with experiment.
Small arms mini-fire control system: fiber-optic barrel deflection sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajic, Slobodan; Datskos, Panos G
Traditionally, the methods to increase firearm accuracy, particularly at distance, have concentrated on barrel isolation (free floating) and substantial barrel wall thickening to gain rigidity. This barrel stiffening technique did not completely eliminate barrel movement, but the problem was reduced enough to allow a noticeable accuracy enhancement. This process, although highly successful, came at a very high weight penalty. Obviously the goal would be to lighten the barrel (firearm), yet achieve even greater accuracy. Thus, if lightweight barrels could ultimately be compensated for both their static and dynamic mechanical perturbations, the result would be very accurate, yet significantly lighter, weapons. We discuss our development of a barrel reference sensor system designed to accomplish this ambitious goal. Our optical fiber-based sensor monitors the barrel muzzle position and autonomously compensates for any induced perturbations. The reticle is electronically adjusted in position to compensate for the induced barrel deviation in real time.
Dynamics of ultralight aircraft: Dive recovery of hang gliders
NASA Technical Reports Server (NTRS)
Jones, R. T.
1977-01-01
Longitudinal control of a hang glider by weight shift is not always adequate for recovery from a vertical dive. According to Lanchester's phugoid theory, recovery from rest to horizontal flight ought to be possible within a distance equal to three times the height of fall needed to acquire level flight velocity. A hang glider having a wing loading of 5 kg/m² and capable of developing a lift coefficient of 1.0 should recover to horizontal flight within a vertical distance of about 12 m. The minimum recovery distance can be closely approached if the glider is equipped with a small all-moveable tail surface having sufficient upward deflection.
NASA Astrophysics Data System (ADS)
Zhang, Kun; Zhang, Hu; Song, Qiuzhi
2018-01-01
In this paper, a single-idler electronic belt-conveyor scale is the object of study. The contact force between the belt and the supporting roller is calculated with the finite element analysis software ABAQUS, and the relationship between the tensioning distance of the tension wheel and the contact force between the belt and the weighing roller is obtained. The best tensioning distance is found through this analysis. The analysis also shows that, at the same tensioning distance, the weighing error differs with the weight of the conveyed material. A compensation mechanism is proposed to improve the weighing accuracy.
Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi
2016-05-01
We propose a fully automatic 3D segmentation framework to segment the liver in challenging cases involving low contrast with adjacent organs and the presence of pathologies, from abdominal CT images. First, all atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image, to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is then determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set within the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. On the 25 test CT datasets, our method achieves an average symmetric surface distance of [Formula: see text] mm (range 0.62-2.12 mm), a root mean square symmetric surface distance error of [Formula: see text] mm (range 0.97-3.01 mm), and a maximum symmetric surface distance error of [Formula: see text] mm (range 12.73-26.67 mm). On the 10 MICCAI test datasets, our method ranks 10th among the 47 automatic algorithms listed on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool for improving segmentation efficiency. The applicability of the proposed method to some challenging clinical problems and to liver segmentation is demonstrated with good results in both quantitative and qualitative experiments.
This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
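The average symmetric surface distance reported above is a standard metric: the mean of the nearest-neighbour distances from each surface to the other, pooled over both directions. A brute-force sketch on surfaces sampled as point sets (real evaluations work on surface meshes and voxel grids):

```python
import numpy as np

def assd(points_a, points_b):
    """Average symmetric surface distance between two surfaces sampled as point sets."""
    A = np.asarray(points_a, dtype=float)
    B = np.asarray(points_b, dtype=float)
    # All pairwise distances, then the nearest-neighbour distance in each direction.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    a_to_b = d.min(axis=1)     # for each point of A, distance to nearest point of B
    b_to_a = d.min(axis=0)     # for each point of B, distance to nearest point of A
    return (a_to_b.sum() + b_to_a.sum()) / (len(A) + len(B))
```

The maximum symmetric surface distance (symmetric Hausdorff distance) replaces the pooled mean with the maximum over both direction-wise nearest-neighbour distances.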
Performance Analysis of Entropy Methods on K Means in Clustering Process
NASA Astrophysics Data System (ADS)
Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib
2017-12-01
K-Means is a non-hierarchical data clustering method that partitions data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize an objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, a main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may place two nearby points as two centroids. Therefore, the entropy method is used to determine the starting points for K-Means; it is a method for assigning weights and making decisions over a set of alternatives. Entropy can investigate the degree of discrimination among a multitude of data sets: criteria with the highest variation in values receive the highest weight. The entropy method thus helps the K-Means process by determining the starting points, which are usually chosen at random, so that the iteration process converges faster than in standard K-Means. Using the postoperative-patient dataset from the UCI Machine Learning Repository, with only 12 records as a worked example, the entropy method reaches the desired end result in only 2 iterations.
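The entropy weighting described above is usually computed column-wise: normalize each criterion, compute its Shannon entropy, and assign weights proportional to one minus the entropy, so criteria with more variation get larger weights. A generic sketch of that standard entropy weight method (the paper's exact initialization procedure may differ):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: criteria (columns) with more variation get larger weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                       # normalize each criterion (column)
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    k = 1.0 / np.log(X.shape[0])                # scaling so entropy lies in [0, 1]
    e = -k * (P * logs).sum(axis=0)             # entropy of each criterion
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()

# Column 0 is constant (no discrimination), column 1 varies strongly:
w = entropy_weights([[1, 10], [1, 1], [1, 1]])
```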
Calculated dipole moment and energy in collision of a hydrogen molecule and a hydrogen atom
NASA Technical Reports Server (NTRS)
Patch, R. W.
1973-01-01
Calculations were carried out using three Slater-type 1s orbitals in the orthogonalized valence-bond theory of McWeeny. Each orbital exponent was optimized; the H2 internuclear distance was varied from 7.416×10⁻¹¹ to 7.673×10⁻¹¹ m (1.401 to 1.450 bohrs). The intermolecular distance was varied from 1 to 4 bohrs (0.5292×10⁻¹⁰ to 2.117×10⁻¹⁰ m). Linear, scalene, and isosceles configurations were used. A weighted average of the interaction energies was taken for each intermolecular distance. Although energies are tabulated, the principal purpose was to calculate the electric dipole moment and its derivative with respect to the H2 internuclear distance.
Huang, Rongyong; Zheng, Shunyi; Hu, Kun
2018-06-01
Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the LiDAR data surface. In addition to a satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results show that the unit-weighted root mean square (RMS) error of the image points reaches the sub-pixel level (0.45 to 0.62 pixel), and that the actual horizontal and vertical accuracies are greatly improved, to 1/4-1/2 (0.17-0.27 m) and 1/8-1/4 (0.10-0.15 m) of the average LiDAR point distance, respectively. The method is thus shown to be accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.
Disability Affects the 6-Minute Walking Distance in Obese Subjects (BMI>40 kg/m2)
Donini, Lorenzo Maria; Poggiogalle, Eleonora; Mosca, Veronica; Pinto, Alessandro; Brunani, Amelia; Capodaglio, Paolo
2013-01-01
Introduction In obese subjects, the relative reduction of skeletal muscle strength, the reduced cardio-pulmonary capacity and tolerance to effort, the higher metabolic cost and consequent inefficiency of gait, together with the increased prevalence of co-morbid conditions, may interfere with walking. Performance tests, such as the six-minute walking test (6MWT), can unveil the limitations in cardio-respiratory and motor functions underlying obesity-related disability. The aims of the present study were therefore to explore the determinants of the 6-minute walking distance (6MWD) and to investigate the predictors of interruption of the walk test in obese subjects. Methods Obese patients [body mass index (BMI)>40 kg/m2] were recruited from January 2009 to December 2011. Anthropometry, body composition, a specific questionnaire for Obesity-related Disabilities (TSD-OC test), fitness status and 6MWT data were evaluated. The correlation between the 6MWD and the potential independent variables (anthropometric parameters, body composition, muscle strength, flexibility and disability) was analysed. Variables that were individually correlated with the response variable were included in a multivariate regression model. Finally, the correlation between nutritional and functional parameters and test interruption was investigated. Results 354 subjects (87 males, mean age 48.5±14 years; 267 females, mean age 49.8±15 years) were enrolled in the study. Age, weight, height, BMI, fat mass and fat-free mass indexes, handgrip strength and disability were significantly correlated with the 6MWD and were considered in the multivariate analysis. The coefficient of determination of the regression analysis ranged from 0.21 to 0.47 for the different models. Body weight, BMI, waist circumference, TSD-OC test score and flexibility were found to be predictors of 6MWT interruption.
Discussion The present study demonstrated the impact of disability in obese subjects, together with age, anthropometric data, body composition and strength, on the 6-minute walking distance. PMID:24146756
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visual detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which lead to erroneous analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method: border voxels are deleted symmetrically and iteratively so as to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
Sibsonian and non-Sibsonian natural neighbour interpolation of the total electron content value
NASA Astrophysics Data System (ADS)
Kotulak, Kacper; Froń, Adam; Krankowski, Andrzej; Pulido, German Olivares; Hernandez-Pajares, Manuel
2017-03-01
In radioastronomy, interferometric measurement between radio telescopes located relatively close to each other helps remove ionospheric effects. Unfortunately, in the case of networks such as the LOw Frequency ARray (LOFAR), with long baselines (currently up to 1500 km), interferometric methods fail to provide sufficiently accurate ionospheric delay corrections. In practice this means that systems such as LOFAR need external ionosphere information, coming from Global or Regional Ionospheric Maps (GIMs or RIMs, respectively). Thanks to technology based on Global Navigation Satellite Systems (GNSS), the scientific community is provided with ionosphere sounding virtually worldwide. In this paper we compare several interpolation methods for RIM computation based on scattered Vertical Total Electron Content measurements located on one thin ionospheric layer (Ionospheric Pierce Points, IPPs). The results of this work show that methods that take into account the topology of the data distribution (e.g., natural neighbour interpolation) perform better than those based on geometric computation only (e.g., distance-weighted methods).
Linguistic hesitant fuzzy multi-criteria decision-making method based on evidential reasoning
NASA Astrophysics Data System (ADS)
Zhou, Huan; Wang, Jian-qiang; Zhang, Hong-yu; Chen, Xiao-hong
2016-01-01
Linguistic hesitant fuzzy sets (LHFSs), which can represent decision-makers' qualitative preferences as well as reflect their hesitancy and inconsistency, have attracted a great deal of attention due to their flexibility and efficiency. This paper focuses on a multi-criteria decision-making approach that combines LHFSs with the evidential reasoning (ER) method. After reviewing existing studies of LHFSs, a new order relationship and a Hamming distance between LHFSs are introduced, and some linguistic scale functions are applied. Then, the ER algorithm is used to aggregate the distributed assessment of each alternative on each criterion, and the aggregated assessments across criteria are further combined to obtain the overall value of each alternative. Furthermore, a nonlinear programming model is developed and genetic algorithms are used to obtain the optimal weights of the criteria. Finally, two illustrative examples are provided to show the feasibility and usability of the method, and a comparison analysis with an existing method is made.
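For orientation, the plain hesitant-fuzzy Hamming distance (without the linguistic component) is commonly defined by sorting both membership sets, extending the shorter one, and averaging the absolute differences. The sketch below shows only this simpler hesitant-fuzzy case; the paper's new distance for LHFSs additionally involves linguistic terms and linguistic scale functions, which are not reproduced here:

```python
def hfe_hamming(h1, h2):
    """Hamming distance between two hesitant fuzzy elements
    (finite sets of membership degrees in [0, 1])."""
    a, b = sorted(h1), sorted(h2)
    l = max(len(a), len(b))
    # Pessimistic extension: pad the shorter set by repeating its minimum value.
    a = [a[0]] * (l - len(a)) + a
    b = [b[0]] * (l - len(b)) + b
    return sum(abs(x - y) for x, y in zip(a, b)) / l
```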
Methods for Assessment of Memory Reactivation.
Liu, Shizhao; Grosmark, Andres D; Chen, Zhe
2018-04-13
It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing memory reactivation. To date, several statistical methods have been established for assessing memory reactivation based on bursts of ensemble neural spike activity during offline states. Using population-decoding methods, we propose a new statistical metric, the weighted distance correlation, to assess hippocampal memory reactivation (i.e., spatial memory replay) during quiet wakefulness and slow-wave sleep. The new metric can be combined with an unsupervised population decoding analysis, which is invariant to latent state labeling and allows us to detect statistical dependency beyond linearity in memory traces. We validate the new metric using two rat hippocampal recordings in spatial navigation tasks. Our proposed analysis framework may have a broader impact on assessing memory reactivation in other brain regions under different behavioral tasks.
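The proposed metric builds on the sample distance correlation, which captures nonlinear as well as linear dependency between two variables. A sketch of the unweighted statistic for 1-D series (the paper's weighted variant adds weights to this construction and is not reproduced here):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                # squared distance covariance
    dvarx = (A * A).mean()
    dvary = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvarx * dvary))

# A perfectly linear relationship yields distance correlation 1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
d = distance_correlation(x, [2.0 * v + 1.0 for v in x])
```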
Geographies of an Online Social Network
Lengyel, Balázs; Varga, Attila; Ságvári, Bence; Jakobi, Ákos; Kertész, János
2015-01-01
How is online social media activity structured in geographical space? Recent studies have shown that, in spite of earlier visions about the "death of distance", physical proximity is still a major factor in social tie formation and maintenance in virtual social networks. Yet it is unclear what the characteristics of the distance dependence in online social networks are. In order to explore this issue, the complete network of the former major Hungarian online social network is analyzed. We find that the distance dependence is weaker for the online social network ties than what was found earlier for phone communication networks. For further analysis we introduced a coarser granularity: we identified settlements with the nodes of a network and assigned two kinds of weights to the links between them. When the weights are proportional to the number of contacts, we observed weakly formed but spatially based modules that resemble the borders of macro-regions, the highest level of regional administration in the country. If the weights are defined relative to an uncorrelated null model, the next level of administrative regions, the counties, is reflected. PMID:26359668
Satellite Telemetry and Long-Range Bat Movements
Smith, Craig S.; Epstein, Jonathan H.; Breed, Andrew C.; Plowright, Raina K.; Olival, Kevin J.; de Jong, Carol; Daszak, Peter; Field, Hume E.
2011-01-01
Background Understanding the long-distance movement of bats has direct relevance to studies of population dynamics, ecology, disease emergence, and conservation. Methodology/Principal Findings We developed and trialed several collar and platform terminal transmitter (PTT) combinations on both free-living and captive fruit bats (Family Pteropodidae: Genus Pteropus). We examined transmitter weight, size, profile and comfort as key determinants of maximized transmitter activity. We then tested the importance of bat-related variables (species size/weight, roosting habitat and behavior) and environmental variables (day-length, rainfall pattern) in determining optimal collar/PTT configuration. We compared battery- and solar-powered PTT performance in various field situations, and found the latter more successful in maintaining voltage on species that roosted higher in the tree canopy, and at lower density, than those that roost more densely and lower in trees. Finally, we trialed transmitter accuracy, and found that actual distance errors and Argos location class error estimates were in broad agreement. Conclusions/Significance We conclude that no single collar or transmitter design is optimal for all bat species, and that species size/weight, species ecology and study objectives are key design considerations. Our study provides a strategy for collar and platform choice that will be applicable to a larger number of bat species as transmitter size and weight continue to decrease in the future. PMID:21358823
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
2018-06-01
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located 1 to 6 m away. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 collected the same responses as Experiment 1 but with the two methods interleaved, showing a weak but complex mutual influence; the estimates obtained with each method nevertheless remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.
Groneberg, David A.
2016-01-01
We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far, only distance decay functions with constant parameters have been applied. We therefore developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e., the median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes, defined by the asymptotic approach to zero of the distance decay function, was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method in an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherently provides effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
Design for robustness of unique, multi-component engineering systems
NASA Astrophysics Data System (ADS)
Shelton, Kenneth A.
2007-12-01
The purpose of this research is to advance the science of conceptual designing for robustness in unique, multi-component engineering systems. Robustness is herein defined as the ability of an engineering system to operate within a desired performance range even if the actual configuration has differences from specifications within specified tolerances. These differences are caused by three sources, namely manufacturing errors, system degradation (operational wear and tear), and parts availability. Unique, multi-component engineering systems are defined as systems produced in unique or very small production numbers. They typically have design and manufacturing costs on the order of billions of dollars, and have multiple, competing performance objectives. Design time for these systems must be minimized due to competition, high manpower costs, long manufacturing times, technology obsolescence, and limited available manpower expertise. Most importantly, design mistakes cannot be easily corrected after the systems are operational. For all these reasons, robustness of these systems is absolutely critical. This research examines the space satellite industry in particular. Although inherent robustness assurance is absolutely critical, it is difficult to achieve in practice. The current state of the art for robustness in the industry is to overdesign components and subsystems with redundancy and margin. The shortfall is that it is not known if the added margins were either necessary or sufficient given the risk management preferences of the designer or engineering system customer. To address this shortcoming, new assessment criteria to evaluate robustness in design concepts have been developed. The criteria are comprised of the "Value Distance", addressing manufacturing errors and system degradation, and "Component Distance", addressing parts availability. 
They are based on an evolutionary computation format that uses a string of alleles to describe the components in the design concept. These allele values are unitless themselves, but map to both configuration descriptions and attribute values. The Value Distance and Component Distance are metrics that measure the relative differences between two design concepts using the allele values, and all differences in a population of design concepts are calculated relative to a reference design, called the "base design". The base design is the top-ranked member of the population in weighted terms of robustness and performance. Robustness is determined based on the change in multi-objective performance as Value Distance and Component Distance (and thus differences in design) increases. It is assessed as acceptable if differences in design configurations up to specified tolerances result in performance changes that remain within a specified performance range. The design configuration difference tolerances and performance range together define the designer's risk management preferences for the final design concepts. Additionally, a complementary visualization capability was developed, called the "Design Solution Topography". This concept allows the visualization of a population of design concepts, and is a 3-axis plot where each point represents an entire design concept. The axes are the Value Distance, Component Distance and Performance Objective. The key benefit of the Design Solution Topography is that it allows the designer to visually identify and interpret the overall robustness of the current population of design concepts for a particular performance objective. In a multi-objective problem, each performance objective has its own Design Solution Topography view. These new concepts are implemented in an evolutionary computation-based conceptual designing method called the "Design for Robustness Method" that produces robust design concepts. 
The design procedures associated with this method enable designers to evaluate and ensure robustness in selected designs that also perform within a desired performance range. The method uses an evolutionary computation-based procedure to generate populations of large numbers of alternative design concepts, which are assessed for robustness using the Value Distance, Component Distance and Design Solution Topography procedures. The Design for Robustness Method provides a working conceptual designing structure in which to implement and gain the benefits of these new concepts. In the included experiments, the method was used on several mathematical examples to demonstrate feasibility, which showed favorable results as compared to existing known methods. Furthermore, it was tested on a real-world satellite conceptual designing problem to illustrate the applicability and benefits to industry. Risk management insights were demonstrated for the robustness-related issues of manufacturing errors, operational degradation, parts availability, and impacts based on selections of particular types of components.
Time-dependent polar distribution of outgassing from a spacecraft
NASA Technical Reports Server (NTRS)
Scialdone, J. J.
1974-01-01
A technique has been developed to obtain a characterization of the self-generated environment of a spacecraft and its variation with time, angular position, and distance. The density, pressure, outgassing flux, total weight loss, and other important parameters were obtained from data provided by two mass-measuring crystal microbalances, mounted back to back at a distance of 1 m from the spacecraft equivalent surface. A major outgassing source existed at an angular position of 300 deg to 340 deg, near the rocket motor, while the weakest source was at the antennas. The strongest source appeared to be caused by a material diffusion process which produced a directional density at 1 m distance of about 1.6 x 10 to the 11th power molecules/cu cm after 1 hr in vacuum, decaying to 1.6 x 10 to the 9th power molecules/cu cm after 200 hr. The total average outgassing flux at the same distance and during the same time span changed from 1.2 x 10 to the minus 7th power to 1.4 x 10 to the minus 10th power g/sq cm/s. These values are three times as large at the spacecraft surface. Total weight loss was 537 g after 10 hr and about 833 g after 200 hr. Self-contamination of the spacecraft was equivalent to that in orbit at about 300-km altitude.
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard- or soft-decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code; the theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15, with even-weight words congruent to zero modulo four. Decoding for hard and soft decisions is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, while all the rest have weights greater than or equal to 16.
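The weight-distribution bookkeeping above (for a linear code, the minimum distance equals the smallest nonzero codeword weight) can be illustrated on a much smaller code; the [8,4,4] extended Hamming code in this sketch is a stand-in chosen for brevity, not the (48,24;12) code of the abstract:

```python
import itertools
import numpy as np

# Generator matrix of the [8,4,4] extended Hamming code -- a small
# self-dual stand-in used only to illustrate weight-distribution checks.
G = np.array([
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
], dtype=int)

def weight_distribution(G):
    """Enumerate all 2^k codewords and tally their Hamming weights."""
    k = G.shape[0]
    dist = {}
    for msg in itertools.product([0, 1], repeat=k):
        w = int((np.array(msg) @ G % 2).sum())
        dist[w] = dist.get(w, 0) + 1
    return dist

wd = weight_distribution(G)
# For a linear code, minimum distance = minimum nonzero codeword weight.
d_min = min(w for w in wd if w > 0)
```

Exhaustive enumeration is only feasible for small dimensions; for the length-48, dimension-24 code the paper relies on algebraic structure instead.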
Self-selection accounts for inverse association between weight and cardiorespiratory fitness.
Williams, Paul T
2008-01-01
Men and women who exercise regularly and who are physically fit tend to be leaner than those who are sedentary and not fit. Although exercise is known to attenuate weight gain and promote weight loss, there may also be a propensity for leaner men and women to choose to exercise vigorously (self-selection). Pre-exercise body weights have been shown to account for all of the weight differences between fast and slow walkers, but seem to account for only a portion of the weight differences associated with walking distances. Whether these results apply to maximum exercise performance (i.e., cardiorespiratory fitness) as well as to doses of vigorous exercise (metabolic equivalents >6) remains to be determined. Our objective was to assess whether the cross-sectional relationships of BMI to cardiorespiratory fitness and vigorous activity are explained by BMI prior to exercising. We conducted a cross-sectional study of the relationships of cardiorespiratory fitness (running speed during a 10 km foot race) and vigorous physical activity (weekly running distance) to current BMI (BMI(current)) and BMI at the start of running (BMI(starting)) in 44,370 male and 25,252 female participants of the National Runners' Health Study. BMI(starting) accounted entirely for the association between fitness and BMI(current) in both sexes, but for only a quarter of the association between vigorous physical activity levels and BMI(current) in men. In women, BMI(starting) accounted for 58% of the association between BMI(current) and vigorous activity levels. Self-selection based on pre-exercise BMI accounts entirely for the association found between fitness and BMI (and possibly for a portion of other health outcomes).
The Association of Ambient Air Pollution and Physical Inactivity in the United States
Roberts, Jennifer D.; Voss, Jameson D.; Knight, Brandon
2014-01-01
Background Physical inactivity, ambient air pollution and obesity are modifiable risk factors for non-communicable diseases, with the first accounting for 10% of premature deaths worldwide. Although community-level interventions may target each simultaneously, research on the relationship between these risk factors is lacking. Objectives After comparing spatial interpolation methods to determine the best predictor for particulate matter (PM2.5; PM10) and ozone (O3) exposures throughout the U.S., we evaluated the cross-sectional association of ambient air pollution with leisure-time physical inactivity among adults. Methods In this cross-sectional study, we assessed leisure-time physical inactivity using individual self-reported survey data from the Centers for Disease Control and Prevention's 2011 Behavioral Risk Factor Surveillance System. These data were combined with county-level U.S. Environmental Protection Agency air pollution exposure estimates using two interpolation methods (Inverse Distance Weighting and Empirical Bayesian Kriging). Finally, we evaluated whether those exposed to higher levels of air pollution were less active by performing logistic regression, adjusting for demographic and behavioral risk factors, and after stratifying by body weight category. Results With Empirical Bayesian Kriging air pollution values, we estimated a statistically significant 16–35% relative increase in the odds of leisure-time physical inactivity per exposure-class increase of PM2.5 in the fully adjusted model among normal-weight respondents (p-value<0.0001). Evidence suggested a relationship between increasing PM2.5 exposure and increasing odds of physical inactivity. Conclusions In a nationally representative, cross-sectional sample, increased community-level air pollution is associated with reduced leisure-time physical activity, particularly among normal-weight respondents.
Although our design precludes a causal inference, these results provide additional evidence that air pollution should be investigated as an environmental determinant of inactivity. PMID:24598907
Furuya, Ken; Akiyama, Shinji; Nambu, Atushi; Suzuki, Yutaka; Hasebe, Yuusuke
2017-01-01
We aimed to apply the pediatric abdominal CT protocol of Donnelly et al. in the United States to pediatric abdominal CT-AEC. Examining CT images of 100 children, we found that the sectional area of the hepatic portal region (y) was strongly correlated with body weight (x) as follows: y = 7.14x + 84.39 (correlation coefficient = 0.9574). We scanned an elliptical cone phantom that simulates the human body using the pediatric abdominal CT scanning method of Donnelly et al. and measured SD values. We further scanned the same phantom under the settings for adult CT-AEC scans and obtained the relationship between the sectional areas (y) and the SD values. Using these results, we obtained the following preset noise factors for CT-AEC in each body weight range: 6.90 at 4.5-8.9 kg, 8.40 at 9.0-17.9 kg, 8.68 at 18.0-26.9 kg, 9.89 at 27.0-35.9 kg, 12.22 at 36.0-45.0 kg, 13.52 at 45.1-70.0 kg, and 15.29 above 70 kg. From the relation between age, weight and the distance between the liver and the tuber ischiadicum in 500 children, we obtained the CTDIvol and DLP values under the scanning protocol of Donnelly et al. Almost all of the DRLs derived from these values turned out to be smaller than the DRL data of the IAEA and various countries. Thus, by setting the maximum current values of CT-AEC to Donnelly et al.'s age-wise current values and using our weight-wise noise factors, we believe pediatric abdominal CT-AEC scans can be performed with the same radiation safety and image quality as those proposed by Donnelly et al.
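The reported regression and weight-banded noise factors lend themselves to a small lookup sketch; the behaviour below 4.5 kg here is our simplification rather than part of the protocol:

```python
def portal_section_area(weight_kg):
    # Regression reported above: y = 7.14x + 84.39 (r = 0.9574).
    return 7.14 * weight_kg + 84.39

# Weight-banded noise factors from the abstract (band upper bounds in kg).
NOISE_FACTORS = [
    (8.9, 6.90), (17.9, 8.40), (26.9, 8.68), (35.9, 9.89),
    (45.0, 12.22), (70.0, 13.52), (float("inf"), 15.29),
]

def noise_factor(weight_kg):
    """Return the preset CT-AEC noise factor for a given body weight."""
    for upper_bound, factor in NOISE_FACTORS:
        if weight_kg <= upper_bound:
            return factor
```

For example, a 20 kg child falls in the 18.0-26.9 kg band and receives a noise factor of 8.68.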
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than on individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
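A minimal sketch of the core step, pairwise generalized Mahalanobis distances between species' fitted-coefficient vectors, assuming a common covariance matrix is supplied (the paper's pooled-covariance construction and the clustering/scaling steps are omitted):

```python
import numpy as np

def mahalanobis_matrix(coefs, cov):
    # Pairwise generalized Mahalanobis distances between fitted
    # coefficient vectors; `cov` stands in for the paper's pooled
    # covariance estimate (an assumption of this sketch).
    inv = np.linalg.inv(cov)
    coefs = np.asarray(coefs, dtype=float)
    n = coefs.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            diff = coefs[i] - coefs[j]
            D[i, j] = D[j, i] = float(np.sqrt(diff @ inv @ diff))
    return D
```

The resulting symmetric matrix can then be handed to any standard hierarchical clustering or multidimensional scaling routine; with an identity covariance the measure reduces to ordinary Euclidean distance.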
NASA Astrophysics Data System (ADS)
Crespi, Alice; Brunetti, Michele; Maugeri, Maurizio
2017-04-01
The availability of gridded high-resolution spatial climatologies and corresponding secular records has acquired increasing importance in recent years, both for research purposes and as decision-support tools in the management of natural resources and economic activities. High-resolution monthly precipitation climatologies for Italy were computed by gridding, on a 30-arc-second-resolution Digital Elevation Model (DEM), the precipitation normals (1961-1990) obtained from a quality-controlled dataset of about 6200 stations covering the Italian surface and part of the neighbouring regions to the north. Starting from the assumption that the precipitation distribution is strongly influenced by orography, especially elevation, a local weighted linear regression (LWLR) of precipitation versus elevation was performed at each DEM cell. The regression coefficients for each cell were estimated by selecting the stations with the highest weights, where the weights take into account both the distance and the degree of orographic similarity between each station's cell and the considered grid cell. An optimisation procedure was then set up in order to define, for each month and for each grid cell, the most suitable decay coefficients for the weighting factors that enter the LWLR scheme. The model was validated by comparison with the results provided by inverse distance weighting (IDW), applied both to station normals and to the residuals of a global regression of station normals versus elevation. In both cases, the LWLR leave-one-out reconstructions show the best agreement with the observed station normals, especially for specific station clusters (high-elevation sites, for example). After producing the high-resolution precipitation climatological field, the temporal component on the high-resolution grid was obtained by following the anomaly method.
It is based on the assumption that the spatio-temporal structure of the signal of a meteorological variable over a certain area can be described by the superimposition of two independent fields: the climatologies and the anomalies, i.e. the departures from the normal values. The secular precipitation anomaly records were thus estimated for each cell of the grid by averaging the anomaly values of neighbouring stations by means of Gaussian weighting functions, taking into account both the distance and the elevation difference between the stations and the considered grid cell. The local secular precipitation records were then obtained by multiplying the local estimated anomalies by the corresponding 1961-1990 normals. To compute the anomaly field, a different dataset was used, selecting the stations with the longest series and extending them both into the past, retrieving data from non-digitised archives, and into the more recent decades. In particular, after a careful procedure of updating, quality checking and homogenising the series, this methodology was applied to two Italian areas characterised by very different orography: the Sardinia region and the Alpine areas within the Adda basin.
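The anomaly-gridding step can be sketched as a Gaussian distance- and elevation-weighted average; the decay constants in this sketch are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def gaussian_weights(dist_km, elev_diff_m, c_d=100.0, c_h=500.0):
    # Separate Gaussian decay in horizontal distance and elevation
    # difference; c_d and c_h are illustrative, not the paper's values.
    return np.exp(-(dist_km / c_d) ** 2) * np.exp(-(elev_diff_m / c_h) ** 2)

def grid_anomaly(station_anoms, dist_km, elev_diff_m):
    """Weighted average of neighbouring-station anomalies for one grid cell."""
    w = gaussian_weights(np.asarray(dist_km, float), np.asarray(elev_diff_m, float))
    return float(np.sum(w * np.asarray(station_anoms, float)) / np.sum(w))

def local_record(anomaly, normal_1961_1990):
    # Absolute value = anomaly (expressed as a ratio) times the 1961-1990 normal.
    return anomaly * normal_1961_1990
```

Stations that are both nearby and at a similar elevation dominate the average, while distant or elevation-mismatched stations are smoothly down-weighted.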
Distance learning in academic health education.
Mattheos, N; Schittek, M; Attström, R; Lyon, H C
2001-05-01
Distance learning is an apparent alternative to traditional methods in education of health care professionals. Non-interactive distance learning, interactive courses and virtual learning environments exist as three different generations in distance learning, each with unique methodologies, strengths and potential. Different methodologies have been recommended for distance learning, varying from a didactic approach to a problem-based learning procedure. Accreditation, teamwork and personal contact between the tutors and the students during a course provided by distance learning are recommended as motivating factors in order to enhance the effectiveness of the learning. Numerous assessment methods for distance learning courses have been proposed. However, few studies report adequate tests for the effectiveness of the distance-learning environment. Available information indicates that distance learning may significantly decrease the cost of academic health education at all levels. Furthermore, such courses can provide education to students and professionals not accessible by traditional methods. Distance learning applications still lack the support of a solid theoretical framework and are only evaluated to a limited extent. Cases reported so far tend to present enthusiastic results, while more carefully-controlled studies suggest a cautious attitude towards distance learning. There is a vital need for research evidence to identify the factors of importance and variables involved in distance learning. The effectiveness of distance learning courses, especially in relation to traditional teaching methods, must therefore be further investigated.
2012-01-01
Background Motorised travel and associated carbon dioxide (CO2) emissions generate substantial health costs; in the case of motorised travel, this may include contributing to rising obesity levels. Obesity has in turn been hypothesised to increase motorised travel and/or CO2 emissions, both because heavier people may use motorised travel more and because heavier people may choose larger and less fuel-efficient cars. These hypothesised associations have not been examined empirically, however, nor has previous research examined associations with other health characteristics. Our aim was therefore to examine how and why weight status, health, and physical activity are associated with transport CO2 emissions. Methods 3463 adults completed questionnaires in the baseline iConnect survey at three study sites in the UK, reporting their health, weight, height and past-week physical activity. Seven-day recall instruments were used to assess travel behaviour and, together with data on car characteristics, were used to estimate CO2 emissions. We used path analysis to examine the extent to which active travel, motorised travel and car engine size explained associations between health characteristics and CO2 emissions. Results CO2 emissions were higher in overweight or obese participants (multivariable standardized probit coefficients 0.16, 95% CI 0.08 to 0.25 for overweight vs. normal weight; 0.16, 95% CI 0.04 to 0.28 for obese vs. normal weight). Lower active travel and, particularly for obesity, larger car engine size explained 19-31% of this effect, but most of the effect was directly explained by greater distance travelled by motor vehicles. Walking for recreation and leisure-time physical activity were associated with higher motorised travel distance and therefore higher CO2 emissions, while active travel was associated with lower CO2 emissions. Poor health and illness were not independently associated with CO2 emissions. 
Conclusions Establishing the direction of causality between weight status and travel behaviour requires longitudinal data, but the association with engine size suggests that there may be at least some causal effect of obesity on CO2 emissions. More generally, transport CO2 emissions are associated in different ways with different health-related characteristics. These include associations between health goods and environmental harms (recreational physical activity and high emissions), indicating that environment-health ‘co-benefits’ cannot be assumed. Instead, attention should also be paid to identifying and mitigating potential areas of tension, for example by promoting low-carbon recreational physical activity. PMID:22862811
Detecting cis-regulatory binding sites for cooperatively binding proteins
van Oeffelen, Liesbeth; Cornelis, Pierre; Van Delm, Wouter; De Ridder, Fedor; De Moor, Bart; Moreau, Yves
2008-01-01
Several methods are available to predict cis-regulatory modules in DNA based on position weight matrices. However, the performance of these methods generally depends on a number of additional parameters that cannot be derived from sequences and are difficult to estimate because they have no physical meaning. Since the best way to detect cis-regulatory modules is to model how the proteins themselves recognize them, we developed a new scoring method that utilizes the underlying physical binding model. This method requires no additional parameter to account for multiple binding sites, and the only parameters necessary to model homotypic cooperative interactions are the distances between adjacent protein binding sites in base pairs and the corresponding cooperative binding constants. The heterotypic cooperative binding model requires one more parameter per cooperatively binding protein, namely its concentration multiplied by its partition function. In a case study on the bacterial ferric uptake regulator, we show that our scoring method for homotypic cooperatively binding proteins significantly outperforms other PWM-based methods in which biophysical cooperativity is not taken into account. PMID:18400778
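For contrast with the biophysical model, the standard PWM log-odds scan that such methods build on can be sketched as follows; the matrix values here are invented purely for illustration:

```python
# Standard log-odds PWM scan (the baseline the biophysical model extends).
# The scores below are made-up illustration values, not a real motif.
PWM = {
    "A": [ 1.2, -0.5, -1.0,  0.8],
    "C": [-0.7,  1.0, -0.3, -1.2],
    "G": [-1.1, -0.8,  1.4, -0.6],
    "T": [ 0.1, -0.2, -1.5,  0.9],
}

def best_site(seq, pwm=PWM):
    """Return (offset, score) of the best-scoring window of the sequence."""
    width = len(next(iter(pwm.values())))
    scores = {i: sum(pwm[seq[i + j]][j] for j in range(width))
              for i in range(len(seq) - width + 1)}
    return max(scores.items(), key=lambda kv: kv[1])
```

The abstract's point is that scoring sites independently like this ignores cooperativity; the biophysical model instead combines site occupancies through binding constants and partition functions.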
Zou, He; Zhu, Xiuruo; Zhang, Jia; Wang, Yi; Wu, Xiaozhen; Liu, Fang; Xie, Xiaofeng
2017-01-01
Background The six-minute walk test (6MWT) is a safe, simple, inexpensive tool for evaluating the functional exercise capacity of patients with chronic respiratory disease. However, there is a lack of standard reference equations for the six-minute walk distance (6MWD) in the healthy Chinese population aged 18–59 years. Aims The purposes of the present study were as follows: 1) to measure the anthropometric data and walking distance of a sample of healthy Chinese Han people aged 18–59 years; 2) to construct reference equations for the 6MWD; 3) to compare the measured 6MWD with previously published equations. Method The anthropometric data, demographic information, lung function, and walking distance of Chinese adults aged 18–59 years were prospectively measured using a standardized protocol. We obtained verbal consent from all subjects before the test, and the study design was approved by the ethics committee of Wenzhou People's Hospital. The 6MWT was performed twice, and the longer distance was used for further analysis. Results A total of 643 subjects (319 females and 324 males) completed the 6MWT, and the average walking distance was 601.6±55.51 m. The walking distance differed between females and males (578±49.85 m vs. 623±52.53 m; p < 0.0001) and between physically active and sedentary subjects (609.3±56.17 m vs. 592±53.23 m; p < 0.0001). Pearson's correlation indicated that the 6MWD was significantly correlated with various demographic and 6MWT variables, such as age, height, weight, body mass index (BMI), heart rate after the test, and the difference in heart rate before and after the test. Stepwise multiple regression analysis showed that age and height were independent predictors of the 6MWD. The reference equations from white, Canadian and Chilean populations tended to overestimate the walking distance in our subjects, while Brazilian and Arabian equations tended to underestimate it.
There was no significant difference in the walking distance between Korean reference equations and the results of the current study. Conclusion In summary, age and height were the most significant predictors of the 6MWD, and regression equations could explain approximately 34% and 28% of the distance variance in the female and male groups, respectively. PMID:28910353
ERIC Educational Resources Information Center
Wonacott, Michael E.
Both face-to-face and distance learning methods are currently being used in adult education and career and technical education. In theory, the advantages of face-to-face and distance learning methods complement each other. In practice, however, both face-to-face and information and communications technology (ICT)-based distance programs often rely…
Do minority and poor neighborhoods have higher access to fast-food restaurants in the United States?
James, Peter; Arcaya, Mariana C.; Parker, Devin M.; Tucker-Seeley, Reginald
2016-01-01
Background Disproportionate access to unhealthy foods in poor or minority neighborhoods may be a primary determinant of obesity disparities. We investigated whether fast-food access varies by Census block group (CBG) percent black and percent poverty. Methods We measured the average driving distance from each CBG population-weighted centroid to the five closest outlets of the top ten fast-food chains, along with CBG percent black and percent below poverty. Results Among 209,091 CBGs analyzed (95.1% of all US CBGs), CBG percent black was positively associated with fast-food access after controlling for population density and percent poverty (the average distance to fast food was 3.56 miles shorter (95% CI: -3.64, -3.48) in CBGs in the highest versus lowest quartile of percentage of black residents). Poverty was not independently associated with fast-food access. The relationship between fast-food access and race was stronger in CBGs with higher levels of poverty (p for interaction <0.0001). Conclusions Predominantly black neighborhoods had higher access to fast food, while poverty was not an independent predictor of fast-food access. PMID:24945103
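The access measure described above can be sketched as a short computation. The study used network driving distances from population-weighted centroids; this illustration substitutes straight-line distance as a simplification:

```python
import math

def avg_distance_to_k_closest(centroid, sites, k=5):
    """Average distance from a block-group centroid to its k closest
    fast-food sites. The study used driving distance; Euclidean distance
    here is a simplifying assumption for illustration."""
    dists = sorted(math.dist(centroid, s) for s in sites)
    return sum(dists[:k]) / k
```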
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise, practical way to extract road-marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on the scan lines. Second, an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is then reduced by removing small pixel patches from the binary image. Finally, the point cloud mapped back from the binary image is clustered into marking objects by Euclidean distance, and a series of algorithms, including template matching and feature-attribute filtering, classifies linear markings, arrow markings, and guidelines. Processing point cloud data collected by a RIEGL VUX-1 in the study area, the method achieves an F-score of 0.83 for marking extraction and an average classification rate of 0.9.
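The IDW interpolation step used to rasterize point intensities can be sketched as below, assuming 2D sample points with scalar intensity values (the paper's pixel grid and power parameter are not specified, so both are illustrative):

```python
def idw(query, points, values, power=2.0, eps=1e-12):
    """Inverse distance weighted estimate of intensity at `query` from
    scattered (point, value) samples, as used to rasterize LiDAR intensity
    into an image. Each sample is weighted by 1 / distance**power."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (query[0] - x) ** 2 + (query[1] - y) ** 2
        if d2 < eps:               # query coincides with a sample point
            return v
        w = d2 ** (-power / 2.0)   # 1 / d**power, computed from d squared
        num += w * v
        den += w
    return num / den
```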
Effect of dopant on electrical properties of PVA doped NaF polymer electrolyte films
NASA Astrophysics Data System (ADS)
Irfan, Mohammed; Razikha Banu, S.; Manjunath, A.; Mahesh, S. S.
2018-05-01
Polymer electrolyte films of poly(vinyl alcohol) (PVA) doped with sodium fluoride (NaF) at different weight ratios (6, 8, and 10 wt%) were prepared by the solution casting method. The AC conductivity was found to increase with temperature and frequency, with the exponent S ranging from 0.7 to 0.9. The correlated barrier hopping (CBH) model is used because S is temperature dependent, decreasing with increasing temperature. The dielectric constant is high in the low-frequency region owing to the presence of several types of polarization mechanisms. The X-ray diffraction (XRD) pattern of pure PVA shows a characteristic peak at 2θ = 19.49°, indicating its semi-crystalline nature. On incorporation of NaF salt into the polymer, the peak intensity decreases gradually, suggesting a decrease in the degree of crystallinity of the complex. The CBH model is used to calculate the polaron binding energy (WM), the hopping distance (R), the minimum hopping distance (Rmin), and the activation energy (Ea), and the results are discussed.
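The exponent S is conventionally obtained from the slope of log σ_AC versus log ω. A minimal least-squares sketch, with illustrative data rather than the paper's measurements:

```python
import math

def fit_ac_exponent(freqs, sigmas):
    """Estimate the exponent s in sigma_AC = A * omega**s by an ordinary
    least-squares fit of log(sigma) against log(omega). In the CBH picture
    s lies below 1 and decreases with rising temperature."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx
```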
NASA Astrophysics Data System (ADS)
Shekar, B. H.; Bhat, S. S.
2017-05-01
Locating the boundary parameters of the pupil and iris and segmenting the noise-free iris portion are the most challenging phases of an automated iris recognition system. In this paper, we present a person-authentication framework that uses particle swarm optimization (PSO) to locate the iris region and the circular Hough transform (CHT) to derive the boundary parameters. To mitigate the effect of noise present in the segmented iris region, we divide the candidate region into N patches and use fuzzy c-means clustering (FCM) to classify the patches into clean iris regions and noisy regions based on the probability density function of each patch. A weighted mean Hamming distance is adopted to compute the dissimilarity score between two candidate irises. We use Log-Gabor, Riesz, and Taylor series expansion (TSE) filters, and combinations of the three, for iris feature extraction. To demonstrate the feasibility of the proposed method, we experimented on three publicly available data sets: IITD, MMU v-2, and CASIA v-4 Distance.
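A weighted mean Hamming distance of the kind described can be sketched as follows, assuming binary feature codes with one weight per bit (e.g. higher weights for bits from patches FCM labels as clean iris, lower weights for noisy patches; the weighting scheme here is illustrative):

```python
def weighted_hamming(code_a, code_b, weights):
    """Weighted mean Hamming distance between two binary iris codes:
    sum of weights on mismatched bits, normalized by the total weight.
    Returns 0.0 for identical codes and 1.0 for fully opposite ones."""
    num = sum(w for a, b, w in zip(code_a, code_b, weights) if a != b)
    den = sum(weights)
    return num / den
```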
SLOPE STABILITY EVALUATION AND EQUIPMENT SETBACK DISTANCES FOR BURIAL GROUND EXCAVATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
MCSHANE DS
2010-03-25
After 1970, transuranic (TRU) and suspect-TRU waste was buried in the ground with the intention that at some later date the waste would be retrieved and processed into a configuration for long-term storage. To retrieve this waste, the soil must be excavated. Sloping the bank of the excavation is the method used to keep the excavation from collapsing and to protect workers retrieving the waste. The purpose of this paper is to document the minimum distance (setback) that equipment must stay from the edge of the excavation to maintain a stable slope. This evaluation examines the equipment setback distance by dividing the equipment into two categories: (1) equipment used for excavation and (2) equipment used for retrieval. The section on excavation equipment also discusses excavation techniques, including the process of benching. Calculations 122633-C-004, 'Slope Stability Analysis' (Attachment A), and 300013-C-001, 'Crane Stability Analysis' (Attachment B), have been prepared to support this evaluation. As shown in the calculations, the soil has the following properties: unit weight 110 pounds per cubic foot, and friction angle (natural angle of repose) 38°, i.e., 1.28 horizontal to 1 vertical. Setback distances are measured from the top edge of the slope to the wheels/tracks of the vehicles and heavy equipment being utilized. The computer program used in the calculation takes the center of the wheel or track load for the analysis, and this difference is accounted for in this evaluation.
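The stated friction angle fixes the slope geometry: at 38°, each foot of depth requires about 1.28 feet of horizontal offset. The sketch below shows only this geometric relationship; it is not a substitute for the slope-stability and crane-stability calculations the evaluation actually relies on:

```python
import math

def slope_offset(depth_ft, friction_angle_deg=38.0):
    """Horizontal distance from the excavation toe to the top edge of a
    slope cut at the soil's natural angle of repose. At 38 degrees this
    is roughly 1.28 ft horizontal per 1 ft vertical. Simplified geometry
    only; equipment setback must additionally account for surcharge loads."""
    return depth_ft / math.tan(math.radians(friction_angle_deg))
```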
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiberi, David A.; Carrier, Jean-Francois; Beauchemin, Marie-Claude
2012-09-01
Purpose: To determine the extent of gold fiducial marker (FM) migration in patients treated for prostate cancer with concurrent androgen deprivation and external-beam radiation therapy (EBRT). Methods and Materials: Three or 4 gold FMs were implanted in 37 patients with prostate adenocarcinoma receiving androgen deprivation therapy (ADT) in conjunction with 70-78 Gy. Androgen deprivation therapy was started a median of 3.9 months before EBRT (range, 0.3-12.5 months). To establish the extent of FM migration, the distance between each FM was calculated for 5-8 treatments once per week throughout the EBRT course. For each treatment, the distance between FMs was compared with the distance from the digitally reconstructed radiographs generated from the planning CT. A total of 281 treatments were analyzed. Results: The average daily migration was 0.8 ± 0.3 mm, with distances ranging from 0.2 mm to 2.6 mm. Two of the 281 assessed treatments (0.7%) showed migrations >2 mm. No correlation between FM migration and patient weight or the time delay between ADT and the start of EBRT was found. There was no correlation between the extent of FM migration and prostate volume. Conclusion: This is the largest report of implanted FM migration in patients receiving concomitant ADT. Only 0.7% of the 281 treatments studied had significant marker migrations (>2 mm) throughout the course of EBRT. Consequently, the use of implanted FMs in these patients enables accurate monitoring of prostate gland position during treatment.
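The migration measure, the change in inter-marker distances between the planning CT and a treatment session, can be sketched as below, assuming 3D marker coordinates in millimetres; the helper name and the use of a mean (rather than per-pair reporting) are illustrative:

```python
import math

def marker_migration(planning_positions, treatment_positions):
    """Mean absolute change in the pairwise inter-marker distances between
    the planning CT and one treatment session. Under the paper's criterion,
    a change above 2 mm would count as a significant migration."""
    def pairwise(pts):
        return [math.dist(pts[i], pts[j])
                for i in range(len(pts)) for j in range(i + 1, len(pts))]
    diffs = [abs(a - b) for a, b in zip(pairwise(planning_positions),
                                        pairwise(treatment_positions))]
    return sum(diffs) / len(diffs)
```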
30 CFR 77.1608 - Dumping facilities.
Code of Federal Regulations, 2011 CFR
2011-07-01
... weight of a loaded dump truck, trucks shall be dumped a safe distance back from the edge of the bank. (c... material. (d) Grizzlies, grates, and other sizing devices at dump and transfer points shall be anchored...
Selecting a pharmacy layout design using a weighted scoring system.
McDowell, Alissa L; Huang, Yu-Li
2012-05-01
A weighted scoring system was used to select a pharmacy layout redesign. Facilities layout design techniques were applied at a local hospital pharmacy using a step-by-step design process: observing and analyzing the current situation and the currently available space; completing activity flow charts of the pharmacy processes; completing communication and material relationship charts detailing which areas in the pharmacy were related to one another and how; researching applications in other pharmacies and in scholarly works that could be beneficial; numerically defining space requirements for areas within the pharmacy; measuring the available space within the pharmacy; developing a set of preliminary designs; and modifying the preliminary designs until all were acceptable to the pharmacy staff. To select a final layout for implementation, the candidate layouts were compared via a weighted scoring system, whose weights placed additional emphasis on categories according to their effect on pharmacy performance. The result was a beneficial layout design, as determined through simulated models of the pharmacy operation, that more effectively allocated and strategically located space to improve transportation distances and materials handling, employee utilization, and ergonomics. Facilities layout designs for the hospital pharmacy were evaluated with the weighted scoring system to identify a design superior to both the current layout and alternative layouts in terms of feasibility, cost, patient safety, employee safety, flexibility, robustness, transportation distance, employee utilization, objective adherence, maintainability, usability, and environmental impact.
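A weighted scoring comparison of this kind can be sketched as follows; the layout names, criteria, and weights are illustrative, not the study's actual values:

```python
def score_layouts(layouts, weights):
    """Weighted scoring of candidate layouts. `layouts` maps layout name to
    {criterion: raw score}; `weights` maps criterion to importance. Returns
    the best layout name and the full table of weighted totals."""
    totals = {name: sum(weights[c] * s for c, s in scores.items())
              for name, scores in layouts.items()}
    return max(totals, key=totals.get), totals
```

The weighted total makes the emphasis explicit: doubling a criterion's weight doubles its influence on the ranking, which is how categories critical to pharmacy performance can dominate the selection.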
Effect of emerging technology on a convertible, business/interceptor, supersonic-cruise jet
NASA Technical Reports Server (NTRS)
Beissner, F. L., Jr.; Lovell, W. A.; Robins, A. W.; Swanson, E. E.
1986-01-01
This study was initiated to assess the feasibility of an eight-passenger, supersonic-cruise long range business jet aircraft that could be converted into a military missile carrying interceptor. The baseline passenger version has a flight crew of two with cabin space for four rows of two passenger seats plus baggage and lavatory room in the aft cabin. The ramp weight is 61,600 pounds with an internal fuel capacity of 30,904 pounds. Utilizing an improved version of a current technology low-bypass ratio turbofan engine, range is 3,622 nautical miles at Mach 2.0 cruise and standard day operating conditions. Balanced field takeoff distance is 6,600 feet and landing distance is 5,170 feet at 44,737 pounds. The passenger section from aft of the flight crew station to the aft pressure bulkhead in the cabin was modified for the interceptor version. Bomb bay type doors were added and volume is sufficient for four advanced air-to-air missiles mounted on a rotary launcher. Missile volume was based on a Phoenix type missile with a weight of 910 pounds per missile for a total payload weight of 3,640 pounds. Structural and equipment weights were adjusted and result in a ramp weight of 63,246 pounds with a fuel load of 30,938 pounds. Based on a typical intercept mission flight profile, the resulting radius is 1,609 nautical miles at a cruise Mach number of 2.0.
Raabe, Joshua K.; Hightower, Joseph E.
2014-01-01
Despite extensive management and research, populations of American Shad Alosa sapidissima have experienced prolonged declines, and uncertainty about the underlying mechanisms causing these declines remains. In the springs of 2007 through 2010, we used a resistance board weir and PIT technology to capture, tag, and track American Shad in the Little River, North Carolina, a tributary to the Neuse River with complete and partial removals of low-head dams. Our objectives were to examine migratory behaviors and estimate weight loss, survival, and abundance during each spawning season. Males typically immigrated earlier than females and also used upstream habitat at a higher percentage, but otherwise exhibited relatively similar migratory patterns. Proportional weight loss displayed a strong positive relationship with both cumulative water temperature during residence time and number of days spent upstream, and to a lesser extent, minimum distance the fish traveled in the river. Surviving emigrating males lost up to 30% of their initial weight and females lost up to 50% of their initial weight, indicating there are potential survival thresholds. Survival for the spawning season was low and estimates ranged from 0.07 to 0.17; no distinct factors (e.g., sex, size, migration distance) that could contribute to survival were detected. Sampled and estimated American Shad abundance increased from 2007 through 2009, but was lower in 2010. Our study provides substantial new information about American Shad spawning that may aid restoration efforts.
Shobugawa, Yugo; Wiafe, Seth A; Saito, Reiko; Suzuki, Tsubasa; Inaida, Shinako; Taniguchi, Kiyosu; Suzuki, Hiroshi
2012-06-19
Annual influenza epidemics occur worldwide, resulting in considerable morbidity and mortality. The spreading pattern of influenza is not well understood because analysis is often hampered by the quality of surveillance data. In Japan, influenza is reported on a weekly basis from 5,000 hospitals and clinics nationwide under the scheme of the National Infectious Disease Surveillance. The collected data are available to the public as weekly reports summarizing the number of patient visits per hospital or clinic in each of the 47 prefectures. From these surveillance data, we analyzed the spatial spreading patterns of influenza epidemics using the weekly weighted standard distance (WSD) from the 1999/2000 through 2008/2009 influenza seasons in Japan. The WSD is a single numerical value representing the spatial compactness of an influenza outbreak: it is small for a clustered distribution and large for a dispersed one. We demonstrated that the weekly WSD value, i.e., the spatial compactness of the distribution of reported influenza cases, decreased to its lowest value before each epidemic peak in nine of the ten seasons analyzed. The interval between the lowest-WSD week and the peak week of influenza cases ranged from minus one week to twenty weeks. This interval showed a significant negative association with the proportion of influenza A/H3N2 cases in the early phase of each outbreak (correlation coefficient -0.75, P = 0.012) and a significant positive association with the proportion of influenza B cases in the early phase (correlation coefficient 0.64, P = 0.045); it was also positively, though not significantly, correlated with the proportion of influenza A/H1N1 cases. The lowest WSD values just before influenza peaks are presumably due to local outbreaks, which yield small standard-distance values. 
As influenza cases disperse nationwide and the epidemic reaches its peak, the WSD value increases progressively. The spatial distribution of the nationwide influenza outbreak was measured using this novel WSD method, and we showed that the spreading rate varied by type and subtype of influenza virus. This study is the first to show a relationship between the influenza epidemic trend by type/subtype and the nationwide spatial distribution of influenza in Japan.
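The WSD statistic can be sketched directly from its standard definition, assuming each prefecture is represented by a coordinate pair weighted by its weekly case count:

```python
import math

def weighted_standard_distance(xs, ys, weights):
    """Weighted standard distance: the dispersion of case locations around
    the weighted mean centre, with each location weighted by its case count.
    Small values indicate a clustered outbreak, large values a dispersed one."""
    w = sum(weights)
    mx = sum(wi * x for wi, x in zip(weights, xs)) / w   # weighted mean centre
    my = sum(wi * y for wi, y in zip(weights, ys)) / w
    var = sum(wi * ((x - mx) ** 2 + (y - my) ** 2)
              for wi, x, y in zip(weights, xs, ys)) / w
    return math.sqrt(var)
```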
Jeffrey, P D; Nichol, L W; Smith, G D
1975-01-25
A method is presented by which an experimental record of total concentration as a function of radial distance, obtained in a sedimentation equilibrium experiment conducted with a noninteracting mixture in the absence of a density gradient, may be analyzed to obtain the unimodal distributions of molecular weight and of partial molar volume when these vary concomitantly and continuously. Particular attention is given to the characterization of classes of lipoproteins exhibiting Gaussian distributions of these quantities, although the analysis is applicable to other types of unimodal distribution. Equations are also formulated permitting the definition of the corresponding distributions of partial specific volume and of density. The analysis procedure is based on a method (employing Laplace transforms) developed previously, but differs from it in that it avoids the necessity of differentiating experimental results, which introduces error. The method offers certain advantages over other procedures used to characterize and compare lipoprotein samples (exhibiting unimodal distributions) with regard to the duration of the experiment, economy of the sample, and, particularly, the ability to define in principle all of the relevant distributions from one sedimentation equilibrium experiment and an external measurement of the weight-average partial specific volume. These points and the steps in the analysis procedure are illustrated with experimental results obtained in the sedimentation equilibrium of a sample of human serum low-density lipoprotein. The experimental parameters (such as solution density, column height, and angular velocity) used in conducting these experiments were selected on the basis of computer-simulated examples, which are also presented. These provide a guide for other workers interested in characterizing lipoproteins of this class.
Weights of Evidence Method for Landslide Susceptibility Mapping in Takengon, Central Aceh, Indonesia
NASA Astrophysics Data System (ADS)
Pamela; Sadisun, Imam A.; Arifianti, Yukni
2018-02-01
Takengon is an area prone to earthquake and landslide disasters. On July 2, 2013, the Central Aceh earthquake induced large numbers of landslides in the Takengon area, resulting in 39 casualties. This location was chosen for a landslide susceptibility assessment using a statistical method, the weights of evidence (WoE). The WoE model was applied to identify the main factors influencing landslide-susceptible areas and to derive a landslide susceptibility map of Takengon. The 251 landslides were randomly divided into a modeling/training set (70%) and a validation/test set (30%). Twelve thematic evidence maps (slope degree, slope aspect, lithology, land cover, elevation, rainfall, lineament, peak ground acceleration, curvature, flow direction, and distance to rivers and to roads) were used as landslide causative factors. According to the AUC, the most significant factors controlling landslides are, in order, slope, slope aspect, peak ground acceleration, elevation, lithology, flow direction, lineament, and rainfall. Validation with the test landslide data gives an AUC prediction rate of 0.819, and the AUC success rate with all landslide data included is 0.879. These results show that the selected factors and the WoE method form a good model for assessing landslide susceptibility. The landslide susceptibility map of Takengon shows probabilities representing relative degrees of landslide proneness in the Takengon area.
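The WoE weights for a single evidence class (e.g. one slope bin) follow the standard W+/W− definitions, sketched below with illustrative pixel counts; this is the textbook formulation, not necessarily the exact implementation used in the study:

```python
import math

def weights_of_evidence(n_class_slide, n_class, n_slide_total, n_total):
    """Positive and negative weights for one evidence class:
    W+ = ln(P(class | landslide) / P(class | no landslide)), and W- is the
    same ratio for the complement of the class. The contrast C = W+ - W-
    summarizes how strongly the class discriminates landslide locations."""
    in_slide = n_class_slide                       # landslide pixels in class
    in_stable = n_class - n_class_slide            # stable pixels in class
    out_slide = n_slide_total - n_class_slide      # landslide pixels outside
    out_stable = (n_total - n_class) - out_slide   # stable pixels outside
    n_stable_total = n_total - n_slide_total
    w_plus = math.log((in_slide / n_slide_total) / (in_stable / n_stable_total))
    w_minus = math.log((out_slide / n_slide_total) / (out_stable / n_stable_total))
    return w_plus, w_minus, w_plus - w_minus
```

Summing the applicable weights over all evidence layers at each pixel, then ranking pixels, is what produces the relative susceptibility map; the AUC figures quoted above measure how well that ranking separates observed landslides from stable ground.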
Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2009-12-01
We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights in the real semiaxis, of the form , with γ > 0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.
NASA Astrophysics Data System (ADS)
Wang, Dandan; Zhao, Gong-Bo; Wang, Yuting; Percival, Will J.; Ruggeri, Rossana; Zhu, Fangzhou; Tojeiro, Rita; Myers, Adam D.; Chuang, Chia-Hsun; Baumgarten, Falk; Zhao, Cheng; Gil-Marín, Héctor; Ross, Ashley J.; Burtin, Etienne; Zarrouk, Pauline; Bautista, Julian; Brinkmann, Jonathan; Dawson, Kyle; Brownstein, Joel R.; de la Macorra, Axel; Schneider, Donald P.; Shafieloo, Arman
2018-06-01
We present a measurement of the anisotropic and isotropic Baryon Acoustic Oscillations (BAO) from the extended Baryon Oscillation Spectroscopic Survey Data Release 14 quasar sample with optimal redshift weights. Applying the redshift weights improves the constraint on the BAO dilation parameter α(zeff) by 17 per cent. We reconstruct the evolution history of the BAO distance indicators in the redshift range of 0.8 < z < 2.2. This paper is part of a set that analyses the eBOSS DR14 quasar sample.
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2008-10-01
We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth-root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^(−φ(x)), giving a unified treatment for the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) cases. In addition, we provide a new proof of the bound on the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.
Design and construction of smart cane using infrared laser-based tracking system
NASA Astrophysics Data System (ADS)
Wong, Chi Fung; Phitagragsakul, Narikorn; Jornsamer, Patcharaporn; Kaewmeesri, Pimsin; Jantakot, Pimsunan; Locharoenrat, Kitsakorn
2018-06-01
This work aims to design and construct a smart cane. An infrared laser-based sensor was used as the distance detector, and an Arduino board served as the microcontroller. In addition, Bluetooth was used as a wireless communicator, and an MP3 module together with a headset served as a voice-alert player. The smart cane is an effective device for users under indoor guidance: obstacles were detectable up to 3,000 cm (30 m) from the user. The white cane, assembled with the laser distance sensor and distance-alert module, is a compact, lightweight device. Distance detection was fast and precise when the smart cane was tested against different indoor obstacles, such as a human, a wall, and a wooden table.
Fixed Nadir Focus Concentrated Solar Power Applying Reflective Array Tracking Method
NASA Astrophysics Data System (ADS)
Setiawan, B.; Damayanti, A. M.; Murdani, A.; Habibi, I. I. A.; Wakidah, R. N.
2018-04-01
The Sun is one of the most promising renewable energy sources to develop; one of its uses is in solar thermal concentrators, i.e., CSP (Concentrated Solar Power). In CSP energy conversion, the concentrator moves to track the sunlight and reach the focal point. This method consumes considerable energy because the concentrator unit is heavy, and a large CSP unit is wider and heavier still. The added weight and width increase the torque needed to drive the concentrator and to withstand wind gusts. One method to reduce energy consumption is to direct the sunlight with a reflective array to nadir through a CSP with a reflective Fresnel-lens concentrator. The focus lies below, in the nadir direction, and the concentrator remains in a fixed position even as the sun's elevation angle changes from morning to afternoon. The energy is thus concentrated maximally, because the unit is protected from wind gusts, and damage and changes to the focus construction will not occur. The research study and simulation of the reflective array (a mechanical method) show the reflective-angle movement; the distance between the reflectors and their angles are controlled by mechatronics. In a simulation using a 1 m² Fresnel lens, the solar-energy efficiency is 60.88%, under the restriction that the sunlight intensity in the tropics is 1 kW peak, from 6 am to 6 pm.
Ashizawa, K; Takahashi, C; Yanagisawa, S
1978-09-01
Longitudinal survey data of stature and body weight from age 7 to 17 were obtained for 100 boys and 100 girls during World War II. The growth rates and the average annual increments were compared with those of children born after the war. Growth attained at age 7 as a percentage of that at age 17 is larger in children of the control group, presumably as a result of an improved environment affecting the growth increment. The age at maximum velocity is six months to one year earlier for the current group of children. Although the maximum velocities for both items and sexes are nearly the same in the groups compared, the total increments are larger in the current group of children. Age, distance, and maximum velocity at adolescent growth spurt were obtained for each child. The mean values were compared according to growth patterns and growth attained at age 7. The "increasing type" growth group has the highest velocity at the greatest distance and the oldest age for stature. Children who were taller or heavier at age 7 have velocity peaks with greater distances.
Propagation of rotational Risley-prism-array-based Gaussian beams in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Chen, Feng; Ma, Haotong; Dong, Li; Ren, Ge; Qi, Bo; Tan, Yufeng
2018-03-01
Constrained by prism size, weight, and optical assembly, a rotational Risley-prism-array system is a simple but effective way to realize high-power, high-beam-quality deflected laser output. In this paper, the propagation of a rotational Risley-prism-array-based Gaussian beam array in atmospheric turbulence is studied in detail. An analytical expression for the average intensity distribution at the receiving plane is derived based on a nonparaxial ray-tracing method and the extended Huygens-Fresnel principle. Power in the diffraction-limited bucket is chosen to evaluate beam quality. The effects of deviation angle, propagation distance, and turbulence strength on beam quality are studied through quantitative simulation. The results reveal that, as the propagation distance increases in weak turbulence, the intensity distribution gradually evolves from a multiple-petal-like shape into a pattern containing one main lobe in the center with multiple side lobes. The beam quality of a rotational Risley-prism-array-based Gaussian beam array with a lower deviation angle is better than that of its counterpart with a higher deviation angle when propagating in weak and medium turbulence (i.e., Cn² < 10⁻¹³ m⁻²/³), and the beam quality of higher-deviation-angle arrays degrades faster as the turbulence strengthens. In strong turbulence, long propagation distance (i.e., z > 10 km) and deviation angle have no influence on beam quality.